The Department of Homeland Security (DHS) has made strides in developing policies and frameworks to govern its AI use, including its use in biometrics, but gaps remain in ensuring AI technologies are used responsibly and ethically, the department’s Inspector General (IG) found.
“DHS issued AI-specific guidance, appointed a Chief AI Officer, and established multiple working groups and its AI Task Force to help guide the Department’s AI efforts. However, more action is needed to ensure DHS has appropriate governance for responsible and secure use of AI,” the IG said in a new audit report.
“Without appropriate, ongoing governance of its AI, DHS faces an increased risk that its AI efforts will infringe upon the safety and rights of the American people,” the IG warned.
Biometric Update previously reported that the IG had also found significant gaps in DHS’s formal implementation plan for its AI strategy that must be addressed to fully align the department with federal guidelines to mitigate potential risks to oversight and transparency. Additionally, the Trump administration’s executive order on AI underemphasizes privacy and security issues.
The IG said “DHS established an AI strategy to guide enterprise-wide AI goals and objectives, but it did not effectively execute the strategy because it did not develop an implementation plan. Further, DHS did not have adequate governance processes to monitor AI compliance with privacy and civil rights and civil liberties requirements due to resource challenges.”
One of the most pressing issues outlined in the IG’s new audit report is the lack of adequate governance processes to monitor DHS’s compliance with privacy and civil liberties requirements. The Privacy Office (PRIV) and the Office for Civil Rights and Civil Liberties (CRCL) bear the primary responsibility for ensuring that the deployment of AI technologies does not erode privacy protections or infringe upon civil liberties. Despite these mandates, both offices face significant resource challenges that hinder their ability to effectively oversee AI implementation.
The report highlights that PRIV does not have a formal process to identify, prioritize, or monitor the closure of privacy compliance reviews (PCRs). PCRs are essential for evaluating whether DHS programs comply with privacy laws and policies, particularly in the use of technologies like AI, which inherently involve the collection and analysis of personal information. Without a structured approach to conducting these reviews, the IG determined, there is a heightened risk that privacy violations could go undetected.
Moreover, the audit found that PRIV’s existing processes lack adequate controls to identify which PCRs are required or recommended by DHS program documentation. An internal review conducted in 2024 revealed several incomplete PCRs, including those related to generative AI tools. This oversight gap is alarming, given the sensitivity of the data handled by these AI systems and the potential for misuse.
In addition to these structural deficiencies, the CRCL has not yet implemented a formalized process to provide ongoing oversight of AI applications within DHS. Although the CRCL has developed a draft AI Risk Assessment Framework for Civil Rights and Civil Liberties, it remains unfinalized, leaving a critical void in the governance structure. This framework is intended to guide the evaluation of AI risks, particularly concerning discrimination, bias, and the infringement of civil liberties. The absence of a finalized framework means that DHS lacks a comprehensive mechanism to safeguard against these risks.
The use of AI technologies such as facial recognition (FR) and face capture (FC) further complicates the privacy landscape. These technologies are inherently privacy-sensitive, as they involve the biometric identification of individuals, often without their explicit consent. DHS has implemented policies to govern the use of FR and FC technologies, emphasizing the need to safeguard privacy and civil rights. However, the effectiveness of these policies is contingent on rigorous oversight and compliance monitoring, which, as the report indicates, are currently inadequate.
The report also critiques DHS’s conditional approval process for commercial generative AI tools. While the department has established a framework to evaluate these tools’ accuracy, security, and privacy implications, the oversight mechanisms are insufficient. All DHS personnel are required to sign Rules of Behavior agreements and complete training on the responsible use of AI tools. Despite these measures, the report suggests that more robust governance is needed to ensure that these tools are used ethically and that privacy risks are effectively mitigated.
Another significant concern is DHS’s failure to ensure comprehensive reporting of AI use cases and associated data. Federal requirements mandate that DHS report all eligible AI use cases to other government agencies and the public, providing transparency and accountability. However, the report found that DHS’s reporting processes are incomplete, lacking critical data and evidence to demonstrate compliance with federal standards. This deficiency undermines public trust and raises questions about the department’s commitment to ethical AI use.
“DHS developed processes to track and report its use of AI to the public, as required, but the processes did not identify and track some of the data needed to report the Department’s AI use cases. DHS also had limited evidence to demonstrate why it considered its AI use consistent with Federal requirements, as DHS and its components did not have a formalized process to identify, review, and validate data included in the Department’s mandated AI reporting,” the IG said.
The IG audit makes several recommendations to address these privacy and governance issues. Key among them is the call for the DHS Artificial Intelligence Task Force to evaluate and update the department’s AI strategy. Additionally, the Artificial Intelligence Policy Working Group is urged to develop a comprehensive AI Risk Management Framework and update existing policies to reflect the unique challenges posed by AI technologies.
To strengthen privacy oversight, the report recommends that the DHS Privacy Office enhance its PCR process, including formal tracking of required reviews and developing a methodology to prioritize discretionary PCRs. Furthermore, the CRCL is advised to finalize its AI Risk Assessment Framework for Civil Rights and Civil Liberties and ensure adequate resources are allocated for ongoing oversight activities.
The IG’s findings underscore the critical need for DHS to fortify its privacy governance structures as it continues to integrate AI into its operations. Without comprehensive oversight, the IG said, there is a substantial risk that AI technologies could be used in ways that compromise individual privacy, civil liberties, and public trust.