January 24, 2022


Associate Deputy Attorney General Sujit Raman Delivers Remarks at the Community Oriented Policing Services (COPS)/Police Executive Research Forum (PERF) Facial Recognition Technology Forum


“Five Principles That Inform the Justice Department’s Use of Facial Recognition Technology”

Remarks as Prepared for Delivery

As the Nation’s primary federal law enforcement agency, the U.S. Department of Justice enforces and defends the laws of the United States; protects public safety against foreign and domestic threats; and provides national and international leadership in preventing and investigating crime.  Technological innovation has created new opportunities for our law enforcement officers to effectively and efficiently tackle these important missions.  At the same time, such innovation poses new challenges for ensuring that technology is used in a manner consistent with our laws and our values—and equally important, with the support and trust of the American people.

Facial Recognition Technology (FRT) is at the forefront of these developments.  For law enforcement, facial recognition promises a number of benefits, many of which — like providing leads for crime-solving, and reuniting missing children with their families — have been discussed throughout this forum.  When Facial Recognition Technology is used according to professional standards and within appropriate limitations, it can enable law enforcement to react quickly and effectively to emerging threats, and to conduct more efficient investigations.  It can also have hugely impactful health, business, and commercial applications, though I will not focus on those non-law enforcement uses today.

And yet, it is undisputed that certain applications of FRT raise legitimate questions about the extent to which the use of new technology can remain consistent with our society’s commitment to privacy, civil rights, and civil liberties.

I would like to use my time with you to discuss how the U.S. Department of Justice, particularly the FBI, uses Facial Recognition Technology.  While the details are important, what really matter are the underlying principles that inform our use of this technology.  The Justice Department began funding face detection and recognition research nearly a quarter-century ago.[1]  We remain committed to the federal government’s broader investment in building better algorithms, especially by capitalizing on the rapid advance of machine learning tools in the past few years.  But even as we support the cutting edge of research — and even as we embrace new capabilities that will assist us in fulfilling our duty to protect the American people — our use of Facial Recognition Technology remains fundamentally conservative.  The FBI does not, for example, use FRT for real-time identification or surveillance.  Neither does it use this technology as a means of positive identification without corroborating evidence.  A trained human being is always in the loop; the FBI uses the technology to produce investigative leads, but nothing more.

The key is trust.  And while it might seem counterintuitive in connection with dazzling, fast-moving technologies like FRT, trust often finds a refuge in the gray, bureaucratic prose of privacy impact assessments, training manuals, work logs, and compliance audits — that is, in the everyday grammar of the accountability and transparency structures that we, as a free people, demand of our government in order to preserve our liberty, ensure we are treated equitably, and promote the rule of law.  It is those structures, and the principles that give them the spark of life, that I will discuss with you today.


Facial Recognition Technology raises a number of novel questions implicating public safety, individual privacy, and technological change.  But this is not the first time our society has grappled with these types of questions.  Our Nation’s history is replete with examples where our government, our courts, and our civil society have explored how our expectations of privacy evolve along with technological innovation.

Over the past fifty years in particular, our courts have confronted a number of cases involving law enforcement’s novel use of technology to advance criminal investigations.[2]

Of course, before many of these questions came before the courts, they had been debated in State and local lawmaking bodies, as well as in the U.S. Congress.  FRT is no different.  Some State and local lawmakers already have taken steps to begin addressing this issue, and several bills have been introduced in the U.S. Congress.  Thus, while the use of Facial Recognition Technology may eventually come before the courts as a constitutional question, as citizens in a self-governing society we have an important duty to confront and to debate the complex questions that this technology raises, in the first instance.

Thanks to facial recognition, tasks that would take countless hours — like combing through large databases of photos, biometric data, or other personally identifiable information already in the government’s lawful possession — can be accomplished in a fraction of that time, at much lower cost.  It goes without saying how such advancements can assist law enforcement.  At the same time, fears of inaccuracy, the potential for unintended performance differentials reflecting gender, ethnicity, or racial characteristics, and concerns about mass surveillance have led to many calls for strict regulatory action.  State and local governments across the country are in the process of determining for themselves how best to address law enforcement use of facial recognition.  Some governmental entities, including San Francisco, Boston, Oakland, and — just last week — Portland, have gone so far as to ban local agencies’ use of the technology.

The use (or, more accurately, the misuse) of Facial Recognition Technology in other parts of the world has no doubt exacerbated concerns here in the United States.  Numerous public reports suggest, for instance, that the Chinese Communist Party takes advantage of facial recognition to assist in its oppression of the Chinese people.  Chinese officials use facial recognition in public spaces to identify and track dissidents, activists, and other individuals who are of political interest to the regime.  The Chinese government is also reportedly using FRT to suppress minorities, such as the Uyghur and Tibetan populations, in the name of “national security.”  Moreover, the use of Facial Recognition Technology by authoritarian nations like China and Russia to enforce quarantines shows how these governments justify dragnet surveillance even outside of the purported “law enforcement” and “national security” contexts, and raises important questions about the persistence of such all-encompassing surveillance measures once the pandemic has subsided.

Europe is in the midst of an intense debate about these issues.  Recently, for example, the data protection authority in Sweden fined a school for deploying Facial Recognition Technology to streamline students’ access to school facilities and monitor their attendance.  According to the data protection authority, under European privacy law, students lacked the capacity to consent to such monitoring.  Shortly thereafter, however, the same data protection authority approved the police’s use of FRT to identify criminal suspects.  In France, a court similarly ruled against schools’ use of facial recognition technology — rendering this judgment against a background where the French government had announced that FRT would be the only way that its citizens could enroll in a mandatory national digital identity program to securely access public services.  (That program is currently on hold.)  Meanwhile, reports suggest that Germany is considering using live, automated FRT in train stations and airports throughout the country for security purposes, even as, just last month, the Court of Appeal of England and Wales struck down a local police force’s use of that technology, which the force had openly deployed around fifty times over two years at a variety of large public events.[3]  The court’s judgment focused mostly on process errors, however, which the police force has stated it plans to fix.

Some in the European Union have called for a total ban on facial recognition technology, while others have advocated for its substantially increased deployment, in both commercial and public safety applications.  Perhaps reflecting the deep and unresolved complexities of the public policy debate concerning this technology, the European Commission’s recently published White Paper on Artificial Intelligence makes virtually no mention of Facial Recognition Technology at all, apart from calling for “a broad European debate on the specific circumstances, if any, which might justify [its] use, and on common safeguards”[4] — a remarkable turnabout from an initial draft of the white paper, which had proposed an outright ban on the technology for three to five years.

In India, the Supreme Court has struck down portions of the Aadhaar national identity system, which incorporates facial recognition among numerous other biometric ID technologies.  The court upheld the mandatory use of the system for tax purposes, but prohibited its mandatory use by commercial entities in certain contexts where privacy and security risks were involved.  Meanwhile, a widely publicized press report from 2018 stated that the Delhi Police traced nearly 3,000 missing children in the span of a mere four days, thanks to a trial use of FRT.[5]

These are just a few examples of the varied discussions going on around the globe, and around our Nation, regarding the proper role of Facial Recognition Technology in society.  The Department of Justice recognizes the value of these discussions, and we understand the significant concerns giving rise to them.

We also strongly believe that adopting outright bans or moratoriums on the use of FRT by law enforcement in the United States is not a useful approach.  Not only do such bans deprive the American public of the clear benefits of this technology in the short term, but they also disrupt the development of better and safer ways for facial recognition to be developed, tested, and deployed in the future.  It is a false dichotomy to think we have to choose between embracing this emerging technology and abandoning our moral compass.  To the contrary, “[w]e can advance emerging technology in a way that reflects our values of freedom, human rights, and respect for human dignity.”[6]  The United States should be a leader in FRT and the issues surrounding it precisely so that we can help establish the norms and standards that will shape this technology in the decades ahead.  Otherwise, nations and entities that share neither our values nor our constraints will happily, and aggressively, fill the void.

But this is not to say that we should blindly move forward with widespread adoption of Facial Recognition Technology without proper consideration for the risks involved.  A careful, incremental approach that appropriately balances costs and benefits is the best way forward.


Our government already has embarked on this path.  Consistent with the President’s Executive Order titled Maintaining American Leadership in Artificial Intelligence,[7] and the Office of Management and Budget’s proposed, first-of-their-kind principles regarding the regulation and oversight of Artificial Intelligence (AI) applications developed and deployed outside of the federal government,[8] our Nation’s strategy for achieving leadership in AI technologies can be realized only by ensuring public engagement, limiting regulatory overreach, and promoting trustworthy technology.

At the Department of Justice, five core principles guide our approach to Facial Recognition Technology.  These non-exhaustive principles help ensure that we appropriately and responsibly implement FRT specifically — and AI technologies more broadly — consistent with our obligations to protect privacy, civil rights, and civil liberties.

First, the Department will develop and use Facial Recognition Technology only pursuant to, and in accordance with, constitutional protections, applicable federal laws, and Department policy.  When considering the U.S. government’s use of facial recognition, it is important to note the significant requirements imposed by existing laws and policies, particularly with regard to the protection of individual privacy.  These laws and policies set us apart from other entities that are struggling with the implementation of FRT, and are an essential part of any analysis of the costs and benefits of its use by federal law enforcement.

For instance, the Privacy Act of 1974, as amended,[9] regulates the collection, use, maintenance, and dissemination of personal information by federal executive branch agencies.  “Broadly stated, the purpose of the Privacy Act is to balance the government’s need to maintain information about individuals with the rights of individuals to be protected against unwarranted invasions of their privacy stemming from federal agencies’ collection, maintenance, use, and disclosure of personal information about them.”[10]  The Privacy Act ensures transparency by requiring federal agencies to describe to the public how they secure and maintain lawfully collected biometric images (like palm prints, facial images, and iris images) for criminal, civil, and/or national security purposes.

In addition, Section 208 of the E-Government Act of 2002 requires federal agencies to conduct Privacy Impact Assessments that describe the risks and benefits of information technologies, and detail how the agencies appropriately mitigate privacy risks when using them. 

The FBI’s Facial Analysis, Comparison, and Evaluation (FACE) Services’ use of Facial Recognition Technology provides a useful example of how existing laws and policies can help strike the right balance.[11]  FACE Services employees support FBI investigators “by comparing the facial images of persons associated with open assessments and investigations against facial images available in [S]tate and federal face recognition systems.”[12]  This point bears repeating: the FBI investigator can provide FACE Services a photograph (called a “probe” photo) only of persons who are subjects of, or are relevant to, an already-open “assessment,” “preliminary investigation,” or “full investigation,” as those terms have long been defined in the publicly-available Attorney General’s Guidelines for Domestic FBI Operations.[13]  Moreover, the probe photos themselves must have been collected “pursuant to applicable legal authorities as part of an authorized [FBI] investigation.”[14]  For instance, FBI policy prohibits the submission of photos of individuals exercising rights guaranteed by the First Amendment (like lawful assembly or free exercise of religion), unless those actions are pertinent to, and within the scope of, authorized law enforcement activity.

Upon receipt of the probe photo, FACE Services employees “use[ ] face recognition software to compare the probe photo against photos contained within government systems, such as FBI databases . . . , other federal databases . . . , and [S]tate photo repositories,”[15] under the terms of an applicable Memorandum of Understanding (MOU) with each State or federal agency.  The FBI does not have direct access to these photo repositories, and a written MOU or other type of agreement must be in place with the relevant federal or State agency (such as a Division of Motor Vehicles) prior to requesting a search.

After comparison and evaluation, which includes “both automated face recognition software and manual review by a trained biometric images specialist,” FACE Services may identify photos that are likely matches to the probe photo.[16]  (In many cases, there will not be any likely matches.)  The “likely match” photos are called “candidate photos.”[17]  Candidate photos serve only as investigative leads; unlike fingerprints, the FBI’s face recognition results do not constitute positive identification of an individual.

The candidate photos are then sent to the FBI investigator, who is prohibited from relying solely upon them to conduct law enforcement action.  Instead, he or she must perform additional investigation to determine if the person in the candidate photo is the same person as in the probe photo.

All of these protections, along with advanced training requirements and robust auditing capabilities, enhance the accountability of the system.[18]  This, in turn, increases the likelihood of success stories—of which there are many.  I can publicly discuss one notable example.  In 2017, the FBI identified and arrested an MS-13 gang member and murderer who had evaded authorities for over six years, thereby earning a place on the Ten Most Wanted Fugitives list.  FACE Services played a critical role in helping agents track the fugitive killer down, as photos that he and a woman he was associated with had posted on social media yielded matches that, in turn, provided investigators with a physical location to surveil.  There, agents confronted the killer and apprehended him without incident.  Last year, he pled guilty to his crimes, and was sentenced to 25 years in federal prison.[19]

As with any other investigatory technique, the FBI’s use of Facial Recognition Technology during the course of an investigation must have a valid purpose consistent with The Attorney General’s Guidelines, and must comply with the U.S. Constitution, and with all applicable statutes, executive orders, and Department of Justice regulations and policies.  Moreover, the information technologies used by FACE Services are properly documented in publicly-available Privacy Impact Assessments, which spell out in considerable detail the privacy risks associated with FACE Services’ use of FRT, and describe the practices and controls the FBI has implemented in order to mitigate those risks so the public can benefit from the technology’s use.[20]

It bears emphasizing again that federal law enforcement will not use Facial Recognition Technology to unlawfully monitor people for their political views, or based solely on a person’s exercise of First Amendment rights.  This is expressly prohibited by the Privacy Act and by the FBI’s internal policies, as well as by a number of other laws governing the systems of records created by federal agencies.

Finally, we regularly test, evaluate, and improve the relevant policies and procedures as technology continues to evolve, and as new use cases emerge.  Notably, an audit revealed that, through December 2018, FACE Services employees had performed nearly 400,000 searches on a variety of databases.[21]  Each of these searches was made “in support of active FBI investigations,” with “no findings of civil liberties violations or evidence of system misuse.”[22]

Second, in making its Facial Recognition Technology resources available to other law enforcement agencies, the Department will insist those agencies use these resources at a similarly high standard, with appropriate safeguards.  The Department of Justice sets a high bar in its use of FRT, and we share a large number of resources in common with our law enforcement partners around the Nation.  In making our resources available to other agencies, we will require those agencies to use these resources at a similarly high level of responsibility and accountability.

The FBI’s management of its Next Generation Identification (NGI) Interstate Photo System (IPS) is a prime example.  The NGI System “serves as the FBI’s biometric identity and criminal history records system and maintains the fingerprints and associated identity information of individuals submitted to the FBI for authorized criminal justice, national security, and civil purposes.”[23]  This System features a capability by which over 43 million photos are available for facial recognition searching by law enforcement agencies around the Nation.  State, local, tribal, and federal law enforcement can submit and enroll photos of arrestees, based upon probable cause and supported by ten-print fingerprints, into the NGI IPS.[24]  These agencies can access the facial recognition search capability that the FBI provides, thereby leveraging the cutting-edge algorithm that the FBI employs — but only if they comply with policy regarding use of the system that the FBI requires of its own employees.[25]  This includes the requirement that candidate photos serve only as investigative leads, and not as a means of positive identification, as well as the rule that the FBI does not retain any of the probe photos that are searched against the NGI IPS, to ensure that “only those photos collected pursuant to a probable cause standard and positively associated with ten-print fingerprints would be available for searching.”[26]

In addition, the FBI imposes training requirements in line with national scientific guidelines before users can conduct searches on the System, and requires jurisdictions to meet rigorous technical standards before they can access it.  To date, fifteen States, the District of Columbia, and two federal agencies have the technical capability to conduct facial recognition searches on the NGI IPS.[27]

All federal law enforcement agencies, including those outside of the Department of Justice, are authorized to enroll and search photos in the NGI IPS for legally authorized purposes.  Currently, only two federal entities perform facial recognition searches on the System.  Those agencies are the FBI’s FACE Services and the U.S. Department of Homeland Security’s Customs and Border Protection (CBP) National Targeting Center (NTC).[28]  The NTC accesses the NGI IPS to conduct facial recognition searches using its screening rules to determine on an individualized basis which travelers are reasonably suspected to pose a risk to border security or public safety; who may be a terrorist or suspected terrorist; who may be inadmissible to the United States; or who may otherwise be engaged in illegal activity under federal criminal law.[29]  “As with all [NGI IPS] users, the candidate photos returned to the NTC are for lead purposes only, cannot be used for positive identification, and the NTC must perform additional research to resolve the identities of the subjects before taking any action.”[30] 

Overall, like FACE Services’ use of FRT, law enforcement use of the NGI IPS System should give the American people confidence.  An audit revealed that, from fiscal year 2017 through April 2019, authorized law enforcement users made over 150,000 facial recognition search requests of the NGI IPS repository.  “During that time, there [were] no findings of civil liberties violations or evidence of system misuse.”[31]

Third, the Department will ensure that Facial Recognition Technology is developed and used in a manner that minimizes inaccuracy and unfair biases.  Improper discrimination has no place in our society, let alone in our law enforcement function.  It is unlikely we can achieve perfection in any endeavor in which imperfect human beings play a role.  Perfection is probably unattainable even for machines; after all, “[f]acial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”[32]  But we must do everything in our power to employ mitigation techniques so that any errors or demographic differentials are minimized and addressed.  In the FRT context, this effort is already well underway throughout the U.S. government.  The National Institute of Standards and Technology (NIST), for instance, has developed a number of reports from its Face Recognition Vendor Test Program[33] that focus on issues like the accuracy of vendor-tested facial recognition models.  Notably, NIST’s work covers demographic differentials.  Its most recent report on this topic, published late last year, states:

Contemporary face recognition algorithms exhibit demographic differentials of various magnitudes . . . [F]alse positive differentials are much larger than those related to false negatives and exist broadly, across many, but not all, algorithms tested . . . Operational implementations usually employ a single face recognition algorithm.  Given algorithm-specific variation, it is incumbent upon the system owner to know their algorithm . . . Since different algorithms perform better or worse in processing images of individuals in various demographics, policy makers, face recognition system developers, and end users should be aware of these differences and use them to make decisions and to improve future performance . . .  Reporting of demographic effects often has been incomplete in academic papers and in media coverage.  In particular, accuracy is discussed without stating the quantity of interest be it false negatives, false positives or failure to enroll.  As most systems are configured with a fixed threshold, it is necessary to report both false negative and false positive rates for each demographic group at that threshold.  This is rarely done—most reports are concerned only with false negatives.[34]

We have a moral obligation to study demographic differentials in connection with Facial Recognition Technology.  We must approach this issue honestly, and with due regard for evidence-based reviews.  We in the federal government will continue to ensure that demographic differentials and inaccuracies are constantly being tested, quantified, and mitigated using an evidence-based approach.  In fact, it is precisely that approach that led to the progress reflected in the 2018 NIST Face Recognition Vendor Test, in which top algorithms experienced a failure (i.e., a false positive or a false negative match) on NIST-provided data inputs only 0.2% of the time, compared with a 4% failure rate in 2014 — an improvement by a factor of 20 in only four years.[35]  Through its partnership with NIST, the FBI in 2019 upgraded its NGI IPS algorithm; the selected vendor’s facial recognition algorithm performs at an accuracy rate that exceeds 99%.[36]  Going forward, the FBI plans, in collaboration with NIST, to test its NGI IPS facial recognition technology annually.  Much work remains to be done.  But the remarkable progress we have seen in the accuracy of FRT in just a few years is cause for optimism.
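As an illustration of the reporting practice NIST recommends — publishing both false negative and false positive rates for each demographic group at the system’s fixed match threshold — the following is a minimal, hypothetical sketch (not FBI or NIST code; the data format and threshold are assumptions for illustration only):

```python
# Hypothetical sketch: per-group error rates at a single fixed match
# threshold, as NIST's demographic-effects report recommends. Each trial
# is (group, similarity_score, is_same_person); a "genuine" pair depicts
# the same person, an "impostor" pair depicts different people.
from collections import defaultdict

def per_group_error_rates(trials, threshold=0.8):
    stats = defaultdict(lambda: {"fn": 0, "genuine": 0, "fp": 0, "impostor": 0})
    for group, score, same_person in trials:
        s = stats[group]
        if same_person:
            s["genuine"] += 1
            if score < threshold:   # genuine pair rejected: false negative
                s["fn"] += 1
        else:
            s["impostor"] += 1
            if score >= threshold:  # impostor pair accepted: false positive
                s["fp"] += 1
    return {
        g: {
            "false_negative_rate": s["fn"] / s["genuine"] if s["genuine"] else 0.0,
            "false_positive_rate": s["fp"] / s["impostor"] if s["impostor"] else 0.0,
        }
        for g, s in stats.items()
    }
```

Because both rates are computed at the same threshold the deployed system actually uses, a reader can see whether a group with a low false negative rate is paying for it with a high false positive rate — exactly the incomplete-reporting gap the NIST passage describes.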

Fourth, the Department will continue to ensure human involvement in areas where technology is used in a manner that impacts fundamental rights and civil liberties.  As I’ve said, our view is that Facial Recognition Technology assists, but does not replace, the work conducted by our law enforcement investigators and national security personnel.  Accordingly, we believe that a human being should be involved before any actions are taken that could deprive a person of his or her civil rights or civil liberties, based on any outputs produced by facial recognition (or any similarly advanced) technology.  Of course, the fact that a human is involved is not enough; that person needs to be properly trained, and needs to function within a broader institutional culture where the state serves its citizens rather than the other way around.  But human involvement is a prerequisite to any law enforcement or national security use of FRT that claims to advance the cause of freedom.

Finally, the Department will prioritize the security and quality of the data it uses in connection with Facial Recognition Technology.  The data involved in FRT, especially in the law enforcement context, can be highly sensitive.  The FBI imposes strict access requirements to the relevant databases, and complies with all applicable laws and regulations concerning the storage and retention of this data.  In addition, it provides a secure transport mechanism for all the criminal history record information and biometric-related information it handles.  “Transmission hardware for [the relevant telecommunications infrastructure] is configured by FBI personnel; transmission data to and from [the FBI] is encrypted; and firewalls are mandated and in place.”[37]  The Department will prioritize securing its face image (and associated) data from unauthorized access.  State and local law enforcement partners, as well as commercial firms, should do the same.  The American people, too, should be cautious about the products they use, and with whom they share their biometric data.  Any mobile application—even a free, “fun,” seemingly harmless one that entertains you—developed in a nation that does not share our rule of law values could be a potential counterintelligence threat, based on the data that the app collects, its privacy and terms of use policies, and the legal mechanisms available to the host nation to access data within its borders.  As Americans, we must never forget that our critical AI technologies—as well as our citizens’ personal data—are under constant attack from strategic competitors, adversarial nations, and malicious non-state cyber actors.


Thank you for participating in this important discussion on the future of law enforcement’s use of Facial Recognition Technology.  There are many complex and important questions that need to be resolved as technology continues rapidly to advance around us.  At the U.S. Department of Justice, we firmly believe the best way to answer these questions is through principled action; through an honest evaluation (and constant re-evaluation) of benefits and costs; and through active engagement with impacted stakeholders.  Only then can we ensure that the manner in which the government uses technology best serves the American people.


[2] For representative U.S. Supreme Court cases, see, e.g., Katz v. United States, 389 U.S. 347 (1967) (use of electronic listening device to monitor private telephone conversations); Smith v. Maryland, 442 U.S. 735 (1979) (installation and use of pen register); United States v. Knotts, 460 U.S. 276 (1983) (monitoring of electronic beeper on public roads); United States v. Karo, 468 U.S. 705 (1984) (monitoring of electronic beeper within private residence); California v. Ciraolo, 476 U.S. 207 (1986) (aerial surveillance of private home and backyard); Kyllo v. United States, 533 U.S. 27 (2001) (use of thermal imaging device not in general use to explore interior details of home); Maryland v. King, 569 U.S. 435 (2013) (DNA swab of arrestee’s cheek for identification purposes); Riley v. California, 573 U.S. 373 (2014) (search of arrestee’s cell phone); Carpenter v. United States, 585 U.S. ___ (2018) (collection of historical cell-site location information).

[9] 5 U.S.C. § 552a (2018).

[11] The FBI’s FACE Services are located within the Investigative Services Support Unit of the Criminal Justice Information Services (CJIS) Division’s Biometric Services Section.  

[14] FACE Services PIA, supra note 12.

[15] Id.  Federal photo repositories include “the criminal mugshots in the FBI’s Next Generation Identification (NGI) system, the visa and passport photos maintained by the Department of State (DOS), and photos in the Department of Defense’s biometric system.  State photo repositories include drivers’ licenses, identification cards, and criminal photos maintained in Departments of Motor Vehicles (DMV) and similar [S]tate agencies.”  Federal Bureau of Investigation, “Privacy Impact Assessment for the Facial Analysis, Comparison, and Evaluation (FACE) Phase II System” [hereinafter “FACE Services Phase II PIA”], at 2, July 9, 2018, available at: https://www.fbi.gov/file-repository/pia-face-phase-2-system.pdf/view (last accessed September 12, 2020).

[16] FACE Services PIA, supra note 12.

[18] For example, FACE Services securely maintains a manual work log that contains each FRT search request, “which generally include[s] the name of the requesting FBI agent/analyst, the case number, and some biographic information related to the subject of the probe photo, such as name and date of birth.”  FACE Services Phase II PIA, supra note 15, at 2.  While the work log “documents the details of all work transactions,” it retains only the probe photo and limited biographic information about the relevant subjects.  Id.

[19] See Ryan Lucas, “How A Tip—And Facial Recognition Technology—Helped The FBI Catch A Killer,” NPR All Things Considered, Aug. 21, 2019, available at: https://www.npr.org/2019/08/21/752484720/how-a-tip-and-facial-recognition-technology-helped-the-fbi-catch-a-killer (last accessed September 12, 2020); Press Release, U.S. Attorney’s Office, District of New Jersey, “MS-13 Member Apprehended after Being Placed on FBI’s 10 Most-Wanted Fugitives List Sentenced to 25 Years in Prison,” July 31, 2019, available at: https://www.justice.gov/usao-nj/pr/ms-13-member-apprehended-after-being-placed-fbi-s-10-most-wanted-fugitives-list-sentenced (last accessed September 12, 2020).

[20] See generally FACE Services PIA, supra note 12; FACE Services Phase II PIA, supra note 15.

[28] The Department of Homeland Security’s use of Facial Recognition Technology falls generally outside the scope of these remarks, but CBP’s and TSA’s use of FRT is detailed in a recent report published by the U.S. Government Accountability Office.  See U.S. Government Accountability Office, “Facial Recognition: CBP and TSA are Taking Steps to Implement Programs, but CBP Should Address Privacy and System Performance Issues,” Sept. 2020, available at: https://www.gao.gov/assets/710/709107.pdf (last accessed September 12, 2020).

[29] NGI IPS PIA, supra note 23, at 3.

[31] Del Greco Statement, supra note 21.

[35] See “History of NIJ Support for Face Recognition Technology,” supra note 1.

[36] Del Greco Statement, supra note 21.

[37] NGI IPS PIA, supra note 23, at 13.
