On October 30, 2023, the White House announced that President Biden had issued an Executive Order regarding artificial intelligence (“AI”). The Executive Order was accompanied by a Fact Sheet summarizing the eight policy goals on AI that the White House wanted to emphasize: 1) creating new standards for AI safety and security; 2) enacting bipartisan privacy protections at the Federal level; 3) ensuring AI advances equity and civil rights; 4) ensuring consumers are benefited, and not harmed, by AI; 5) ensuring workers are protected and supported as AI develops; 6) promoting innovation and competition so that AI development can occur at large and small companies; 7) advancing American leadership in AI abroad; and 8) ensuring responsible and effective use of AI by the Federal Government. The White House previously issued a Blueprint for an AI Bill of Rights in October 2022.
The Executive Order directs executive agencies, including the Department of the Treasury and the United States Department of Housing and Urban Development (“HUD”), to undertake a variety of actions to operationalize aspects of the Executive Order’s broad policy goals. In addition, the Executive Order makes recommendations to both Federal consumer protection agencies, the Federal Trade Commission (“FTC”) and the Consumer Financial Protection Bureau (“CFPB”), to take aligned action. Because both the FTC and the CFPB are independent regulatory agencies that are not part of the Executive Branch, the White House is limited to making recommendations to them.
While most of the Executive Order deals with technology, workforce and social concerns raised by AI developments, it also contains specific directives regarding financial services. Specifically:
This summary of directives to the Department of the Treasury and HUD (and encouragements to the CFPB and the FTC) covers the portions of the Executive Order that directly impact the financial services industry, but other aspects of the Executive Order will necessarily affect financial services as well. For example, the Executive Order also seeks to address risks posed by synthetic content (e.g., the use of AI to generate deep-fake photographs, voice recordings and video recordings), instructing the Secretary of Commerce to work with other agencies to develop “science-backed standards and techniques for 1) authenticating content and tracking its provenance; 2) labeling synthetic content, such as using watermarking; 3) detecting synthetic content; . . . 4) testing software used for the above purposes; and 5) auditing and maintaining synthetic content.” Financial services fraud teams, ever vigilant regarding phishing and other fraudulent schemes that trick customers into exposing their online accounts or even sending funds from their accounts, are bound to make synthetic content an increasing point of focus.
Highlighting the risks of synthetic content generally, Vice President Kamala Harris noted in remarks that she gave at the U.S. Embassy in London regarding the future of artificial intelligence on November 1, “when people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation . . . is that not existential for democracy?” In a Fact Sheet accompanying Vice President Harris’ speech in London, the White House announced that it had secured voluntary commitments from 15 leading AI companies to develop mechanisms dealing with synthetic content, but it also recognized that all nations must “support the development and implementation of international standards to enable the public to effectively identify and trace authentic” digital content and to distinguish it from “harmful synthetic AI-generated or manipulated” content.