Curious about how financial institutions, their service providers, and their federal regulators are using and overseeing machine learning and other AI tools?
A new GAO report[1] published on May 19th (the “Report”) provides a unique glimpse into AI use cases and risk mitigants being employed in the financial services industry, based in part on interviews with representatives from a variety of constituents, including financial institutions, trade associations, consumer advocates, AI providers, and federal financial regulators.[2]
The Report also incorporates findings from governmental, nongovernmental, and academic studies and reports, as well as applicable law and interpretive guidance, to assess (i) the potential benefits and risks of using AI in financial services, (ii) how AI use by financial institutions is overseen by federal financial regulators, and (iii) the use of AI by the regulators themselves in conducting supervisory and market oversight activities.
Insights Regarding Current AI Use by Banks and Credit Unions
The Report shares a number of specific ways in which banks and credit unions are already using AI, particularly machine learning, to enhance their internal operations and to improve and expand the services provided to their customers. The Report also discusses the more limited adoption of generative AI (“GenAI”) by certain financial institutions.
For example, GenAI use cases have included:
- Employee Questions Chatbot. An unnamed regional financial institution is piloting a GenAI chatbot for internal use to answer employee questions about policies and procedures.
- Research and Writing Assistance. An unnamed large financial institution is piloting GenAI tools to assist employees with writing code, creating customer interaction summaries, searching legal documents, and conducting market research.[3]
Other AI use cases with external impact included:
- Credit Underwriting. According to an unidentified AI provider, “credit unions that implemented its AI model reported a 40 percent increase in credit approvals for women and people of color.”[4] In addition, an AI provider informed the GAO that AI-powered credit underwriting enabled credit unions to provide applicants with faster credit decisions.
- Customer Service. An unidentified credit union is personalizing its customer service operation by using AI to recommend frequent tasks, such as transferring funds, to members who interact with the credit union’s chatbot. With respect to call centers, a banking association indicated that some banks are using customer call center conversations to train GenAI models to enhance customer support.
The Report further documents reported cost-saving benefits for financial institutions from adopting AI tools. An AI provider indicated that its AI model “reduced the time and resources needed for financial institutions to make credit decisions by up to 67 percent.”[5] The GAO also cited a study that concluded “chatbots saved approximately $0.70 per customer interaction compared with human agents.”[6] However, trade associations cautioned that the costs associated with developing and/or acquiring AI may put some tools out of reach for smaller institutions.
Financial Institutions Remain Cautious About Some AI Use Cases
While the Report highlights multiple significant benefits of adopting AI, interviewees also flagged a number of concerns:
- Hallucinations. An unnamed large bank suggested that GenAI hallucinations are a critical reason banks may avoid using GenAI for purposes that require a high degree of accuracy, including credit underwriting and risk management. However, according to the same bank, there are methods to reduce hallucination risk, including using a second GenAI model to verify the first model’s output and limiting the number of sources used to train the model.
- Bias. An unidentified consumer advocate expressed concern that AI “could steer borrowers, including those in protected classes, toward inferior or costlier credit products,”[7] although it did not appear from the Report that the advocate provided any actual examples of this occurring today. The GAO also cited Congressional testimony by an AI researcher that some models may be able to infer a loan applicant’s race or gender from application data, or otherwise cause variables to interact in a manner that has a disproportionate negative impact on protected groups.
- Privacy. To mitigate concerns that AI use could result in disclosure of customer data, an unnamed financial institution said it has restricted its employees’ ability to access and use publicly available GenAI tools.
- Conflict of Interest. The GAO noted that AI models could potentially create conflicts of interest by, for example, prioritizing higher profits for financial advisors at the expense of investors. The Report cited an unidentified consumer advocate’s concern “that it may not be apparent when AI provides conflicted advice” that is not in the best interest of the client.[8]
Ultimately, representatives of financial institutions told the GAO that they have been “more cautious about adopting AI for activities where a high degree of reliability or explainability is important or where they are unsure how regulations would apply to a particular use of AI.”[9]
Financial Regulators Are Prioritizing AI Oversight and Adoption
Federal regulators told the GAO that existing laws, regulations, and guidance are applicable to financial institutions regardless of AI use – including model risk management guidance and third-party risk guidance issued by several of the prudential banking authorities.[10] The FDIC, Federal Reserve, OCC, and NCUA also said that examination of AI usage “would typically be reviewed as part of broader examinations of safety and soundness, information technology, or compliance with applicable laws and regulations.”[11] However, the NCUA flagged its relatively limited statutory authority to examine third-party technology service providers that provide services for credit unions, prompting the GAO to recommend that Congress consider addressing this gap through legislative action.
The Report found that some banking regulators have already been exercising supervisory oversight with respect to financial institutions’ use of AI. For example, the Federal Reserve, OCC, and CFPB said they have conducted multiple reviews of financial institutions focused on AI use, including the OCC’s “review of seven unnamed large banks from 2019 to 2023.”[12] According to the Report, the OCC’s review concluded that these banks’ AI risk management practices were largely satisfactory, though their risk ratings for models did not explicitly capture AI-specific risk factors, and only limited information was available regarding efforts to mitigate AI-related bias. While the Report did not specify what those AI-specific risk factors might be, they could potentially include emergent risks associated with GenAI use, such as direct prompting attacks, backdoor poisoning attacks, jailbreaking, and training data extraction. The Report also stated that one of the CFPB’s reviews prompted the agency’s issuance of guidance regarding chatbots in 2023.
Finally, federal financial regulators themselves have been adopting AI to improve the efficiency and comprehensiveness of their oversight of regulated entities.
While representatives of each regulatory agency interviewed said they are not presently using GenAI for supervisory or market oversight activities, some are considering doing so in the future. For example, the OCC indicated it intends to use GenAI to help examiners “identify relevant information in supervisory guidance and assess risk identification and monitoring in bank documents.”[13] The Federal Reserve is also exploring potential uses of GenAI in supervisory activities.
With respect to AI governance and planning, multiple regulators cited AI strategy documents they have either already developed or are currently developing. In addition, some regulators, including the Federal Reserve and NCUA, have adopted AI-specific policies; the OCC and SEC are in the process of following suit. That said, the regulators indicated they are not using AI to make autonomous decisions or as a sole source of input for supervision.
Key Takeaways from the Report
In summary, the GAO report provides insights into AI adoption in the banking and financial services space, including GenAI adoption, and into whether, in the GAO’s opinion, banking regulators are adequately reviewing and providing guidance regarding AI use and AI risk management.
- Financial institutions have begun adopting new AI tools, including GenAI capabilities to a more limited extent, to enhance their internal and external-facing operations – though they remain concerned about accuracy, privacy, bias, and other regulatory risks.
- Banking regulators are overseeing AI use by financial institutions through existing laws, regulations, guidance, and frameworks such as model risk management, and have taken steps to focus on AI in some supervisory activities. However, the GAO believes that the NCUA’s existing model risk management guidelines do not adequately cover AI and provide less guidance than those of other banking regulators, and that the NCUA is also hampered by its lack of authority to directly examine technology service providers.
- Regulators are adopting AI tools to improve their own efficiency and accuracy – though their approaches to AI adoption, and their comfort with new use cases, vary from agency to agency.
[1] GAO, Artificial Intelligence: Use and Oversight in Financial Services, GAO-25-107197 (May 2025) (the “Report”).
[2] The regulatory agencies interviewed were the Board of Governors of the Federal Reserve System (“Federal Reserve”), the Office of the Comptroller of the Currency (“OCC”), the Federal Deposit Insurance Corporation (“FDIC”), the National Credit Union Administration (“NCUA”), the Securities and Exchange Commission (“SEC”), the Consumer Financial Protection Bureau (“CFPB”), and the Commodity Futures Trading Commission (“CFTC”).
[3] Id. at 9.
[4] Report, supra Note 1, at 10.
[5] Id. at 11.
[6] Id. at 12.
[7] Id. at 13.
[8] Id. at 14.
[9] Id. at 9.
[10] The Report noted, however, that several industry groups and financial institutions suggested that regulators should clarify AI-related guidance, including with respect to explainability expectations and adverse action notice requirements.
[11] Report, supra Note 1, at 21.
[12] Id. at 23.
[13] Id. at 35.