Kieran Garvey (CCAF/Fii), Bryan Zhang (CCAF), Innes Roberts (CCAF), Farah Abdou (CCAF), Hamidu Midlah Barrie (CCAF), Douglas Arner (CCAF), Benedicte Nolens (BISIH), Jon Frost (BIS), Miguel Segoviano (IMF), Drew Propson (World Economic Forum), Felipe Ferri de Camargo Paes (CCAF), Krishnamurthy Suresh (Fii), Alexander Apostolides (CCAF), Waleed Nadeem (CCAF), Ashis Mittal (CCAF), Lucas Elletore (CCAF), Marcos Proença (CCAF), Lukas Ryll (CCAF), Eric Duflos (CGAP), Haocong Ren (CGAP), Diego Herrera (IDB), Keith Bear (CCAF), Yue Wu (CCAF), Stanley Mutinda (CCAF), Senanu Dekpo-Adza (CCAF), Shashank Dubey (CCAF), Andres Lehmets (CCAF), Patricia Evite (UNINA), Caroline Malcolm (Fii), Hugo Coelho (Fii), Nadia Hazeveld (CCAF, Fii), Richard Kithuka (CCAF), Nolwazi Hlophe (FSCA), Roman Rampa, Jack Lee (BISIH), Vatsala Shreeti (BIS), Kumar Rishabh (BIS), Gian Boeddu (World Bank), Henrique Chitman (IDB), Ana María Zárate Moreno (IDB), Camila Quevedo Vega (CGAP), Arisha Salman (CGAP), Nouran Youssef (AMF), Karim Mouaffak (AMF) and Bretchen Hoskins (CMF).
The 2026 Global AI in Financial Services Report provides a unique contribution to our collective understanding of the adoption and impact of AI in the financial services sector, at the intersection of financial services providers, AI vendors, regulators, and users and consumers. This global research initiative was conducted by the Cambridge Centre for Alternative Finance (CCAF) at Cambridge Judge Business School, University of Cambridge, in partnership with:
- Bank for International Settlements (BIS)
- International Monetary Fund (IMF)
- World Economic Forum (WEF)
- Inter-American Development Bank (IDB)
- CGAP
- Arab Monetary Fund (AMF)
It was produced in collaboration with Financial Innovation for Impact (Fii) and with the support of the UK Foreign, Commonwealth and Development Office (FCDO).
Highlights from the report
1. Adoption
The financial services industry is ahead of regulators in AI adoption, and fintechs are ahead of incumbents. Eighty-one per cent of surveyed financial services firms are adopting AI at some level, with 40% of industry respondents reporting advanced AI adoption (the ‘Scaling’ or ‘Transforming’ stages), more than double the 20% of regulators reporting advanced adoption.
Among surveyed industry respondents, however, only 14% currently see AI as transformational to their organisational strategy and competitive advantage, signalling a potentially significant execution and business integration gap. Within the financial services industry, fintechs lead incumbents in advanced AI adoption (47% versus 30%) and in reaching the ‘Transforming’ stage (19% versus 6%). Workforce preparedness and AI investment levels emerge as key differentiators between fintechs and incumbents. Among the 130 regulatory authorities surveyed, 48% reported that they are still in the ‘Exploring’ stage of AI adoption or not engaged with AI at all, 33% are in the ‘Piloting’ stage and 18% are in the ‘Scaling’ stage.
GenAI and agentic AI are the clearest frontiers, with lower barriers to adoption than traditional machine learning methods. Among surveyed industry respondents, AI adoption remains highest for classical machine learning (75%) and genAI (71%, despite the latter only gaining traction since 2022).
However, agentic AI is already in active adoption among 52% of industry respondents, demonstrating rapid uptake in a relatively short period of time. Twenty-three per cent of industry respondents are at more mature stages (Scaling or Transforming) of adopting agentic AI, while 29% remain in piloting. Fintechs are again ahead of traditional financial institutions (traditional FIs) in agentic AI adoption (57% versus 45%).
Looking ahead, 81% of surveyed industry respondents state that agentic AI will be meaningfully achieved by 2030, making it the clearest growth frontier in AI technology. GenAI and agentic AI are now reported as more widely used than supervised learning, unsupervised learning, reinforcement learning and time-series techniques, which may reflect the lower engineering (adoption) barriers of these newer, provider-packaged methodologies.
Fifty-three per cent of surveyed industry respondents spend under US$100,000 annually on AI yet still report high maturity in genAI and agentic AI. Industry respondents in emerging markets and developing economies (EMDEs) also report higher levels of deployment (the ‘Scaling’ and ‘Transforming’ stages) than firms in advanced economies (AEs).
Current AI deployment remains concentrated in internal operations rather than business model reinvention. Four of the top 5 financial services industry AI use cases are back-office functions. The most common use cases at pilot stage or beyond are internal: process automation (79%), data visualisation (75%), software engineering (75%) and data and knowledge management (69%).
The leading front office use case is AI-powered customer support (74%), where fintechs lead at 82% versus 67% among incumbents. Fraud detection (58%) and credit risk modelling (54%) lead among risk and compliance applications.
Overall, AI is currently being used primarily to improve execution rather than to fundamentally reconfigure business models, though 51% of more mature AI adopters are piloting or deploying new AI-powered financial products, versus 28% of less mature institutions.
Cloud and foundation model choices reveal a meaningful architectural divide across stakeholder groups. Most organisations build on external models rather than training their own from scratch: 63% of industry and 65% of regulators use internal workflows built on external foundation models, though many also customise or develop some AI systems in-house. At the time of the survey, OpenAI was the most-used foundation model provider across all groups (76% of industry and 48% of regulators), followed by Google (57% of industry) and Anthropic (35% of industry and 33% of vendors). DeepSeek is used by 15% of industry respondents, while 24% of surveyed regulators do not use any foundation models.
In terms of cloud infrastructure, Amazon Web Services (AWS) leads among industry (46%) and vendors (55%), while 46% of regulators report using no cloud infrastructure at all. According to our survey data, the top three cloud providers serve more than 80% of industry. Among regulators who do use cloud, Azure leads at 39%. Traditional FIs remain more reliant on on-premises or local cloud deployments than fintechs (39% versus 23%), a finding consistent with the analysis in the 2020 CCAF-WEF AI report.
2. Impact
Productivity effects are already being felt, but enterprise value remains harder to evidence. Perceived productivity gains from AI are highest in technology, data and product functions (79%), followed by back-office and operations roles (75%) and front-office roles (69%). However, 55% of industry respondents and 63% of surveyed regulators find it difficult to measure the value of AI deployment, rising to 76% among large financial institutions.
Profitability outcomes are positive but uneven, correlating with AI investment and workforce preparedness. Only 40% of respondents report increased profitability from AI, while 43% report no change. Higher spend appears strongly associated with greater impact: 62% of organisations spending more than US$100,000 annually on AI have reached advanced maturity, and 62% of that group report increased profitability, compared with 39% of lower-spending organisations. Fintechs again outperform, with 56% reporting higher profitability versus 34% of traditional FIs.
There is meaningful convergence across industry, vendors, and regulators on several core themes. All 3 groups identify greater operational efficiency as the top expected benefit of AI by 2030 (73% of industry, 66% of vendors, and 56% of regulators). There is also broad alignment on the need for clearer regulatory guidance, ranked as a top priority by 69% of industry, 67% of vendors and 79% of regulators.
More broadly, all groups place high importance on privacy, accountability, and the need for human oversight, suggesting that while institutional perspectives differ, substantial common ground already exists on the governance frameworks needed for responsible AI deployment.
3. Challenges
Data quality, talent and legacy architecture remain the core constraints to adoption and scaling. These bottlenecks are not new: data quality and talent access were already identified as the top 2 barriers in the 2020 CCAF-WEF AI report.
Data availability and quality remain the leading pain points hindering AI adoption, cited by 66% of AI vendors, 46% of regulators and 40% of industry respondents (within industry, 49% of traditional FIs versus 34% of fintechs).
Vendors also report particularly acute data-related challenges when working with their clients: 72% cite data quality and completeness, 46% legacy systems and siloed environments, and 41% data-sharing restrictions. For surveyed regulators, lack of AI training and capacity building (48%), talent (47%), and technology and infrastructure (45%) are core constraints on AI adoption, in addition to data issues.
4. Risks
There is broad consensus on the top risks of AI in financial services. Data privacy and protection (cited by 65% of AI vendors, 74% of industry and 80% of regulators), and model hallucinations and unreliable outputs (cited by 67% of AI vendors, 70% of surveyed industry firms and 70% of regulators) were rated as the top 2 risks by all stakeholder groups.
Operational resilience (59% of regulators and 46% of industry), model opacity and lack of explainability (56% of regulators), loss of human oversight (55% of industry and 51% of AI vendors), adversarial AI-related cyber threats (50% of industry and 57% of regulators) and algorithmic bias and fairness (43% of vendors) also feature prominently among the top risks.
Important divergences remain in risk perception, accountability, and market expectations. Regulators are markedly more concerned than vendors about cyber and operational resilience (59% versus 32%), critical third-party risk (43% versus 23%) and consumer protection and bias (41% versus 21%). Notably, industry, especially traditional FIs, is more concerned than regulators about the loss of human oversight (60% versus 42%).
Views on accountability are also fragmented: industry (35%) and vendors (39%) most often favour a case-by-case approach, whereas regulators (38%) most often place primary responsibility on the regulated financial institution (only 18% of industry and 16% of vendors do). Vendors (24%) and industry (22%) are more open than regulators (9%) to shared accountability. Regulators are also more concerned with concentration issues than industry (43% versus 28%).
The rapid deployment of agentic AI compounds cyber vulnerabilities, rendering manual oversight increasingly ineffective. Software engineering is the financial industry’s most mature AI application (42% fully deployed, 33% in development) and is also a primary cyber risk transmission vector. Broadly, 51% of respondents cite the “loss of human oversight” as the third highest AI risk overall. In software engineering specifically, the unprecedented volume and velocity of AI-generated code make traditional manual reviews increasingly ineffective.
This structural vulnerability is compounded by external threats: 48% of respondents flag adversarial AI as a top concern, reinforced by Anthropic’s recent Mythos disclosures, which point to a near future in which next-generation AI models are highly capable of exploiting software vulnerabilities, posing risks both to firm-level cyber resilience and to systemic financial stability. Further complicating this picture is a notable perception gap: AI vendors place less priority than industry and regulators on both adversarial AI threats (35% versus 50% of industry and 57% of regulators) and cyber and operational resilience (32% versus 46% of industry and 59% of regulators). These intersecting vulnerabilities can also feed into the top perceived risk across all stakeholder groups, data privacy and protection (73% of respondents), as sensitive data is typically the primary target of the cyber exploits these vulnerabilities enable.
5. Regulation and supervision
Among regulators, supervision remains the dominant use case for AI, but it is also applied in licensing and policymaking. Most internal AI use cases for regulators relate to supervisory functions and activities, such as market surveillance and misconduct detection (31% of surveyed regulators are piloting or have already deployed), anti-money laundering and counter-financing of terrorism (AML/CFT) supervision (27%) and consumer protection (25%).
However, AI is also being used in other regulatory functions including licensing and authorisation (for example, fit and proper checks, application screening and ownership structure inspection) and policy and rule-making processes (for example, horizon scanning and risk identification, and consultation analysis and drafting). The most referenced external AI frameworks are the EU AI Act (42%), financial sector-specific guidance from standard-setting bodies (41%), and the ISO/IEC AI standards (for example, ISO 42001) (27%).
Seventy-eight per cent of surveyed regulators rate explainability as critical or important to their regulatory objectives, yet only 50% of industry respondents have adopted explainable AI methods, and vendors perceive more than half of their clients as having low or no expertise in the use of explainability tools. About two-thirds of industry respondents are not monitoring for bias, arbitrary discrimination, exclusion or systemic bias in AI, and only 37% are concerned about model explainability and opacity as an operational risk.
Regulators are generally optimistic about AI’s role in achieving their objectives by 2030. Seventy-eight per cent of surveyed regulators view AI as significant or transformative for supporting their objectives by 2030, with 29% rating it as potentially transformative. Surveyed regulators also see AI as having a favourable impact on supporting financial inclusion (49% supportive versus 12% challenging), fighting financial crime (42% supportive versus 18% challenging) and data sharing via open banking/finance (37% supportive versus 9% challenging).
Regulators are less certain about AI’s overall impact on consumer protection (27% supportive versus 27% challenging), competition (23% supportive versus 15% challenging), financial stability (22% supportive versus 13% challenging) and technology and cyber resilience (19% supportive versus 27% challenging). On global co-operation, regulators are cautiously optimistic: 48% say co-operation is challenging today but likely to improve, while only 9% are pessimistic.
6. 2030 outlook
Reskilling, not displacement, is the dominant workforce expectation for now. Ten per cent of industry respondents expect a net increase in jobs and around 25% expect significant reskilling and job transformation without large net losses. Meanwhile, 24% of industry respondents expect a net reduction in roles, a higher share than over the last 3 years (when 13% of surveyed industry respondents reported job losses, and an equal share reported job gains).
Industry respondents suggest that commercial and wholesale banking is most likely to see net job increases (44% of respondents) and payments least likely (10%). Interestingly, 58% of industry respondents state that their own organisation is likely to see a net increase in jobs or reskilling (36% expect a net increase and 22% expect reskilling and transformation).
Competition, market dynamics and consolidation are expected to shift materially by 2030. Vendors expect greater disruption to market dynamics, with 55% anticipating either winner-takes-all or fragmented market outcomes, while regulators are the most cautious, with 52% stating it is too early to tell. Expectations of competitive disruption have shifted dramatically since 2020: 42% of respondents then believed the market status quo would prevail, versus only 8% today.
Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) are expected to be meaningfully achieved by 2030 by a material share of respondents. Forty-four per cent of all respondents expect AGI to be achieved by 2030, with that expectation higher among AI vendors (51%) and industry (50%) than regulators (28%). Yet in terms of priority risks today, all 3 stakeholder groups ranked the emergence and impact of AGI near the bottom of all risks, selected by just 9% of respondents. Twenty-eight per cent of all respondents go further, expecting ASI to be realised by 2030.

