Note: This is a project I completed for Purdue’s AI Management and Policy Master’s Program. The data and business scenarios are part of a simulated case study and do not represent the actual operations of any real-world entity.
Executive Summary
This report discusses the historical evolution, current trends, and future risks of AI and automation in the banking sector, specifically focusing on fraud risk management. Historically, financial institutions relied on static, rule-based systems, but the increasing volume and complexity of digital transactions have necessitated a shift toward advanced machine learning and deep learning architectures. Currently, the industry faces an “arms race” where fraudsters utilize generative AI and AI agents to facilitate sophisticated, large-scale operations. To counter these evolving threats, financial institutions are increasingly integrating these same underlying technologies into their fraud programs. Future growth will be driven by “Agentic AI” frameworks that automate time-consuming manual workflows, such as drafting Suspicious Activity Reports (SARs), thereby reallocating human expertise toward high-level strategic remediation. However, widespread adoption faces significant challenges, including high implementation costs, the displacement of entry-level workforce roles, and ethical liabilities regarding algorithmic bias. Ultimately, success depends on establishing robust model governance and continuous employee reskilling to maintain resilient, adaptive defenses against automated financial crimes.
Historical Overview of Trends
For decades, banks and financial institutions have sought technologies to minimize financial losses, reputational damage, and administrative costs associated with fraudulent activities. Automated, computer-based solutions are necessary due to the high volume of digital transactions and the need for real-time response capabilities (Ali et al., 2022; Compagnino et al., 2025).
- 1980s: Banks implemented rule-based systems using static “if-then” logic. For example, a bank might automatically flag any transfer over a fixed dollar amount. However, these systems struggled with complex, non-linear fraud patterns.
- Late 2000s: Institutions began integrating machine learning models, such as Random Forests and Support Vector Machines, trained on extensive historical databases to identify sophisticated anomalies (Dastidar et al., 2024).
- Late 2010s: The adoption of “deep” machine learning architectures using artificial neural networks delivered further significant gains in detection accuracy (Mienye & Jere, 2024).
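The contrast between a static 1980s-style rule and a learned model can be sketched in a few lines of Python. Everything below is illustrative: the $10,000 threshold, the feature names, and the synthetic labels are hypothetical examples, not figures from this report.

```python
# Illustrative contrast: static "if-then" rule vs. a learned model.
# Threshold, features, and toy labels are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_based_flag(amount_usd: float, threshold: float = 10_000.0) -> bool:
    """1980s-style static rule: flag any transfer over a fixed amount."""
    return amount_usd > threshold

# A learned model instead infers non-linear patterns from labeled history.
rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.exponential(500, n),   # transaction amount (USD)
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 50, n),    # transactions in the past week
])
# Toy labels: "fraud" = large amounts at unusual hours.
y = ((X[:, 0] > 1_500) & ((X[:, 1] < 5) | (X[:, 1] > 22))).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(rule_based_flag(12_000.0))           # True: the static rule fires
print(model.predict([[2_000, 3, 40]])[0])  # the learned model's prediction
```

The static rule fires on any amount over the threshold regardless of context, while the trained classifier can weigh amount against time-of-day and account activity, which is the non-linear behavior the 1980s systems could not capture.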
Current and Emerging Developments
The emergence of generative AI, large language models (LLMs), and autonomous agents has introduced both new risks and opportunities. As of 2026, a staggering 75% of fraud and compliance decision-makers have reported a direct increase in AI-driven fraud attacks over the past year (Veriff, 2026).
Fraudsters apply generative models to create realistic synthetic identities and deepfakes to bypass biometric authentication (Moharrak & Mogaji, 2025). To counter these threats, financial institutions are employing:
- Graph Neural Networks (GNNs): To analyze relationships across identities and devices to spot synthetic fraud (Rahmati, 2025).
- Behavioral Biometrics: Analyzing user interaction patterns like typing cadence and touchscreen pressure (Buriro et al., 2017).
- AI Agents: Orchestrating rapid response workflows to collect data and recommend intervention actions (Joseph, 2024; Rawal et al., 2025).
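As an illustration of the behavioral-biometrics idea above, the following minimal sketch compares a session’s typing cadence against a user’s enrolled profile. The z-score check, the tolerance value, and all the interval data are illustrative assumptions, not a production biometric algorithm.

```python
# Minimal typing-cadence sketch; the z-score test and tolerance are
# illustrative assumptions, not a real biometric matcher.
from statistics import mean, stdev

def key_intervals(timestamps_ms: list[float]) -> list[float]:
    """Inter-keystroke intervals from a list of key-press timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def cadence_matches(profile_ms: list[float], session_ms: list[float],
                    z_tolerance: float = 2.0) -> bool:
    """Accept the session unless its mean interval deviates from the
    enrolled profile by more than z_tolerance standard deviations."""
    mu, sigma = mean(profile_ms), stdev(profile_ms)
    z = abs(mean(session_ms) - mu) / sigma
    return z <= z_tolerance

enrolled = [110, 120, 115, 130, 125, 118, 122]  # user's typical intervals (ms)
genuine  = [112, 119, 127, 121]                 # similar cadence
attacker = [300, 280, 310, 295]                 # much slower cadence

print(cadence_matches(enrolled, genuine))   # True: session accepted
print(cadence_matches(enrolled, attacker))  # False: flagged for review
```

Real systems model many more signals (touchscreen pressure, swipe dynamics, dwell time) and use far richer statistics, but the core idea is the same: a session that deviates from the enrolled behavioral profile is escalated rather than silently trusted.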
Future Opportunities for Growth in AI Adoption
Future growth will be driven by agentic AI architectures. While AI has improved detection, processes such as preparing regulatory reports remain largely manual. AI agents capable of reasoning and tool use are beginning to automate these high-volume tasks (Okpala et al., 2025).
Key Impact: Automating the drafting of legally required Suspicious Activity Reports (SARs) significantly reduces administrative burdens, allowing human experts to focus on high-level strategic activities and in-depth investigations (Lagasio et al., 2025).
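One way such a drafting workflow might be wired up is sketched below. The case fields, the template-based `draft_sar_narrative` stub (standing in for an LLM agent call), and the human-review gate are all illustrative assumptions rather than an actual bank workflow.

```python
# Hedged sketch of an agentic SAR-drafting step with a human-in-the-loop
# gate. Fields and the template stub (a stand-in for an LLM call) are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FraudCase:
    case_id: str
    subject: str
    pattern: str        # e.g. "structuring", "synthetic identity"
    total_usd: float

def draft_sar_narrative(case: FraudCase) -> str:
    """Template stub where a production agent would invoke an LLM."""
    return (f"SAR draft for case {case.case_id}: activity by {case.subject} "
            f"consistent with {case.pattern}, totaling ${case.total_usd:,.2f}. "
            f"Pending analyst review.")

def process_case(case: FraudCase) -> dict:
    """The agent drafts; a human analyst must approve before filing."""
    return {"case_id": case.case_id,
            "draft": draft_sar_narrative(case),
            "status": "awaiting_human_review"}  # human-in-the-loop gate

result = process_case(FraudCase("C-001", "Account 4471", "structuring", 48_500.0))
print(result["status"])  # awaiting_human_review
```

The key design point is that the agent’s output is a draft with a mandatory review status, not a filed report: automation absorbs the administrative drafting burden while the legal responsibility for the SAR stays with a human expert.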
By embracing agentic AI, banks can maintain robust defenses that effectively mitigate risks posed by automated financial crimes (Ahmed et al., 2025).
Costs and Risks Associated with AI Adoption
While AI systems promise efficiency, they introduce distinct challenges:
- Investment Costs: Substantial upfront and ongoing costs can exceed savings if projects are not appropriately scoped (Ahmed et al., 2025).
- Workforce Displacement: Automation will likely reduce demand for entry-level analyst positions, complicating the pipeline for training strategic leaders (Huang, 2025).
- Ethical and Legal Liabilities: Complex models may harbor algorithmic biases.
Demographic Disparities in AI Biometrics (2025/2026 Data): Recent testing by the National Physical Laboratory, along with industry reports, highlights significant accuracy gaps in the biometric systems banks rely on for identity verification.
- False Positive Rates (FPR): White subjects typically see an FPR of approximately 0.04%, whereas Asian subjects face 4.0% and Black subjects face 5.5%.
- Intersectionality: The disparity is most severe for Black women, with error rates reaching as high as 9.9% (The Guardian, 2025).
- Legacy Bias: Older studies like “Gender Shades” (2018) noted even wider gaps, with error rates of 0.8% for light-skinned men versus 34.7% for darker-skinned women, highlighting the persistent nature of this risk (Buolamwini & Gebru, 2018).
Conclusion
To meet the challenge of increasingly sophisticated AI-driven fraud, banks must embrace artificial neural networks, generative models, and agentic workflows while carefully managing risks:
- Iterative Deployment: Initiate adoption through narrow pilots with “human-in-the-loop” checkpoints.
- Employee Reskilling: Pair automation with education programs focused on model oversight and AI tooling proficiency.
- Robust Governance: Establish comprehensive model governance, including bias auditing and explainability tooling, to satisfy regulatory requirements (Lagasio et al., 2025).
References
- Al Ahmed, Y. et al. (2025). From Transaction to Transformation: AI and Machine Learning in FinTech. 2025 5th Intelligent Cybersecurity Conference (ICSC), 187–196.
- Ali, A. et al. (2022). Financial Fraud Detection Based on Machine Learning: A Systematic Literature Review. Applied Sciences, 12(19).
- Barlas, Y. et al. (2020). DAKOTA: Continuous Authentication with Behavioral Biometrics in a Mobile Banking Application. 2020 5th International Conference on Computer Science and Engineering (UBMK), 1–6.
- Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
- Buriro, A. et al. (2017). Evaluation of Motion-Based Touch-Typing Biometrics for Online Banking. 2017 International Conference of the Biometrics Special Interest Group (BIOSIG), 1–5.
- Compagnino, A. A. et al. (2025). An introduction to machine learning methods for fraud detection. Applied Sciences, 15(21).
- Dastidar, K. G. et al. (2024). Machine Learning Methods for Credit Card Fraud Detection: A Survey. IEEE Access, 12, 158939–158965.
- Ghosh, S., & Reilly, D. L. (1994). Credit card fraud detection with a neural-network. HICSS, 621–630.
- Golec, M., & AlabdulJalil, M. (2025). Interpretable LLMs for credit risk: A systematic review and taxonomy. arXiv:2506.04290.
- Huang, K. (Ed.). (2025). Agentic AI: Theories and practices. Springer Nature Switzerland.
- Joseph, S. (2024). Generative AI in Financial Fraud Detection. SSRN Scholarly Paper No. 5036833.
- Kadyshevitch, D. (2024). Generative AI has democratised fraud and cybercrime. Computer Fraud & Security.
- Lagasio, V. et al. (2025). Integrating generative AI and large language models in financial sector risk management. Risk Management Magazine, 20(1), 30–48.
- Mienye, I. D., & Jere, N. (2024). Deep learning for credit card fraud detection: A review. IEEE Access.
- Moharrak, M., & Mogaji, E. (2025). Generative AI in banking: Empirical insights. International Journal of Bank Marketing, 43(4), 871–896.
- Okpala, I. et al. (2025). Agentic AI Systems Applied to tasks in Financial Services. arXiv:2502.05439.
- Rahmati, M. (2025). Real-time financial fraud detection using adaptive graph neural networks. International Journal of Management and Data Analytics.
- Rawal, R. et al. (2025). The Significance of Generative AI in Enhancing Fraud Detection. Generative Artificial Intelligence in Finance, 159–173.
- The Guardian. (2025). Home Office admits facial recognition tech issue with black and Asian subjects. Dec 5, 2025.
- Veriff. (2026). Fraud Industry Pulse Report 2026.