
Artificial intelligence (AI) systems are now deeply embedded in the core security functions of finance, to the point that they no longer merely ‘assist’ human operators. Instead, they structurally determine which transactions are flagged, which accounts are frozen or subjected to enhanced due diligence, and which customers are classified as high risk (Buckley et al., 2021). Thus, as AI moves closer to the locus of decision-making itself, concerns about opacity, explainability and bias, and about the ability of affected individuals to contest outcomes, become more acute, turning technical design choices into questions of institutional power and legitimacy (Fritz-Morgenthal et al., 2022).

The prevailing discourse on AI in financial security is rich in technical detail but comparatively thin in normative reflection. Academic studies and industry white papers largely emphasise improvements in detection accuracy, reductions in false positives and overall operational efficiency, often supported by rigorous benchmarking and performance metrics (Mazumder, 2025). These measurable gains from AI deployment are indeed significant, especially for institutions facing growing transaction volumes and regulatory expectations. However, they are rarely matched by an equally systematic analysis of accountability, procedural fairness or the implications for fundamental rights such as privacy, non‑discrimination and due process (Ridzuan et al., 2024). Regulatory frameworks typically foreground ‘human oversight’ and ‘governance’, but in practice these criteria risk becoming checklist requirements that signal controllability rather than mechanisms for allocating responsibility and enabling contestation. As a result, the rights implications of AI‑driven financial security remain under‑theorised and under‑specified, especially when compared to the sophistication of performance‑oriented discussions.
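The asymmetry described above can be made concrete. The sketch below, using purely hypothetical confusion-matrix counts for a transaction-flagging model, computes the kind of performance metrics the literature foregrounds; the closing comment notes what those numbers leave out.

```python
# Hypothetical confusion-matrix counts for a transaction-flagging model
# (illustrative numbers only, not drawn from any real system).
tp, fp, fn, tn = 420, 1_580, 80, 97_920

precision = tp / (tp + fp)            # share of flags that were true fraud
recall = tp / (tp + fn)               # share of fraud that was flagged
false_positive_rate = fp / (fp + tn)  # legitimate customers wrongly flagged

print(f"precision={precision:.3f} recall={recall:.3f} "
      f"fpr={false_positive_rate:.4f}")

# What these figures do NOT capture: who is accountable for each flag,
# whether an affected customer can contest it, and how the errors are
# distributed across customer groups.
```

Even a model that looks strong on these metrics says nothing about responsibility or contestation, which is precisely the gap the text identifies.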

This striking gap in both research and practice calls for a reimagining of governance that goes beyond minimal human intervention toward institutionalised accountability structures embedding normative commitments in organisational practice. Current governance practice is drifting toward what seems to be a performative approach: visible oversight structures that preserve the appearance of control while leaving accountability for AI-driven financial security decisions largely indeterminate (Buckley et al., 2021). There therefore needs to be a stronger focus on emerging governance proposals centred on the following principles:

  • meaningful human oversight with clearly allocated responsibility for AI‑driven outcomes;
  • operational transparency and explainability calibrated for regulators and, where possible, affected users;
  • safeguards against discrimination and bias, especially in credit scoring and DeFi‑related applications, in which opaque models can amplify existing inequities;
  • robust auditability and traceability with algorithmic logs, model‑risk documentation and continuous monitoring, and
  • structured experimentation, such as regulatory sandboxes and controlled pilots, to test new models and oversight arrangements with supervisory scrutiny.
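The auditability and responsibility principles above can be sketched in code. The record structure below is purely illustrative (the field names and the `FlagDecisionRecord` class are assumptions, not a reference to any real system): each AI-assisted flagging decision is written as an append-only log line that ties the outcome to a documented model version and a named accountable human, with an explicit override field to support contestation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class FlagDecisionRecord:
    """One auditable record per AI-assisted flagging decision (illustrative)."""
    transaction_id: str
    model_version: str        # links the outcome to model-risk documentation
    risk_score: float
    threshold: float
    flagged: bool
    responsible_officer: str  # named human accountable for the outcome
    override: bool = False    # True if the officer reversed the model
    rationale: str = ""       # required on override; supports contestation
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_log_line(self) -> str:
        # Append-only JSON lines are easy to retain, replay and audit.
        return json.dumps(asdict(self), sort_keys=True)

record = FlagDecisionRecord(
    transaction_id="tx-0001",
    model_version="aml-screen-2.3.1",   # hypothetical model identifier
    risk_score=0.91,
    threshold=0.85,
    flagged=True,
    responsible_officer="compliance.officer@example.org",
)
print(record.to_log_line())
```

The design choice worth noting is that accountability is a mandatory field, not an afterthought: a record cannot be written without naming who is responsible, which is what distinguishes traceability from mere logging.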

This outlook shows that AI financial security cannot be adequately characterised as a purely technical or operational matter. As algorithmic systems increasingly shape economically consequential outcomes, questions of governance must integrate efficiency objectives with robust protections of rights and accountability (Ridzuan et al., 2024). Systematic examination of concrete AI applications in financial settings can show how proposed governance principles operate under pressure, where they succeed and where they generate new forms of opacity or exclusion. The central issue, then, is not simply whether AI improves detection rates or reduces operational costs, but whether its deployment ultimately strengthens or weakens the institutional legitimacy of financial governance.

 

References

Buckley, R. P., Zetzsche, D. A., Arner, D. W. & Tang, B. W. (2021). Regulating Artificial Intelligence in Finance: Putting the Human in the Loop. The Sydney Law Review, 43(1), 43-81.

Fritz-Morgenthal, S., Hein, B. & Papenbrock, J. (2022). Financial Risk Management and Explainable, Trustworthy, Responsible AI. Frontiers in Artificial Intelligence, 5(1).

Mazumder, P. T. (2025). AI-Driven Anti-Money Laundering Systems for Cybersecurity Resilience in U.S. Financial Infrastructure: A Framework for Real-Time Threat Detection, Regulatory Compliance and National Security. International Journal of Humanities and Information Technology, 7(3), 90-97.

Ridzuan, N. N., Masri, M., Anshari, M., Fitriyani, N. L. & Syafrudin, M. (2024). AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility. Information, 15(8), 432.
