Written By
Next content

Read more

News

Reinforcing the Eurozone safety net requires a stronger ESM mandate

A conversation with ESM Chief Economist and Management Board Member, Mr Rolf Strauch This conversation was conducted by our Research Associate, Leopoldo Pérez Obregón, and recorded on 27 June 2024 at the EUI premises...

As artificial intelligence (AI) and automation continue to reshape industries, particularly finance and banking due to developments arising in generative AI (GenAI), the focus is increasingly shifting to ensuring their secure deployment and addressing the threats posed by sophisticated cyberattacks. This growing attention by regulators and authorities to the secure use of AI is demonstrated in the recently adopted EU AI Act. However, further steps are required to develop an efficient AI security framework, including establishing security standards and creating regulatory sandboxes to test them. As a starting point this approach could undoubtedly benefit from an analysis of the current threat landscape and risk scenarios.

The need for these measures is highlighted by the growing risks of cyberattacks on AI systems, especially in sensitive sectors like finance and central banking. Cybersecurity reports consistently emphasise the dual role of AI, both as a tool to enhance security and as a target for sophisticated threats. Reviewing these reports provides clear understanding of the current threat landscape, underscoring the importance of robust harmonised security practices.

To understand the scope and dimension of the AI security phenomenon, it is fundamental to look at the data revealed by documents such as the IBM 2024 Cost of a Data Breach Report, which provides a comprehensive analysis of the global financial and operational impacts of data breaches. This report highlights a key finding: organisations that leverage AI and automation in their security protocols tend to experience lower average costs associated with breaches. In other words, by automating threat detection and response, organisations can mitigate the scope and impact of attacks more effectively, highlighting the potential for AI to enhance cybersecurity defences.

However, while AI serves as a valuable defence tool, it is equally important to acknowledge its potential to be exploited by attackers. The Verizon 2024 Data Breach Investigation Report notes an increase in the use of AI tools in data breach attacks, although it emphasises that this trend is not yet as widespread or alarming as some might fear. Nevertheless, the increasing integration of AI in attack strategies suggests a growing need for proactive defence. As AI-driven attacks become more sophisticated, security measures must evolve accordingly.

Further supporting this concern, the ENISA 2024 Threat Landscape Report illustrates in detail how AI is being leveraged in cyberattacks. A notable example is the rise of AI chatbots, such as OpenAI’s ChatGPT and Google’s Bard, which have become both targets and tools for cybercriminals. These systems are vulnerable to tactics such as prompt injection and data poisoning, in which malicious actors manipulate training datasets to corrupt the outputs of AI. Other relevant cases include the use by attackers of AI-generated deep fakes to enhance social engineering campaigns and the production of artificial inauthentic content to conduct influence and information manipulation operations.

This dual role – AI as a tool for both defenders and attackers – highlights the growing complexity of the involvement of AI in cybersecurity and the need for advanced security measures to combat threats.

As insights in these reports show, a clear understanding of the current trends and data in the security dimension could be extremely beneficial to develop regulatory instruments containing harmonised tools (in the form of frameworks, standards or sandboxes) that would ensure AI technologies remain secure and resilient in practice.

Back to top