94 Percent of LLMs Shown to Be Vulnerable to Attack

6Lm7...Gg5t
5 Dec 2025
58


The unfortunate truth is that poorly designed and improperly secured Artificial Intelligence integrations can be misused or exploited by adversaries, to the detriment of companies and users. Some of the compromises will bypass the traditional cybersecurity and privacy controls, leaving victims very exposed.

Researchers at the University of Calabria demonstrated that LLMs can be tricked into installing and executing malware on victim machines using direct prompt injection (42.1%), RAG backdoor attacks (52.9%), and inter-agent trust exploitation (82.4%). Overall, 16 of 17 (94%) state-of-the-art LLMs were shown to be vulnerable.

We cannot afford to be distracted by dazzling AI functionality when we are inadvertently putting our security, privacy, and safety at risk. Let’s embrace AI, but in trustworthy ways.

The full University of Calabria Research Paper can be found here: https://arxiv.org/html/2507.06850v3
I highly recommend reading what they tested and which system passed. The results are concerning!

BULB: The Future of Social Media in Web3

Learn more

Enjoy this blog? Subscribe to MRosenquist

0 Comments