15
Poorly designed and improperly secured Artificial Intelligence integrations can be misused or exploited by adversaries, to the detriment of companies and users.
Researchers at the University of Calabria showed that LLMs can be tricked into installing and executing malware on victim machines using direct prompt injection (42.1%), RAG backdoors (52.9%), and inter-agent trust exploits (82.4%). 94% state-of-the-art LLMs were vulnerable.
Link: https://arxiv.org/html/2507.06850v3