The First Malicious MCP Server is a Warning Shot for AI Cybersecurity
The first malicious Model Context Protocol (MCP) server has been discovered and we should all be worried how this is foreshadowing AI cybersecurity risks!
Cybersecurity researchers at Koi Security detected malicious code within an MCP server that connects AI systems with Postmark email services. The code covertly copies every email and exfiltrates it back to the developer. It was created and distributed by a developer not associated with Postmark.
Although not overly serious or sophisticated, as it was only downloaded 1643 times, this is the first of what will become a pressing cybersecurity problem: MCP exploitations!
Artificial Intelligence systems need to access and leverage the capabilities of other digital systems. APIs are traditionally used for users or software to run queries or job, but those aren’t at the interactive level that AI systems require to be super helpful. This is where the Model Context Protocol (MCP) has been created to shine! MCPs enable AI systems to integrate in ways that allow for rich extensibility and cooperation. They are the bridge to make smart AI agents capable of actually executing plans versus just describing what needs to be done.
The problem is that MCP frameworks, like many tools of modern digital functionality, was not designed with robust cybersecurity principles in mind. APIs experienced the same situation years ago and were wildly popular. Cybersecurity professionals were ignored when they recommended caution as it was generally believed by developers to be inherently secure. They were wrong.
Something that is designed purely for function can operate perfectly, but still be the source of cybersecurity problems. The world figured out, with the help of hackers, that poorly designed APIs could be misused to expose data or corrupt systems, all while operating perfectly within their design parameters.
The same story will repeat itself with MCPs. The race to develop and deploy powerful Agentic AI systems will sadly overshadow any concerns for security, privacy, and safety. By the time the weaknesses are detected, usually by malicious hackers, it will be too late. This is the typical cycle of disruptive technology innovation.
Keep an eye on AI development and especially the use of MCPs. They are important, but inherently lack cybersecurity insights to protect from misuse or being hacked. Cybersecurity professionals must convince architects and developers to add in security controls or work with MCP vendors who will do it for them. Otherwise, it will simply be a matter of time before the systems and data that are connected, will be victimized.