OpenAI set off an arms race and our security is the casualty

Ay9c...NusT
11 Apr 2024

Since ChatGPT launched in late 2022 and made artificial intelligence (AI) mainstream, everyone has been trying to ride the AI wave — tech and non-tech companies, incumbents and start-ups — flooding the market with all sorts of AI assistants and trying to get our attention with the next “flashy” application or upgrade. 
With the promise from tech leaders that AI will do everything and be everything for us, AI assistants have become our business and marriage consultants, our advisors, therapists, companions, confidants — listening as we share our business or personal information and other private secrets and thoughts.
The providers of these AI-powered services are aware of the sensitivity of these discussions and assure us that they are taking active measures to protect our information from being exposed. Are we really being protected?


AI assistants — friend or foe?

Research published in March by researchers at Ben-Gurion University of the Negev showed that our secrets can be exposed. The researchers devised an attack that deciphers AI assistant responses with surprising accuracy, despite their encryption. The technique exploits a vulnerability in the system design of all major platforms, including Microsoft’s Copilot and OpenAI’s ChatGPT-4, with the notable exception of Google’s Gemini.

Furthermore, the researchers showed that once the attacker built a tool to decipher a conversation — for example, with ChatGPT — this tool could work on other services as well, and thus could be shared (like other hacking tools) and used across the board with no additional effort.
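The core idea behind this class of attack can be illustrated with a toy sketch. The specifics below (a fixed 29-byte per-record overhead, one token per encrypted record) are illustrative assumptions rather than the paper's actual parameters, but they capture the principle: when a response is streamed token by token and the cipher preserves length, the size of each encrypted record reveals the size of the token inside it.

```python
# Illustrative sketch of a token-length side channel.
# Assumption (not from the paper's code): a constant framing/encryption
# overhead per record, and one token per encrypted record.

RECORD_OVERHEAD = 29  # assumed constant per-record overhead, in bytes

def token_lengths(record_sizes):
    """Recover plaintext token lengths from encrypted record sizes.

    A length-preserving cipher means each record's size equals the
    token's byte length plus a constant overhead.
    """
    return [size - RECORD_OVERHEAD for size in record_sizes]

# An eavesdropper never sees the words, only the record sizes:
observed = [29 + len(t) for t in ["The", " patient", " has", " cancer"]]
print(token_lengths(observed))  # [3, 8, 4, 7]
```

A sequence of token lengths is far from random noise: fed to a model trained on typical assistant phrasing, it can be enough to reconstruct much of the response, which is what made the attack surprisingly accurate.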
This is not the first research pointing to security flaws in the design and development of AI assistants. Other studies have been floating around for quite a while. In late 2023, researchers from several U.S. universities and Google DeepMind described how they could get ChatGPT to spew out memorized portions of its training data merely by prompting it to repeat certain words.
The researchers were able to extract from ChatGPT verbatim paragraphs from books and poems, URLs, unique user identifiers, Bitcoin addresses, programming code and more. Adversaries could intentionally use crafted prompts or inputs to trick the bots into regurgitating their training data, which may include sensitive personal and professional information.
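The verification step of such an extraction attack, checking whether a model's output reproduces known text word for word, can be sketched as follows. This is a simplified illustration rather than the researchers' actual tooling, which matched outputs against a vastly larger corpus far more efficiently:

```python
def longest_verbatim_run(output: str, corpus: str, min_words: int = 5) -> str:
    """Return the longest word-for-word span of `output` found in `corpus`.

    A sufficiently long verbatim match is strong evidence the model is
    regurgitating memorized training data rather than paraphrasing.
    """
    words = output.split()
    best = ""
    for i in range(len(words)):
        for j in range(i + min_words, len(words) + 1):
            span = " ".join(words[i:j])
            if span in corpus and len(span) > len(best):
                best = span
    return best

corpus = "it was the best of times it was the worst of times"
output = "Sure: it was the best of times it was the worst of times indeed"
print(longest_verbatim_run(output, corpus))
```

The `min_words` threshold filters out short, coincidental overlaps so that only substantial verbatim runs count as evidence of memorization.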
The security problems are even more acute with open-source models. A recent study showed how an attacker could compromise the Hugging Face conversion service and hijack any model submitted through it. The implications of such an attack are significant: the adversary could implant their own model in place of the original, push malicious models to repositories, or access private repositories and datasets.
To put things in perspective, the researchers found that Microsoft and Google, which together host 905 models on Hugging Face that received changes through the conversion service, might have been at risk of such an attack.


Things could get worse

AI’s new capabilities may be alluring, but the more power one gives to AI assistants, the more vulnerable one is to an attack.
Bill Gates, writing in a blog post last year, described how an overarching AI assistant (what he termed an “agent”) will have access to all our devices — personal and professional — to integrate and analyze the combined information and act as our “personal assistant.”


As Gates wrote in the blog:

An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.

This is not science fiction, and it could happen sooner than we think. Project 01, an open-source ecosystem for AI devices, recently launched an AI assistant called 01 Light. "The 01 Light is a portable voice interface that controls your home computer," the company wrote on X. "It can see your screen, use your apps, and learn new skills."
Project 01 described on X how its 01 Light assistant works. Source: X
It might be quite exciting to have such a personal AI assistant. However, unless the security issues are promptly addressed and developers meticulously ensure that the system and code are free of exploitable vulnerabilities, a successful attack on such an agent could mean your entire life is hijacked — along with the information of any person or organization connected to you.


Can we protect ourselves?

In late March, the U.S. House of Representatives set a strict ban on congressional staffers' use of Microsoft's Copilot.
"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," House Chief Administrative Officer Catherine Szpindor said in a statement announcing the move.
In early April, the Cyber Safety Review Board (CSRB) — which falls under the Department of Homeland Security — published a report blaming Microsoft for a "cascade of security failures" that enabled Chinese threat actors to access U.S. government officials’ emails in the summer of 2023. According to the board, the incident was preventable and should never have happened.

As the report stated: "Microsoft has an inadequate security culture and requires an overhaul." This would most likely include security issues with Copilot.
This is not the first ban on an AI assistant. Technology companies such as Apple, Amazon, Samsung and Spotify along with financial institutions including JPMorgan Chase, Citi, Goldman Sachs and others have banned the use of AI bots for their employees.
Major technology companies, including OpenAI and Microsoft, pledged last year to adhere to responsible AI practices. Since then, no substantial actions have been taken.
Pledging is not enough. Regulators and policy makers should demand actions. In the meantime, we should refrain from sharing any sensitive personal or business information.
And maybe if we collectively stop using these bots until substantial actions have been taken to protect us, we might have a chance to be "heard" and force companies and developers to implement the needed security measures.
