Back to Blog
5/31/2023
-
XX
Minute Read

AI & Cybersecurity: A Comprehensive Overview of ChatGPT Security Concerns

Michael Osakwe
Sr. Content Marketing Manager

Like all technologies, ChatGPT is dual use in nature meaning it can be used for good and bad. Learn how to use ChatGPT safely and securely.

In this article

Since ChatGPT launched in November 2022, the internet has been abuzz discussing its implications for business, the economy, and the future of humanity. The launch of GPT4 in March, and the release of ChatGPT plugins in May has only intensified this discussion, with the world still trying to make sense of security risks associated with the platform and other generative AI tools in order to navigate the technology’s upsides without falling victim to its hazards.

This post will serve as an overview of artificial intelligence technologies like ChatGPT as well their security implications. We’ll cover security issues that are relevant to both individuals and cybersecurity professionals, in order to help cut through the noise and identify how both users and sensitive data can stay safe.

What are generative AI chatbots like ChatGPT?

Generative AI refers to machine learning algorithms that allow for computers to understand text, speech, and images. Generative AI can use this broad understanding to respond to requests to modify and create new types of content. There are different types of generative AI tools, like ones that create images or music from text; however, the most popular are AI chatbots like ChatGPT which use text inputs to create responses in the form of words, graphics, and code. ChatGPT refers to a specific AI chatbot produced by OpenAI that is built on the organization’s GPT large language model (Generative Pre-Trained Transformer). As of 2023, ChatGPT leverages OpenAI’s GPT 3.5 and GPT 4 machine learning large language models (often abbreviated as LLMs).

GPT is so named because the underlying model behind the technology is called a transformer, which is an algorithm designed to take a variety of inputs and “transform” them into smaller representations (“tokens”) that the model can make sense of and manipulate in order to respond to requests. Like OpenAI’s GPT model, many of today’s AI systems are built on transformers. You might even see specific open source projects, like GPT4all or GPT-J that share a name with GPT. It’s worth noting that these particular models aren’t affiliated with OpenAI, but are a type of “generative pre-trained transformer” because use of transformers is so widespread.

What makes specific chatbots, like ChatGPT so powerful is the training that goes into these systems. Models are fed enormous amounts of data—pretty much most of the data on the internet. They are also run on large clusters of computer hardware to process this vast amount of information. Finally, techniques like reinforcement learning with human feedback help train the model by letting it learn from human evaluators.

AI chatbots can be roughly segmented into three categories:

  • Chatbots made with state of the art foundation models owned by industry leaders like OpenAI, Anthropic, and Google. These chatbots are built on models that tend to be set apart by their size, measured in “parameters.” This defines the weights that mold the model’s behavior. GTP 3.5, for example, has 175 billion parameters.
  • Chatbots that leverage some proprietary LLM in their products—often one of the foundation models mentioned above. Examples of these include Jasper AI and Microsoft’s Bing which use OpenAI’s GPT models.
  • Chatbots built on open source models like GPT-J, GPT4All, and Facebook’s LLaMA. While there are individuals and organizations building chatbots with these models, a typical use case involves a technical end user self-hosting one of these models on their local network or machine and training it for the specific purposes they have in mind. The security implications of this, of course, tend to differ from those of ChatGPT and are mostly outside the scope of this post.

{{ promo }}

Common ChatGPT security risks

Security concerns around chatbots like ChatGPT can be divided into two categories:

  • Issues and risks that impact end users of the ChatGPT platform. This can include not just individuals who actively use ChatGPT, but effectively anyone whose sensitive information is shared with ChatGPT, including clients of end users or companies whose employees are using the application as well.
  • Issues and risks that stem from threat actors’ use of ChatGPT and other generative AI tools in order to conduct cyberattacks, scams, or misinformation campaigns.

What are security recommendations for ChatGPT users?

If you’re a user of AI chatbots, like ChatGPT, or you’re an individual or organization who is a stakeholder of a ChatGPT user, you you should be aware of the following security risks:

ChatGPT can expose source code, sensitive data, & confidential information you share with it

Because the underlying models driving chatbots like ChatGPT grow by incorporating more information into their dataset, there is a real risk of sensitive data provided to ChatGPT becoming queryable or unintentionally exposed to other users. Companies like Amazon and Samsung have experienced this risk first hand. Our own research found that over the first five months of ChatGPT’s adoption, corporate employees were triggering literally thousands of “data egress” events weekly, sharing everything from confidential data to source code and client data.

Best practices for mitigating this risk: If you’re an individual user of ChatGPT, you’ll want to simply be mindful of what you share with the application. For companies who manage users of ChatGPT, while blocking the technology outright is impossible, you’ll want to put into place controls, such as Cyberhaven for AI, which monitors for data being sent to domains for services like ChatGPT, regardless of what endpoint they use, and can enforce policies that block the sharing of sensitive data.

OpenAI can be targeted by hackers or ChatGPT can have vulnerabilities

In March, ChatGPT experienced an outage. It was later confirmed that this outage took place in order to address a bug in a dependency called Redis. This bug created a vulnerability that allowed some users to potentially see other users’ chat titles, chat history, and payment information. The incident was relatively minor, all things considered, but it highlights that ChatGPT could easily become part of its users’ attack surface were a deliberate data breach to occur.

In addition to traditional vulnerabilities and hacks, AI systems are uniquely susceptible to new types of attacks. Techniques like prompt injection and model poisoning could alter an AI chatbot’s security and behavior without a user knowing.

Best practices for mitigating this risk: In addition to taking into account the practices listed above, users should adopt important cybersecurity best practices like using a unique and complex password for their OpenAI user account or using a single sign on account to access the service. Additionally, users might want to periodically delete their chat history or copy and store it offline so that it isn’t visible if their accounts get hijacked.

ChatGPT plugins can introduce new sources of data exposure risk

As of May, ChatGPT Plus users have access to a plugin store—a small but growing list of third-party integrations that’s reminiscent of the earliest days of Apple’s App Store. While plugins are reviewed by OpenAI to ensure compliance with the organization’s content, brand, and security policies, users of plugins are still sending their data to third-parties independent of OpenAI, who have their own data processing policies. Additionally, plugins can in theory introduce new vulnerabilities or attack vectors for cybercriminals hoping to access ChatGPT end user data. As of today, one of the key concerns here is prompt injection via plugins and search results.

Best practices for mitigating this risk: If you have access to plugins and decide to use them, be deliberate about which ones you install and make sure you read the data policies for these third-party services. To mitigate prompt injection and data exposure risk, you’ll want to make sure you only have the plugins ChatGPT needs to complete a task running at a given time so that you aren’t excessively sharing data with plugins not in use. This can also prevent data being requested from services that may have your sensitive data if a prompt injection attack were to occur during your session.

What are the security risks posed by malicious actors using ChatGPT?

Much has already been said about broader societal cybersecurity risks posted by ChatGPT and other generative AI chatbots, but we’ll cover the most common concerns below.

ChatGPT can generate new types of malware and malicious code

Earlier this year, security researchers at CyberArk found it was possible to coax ChatGPT into creating malware, or malicious code that can alter its signature to hide from antivirus programs. Bypassing the content filter that prevented such a request was apparently straightforward, and requests made via ChatGPT’s API did not trigger the filter. More recently, security researchers at Check Point have purportedly discovered cybercriminals on the dark web claiming to have used ChatGPT to create malware.

ChatGPT can aid with phishing attacks and misinformation

As a large language model, one of ChatGPT’s core functionalities is writing impressive prose, be it fiction or fact; threat actors can and have easily harnessed this for malicious purposes. The social engineering potential of ChatGPT, however, is not just limited to phishing emails, though, as an entire new genre of ChatGTP-created phishing websites have sprung up overnight.

Is ChatGPT safe?

Like nearly all technologies, ChatGPT and generative AI are dual use in nature, meaning that they can be used for good and for bad. It’s extremely important that end users beware that they are responsible for ensuring that they do not overshare personal or sensitive information with OpenAI through ChatGPT, and that like the other internet services they use, they adopt good security practices, like using single sign on or using a strong, unique password to manage their ChatGPT account.

For security practitioners, it’s important to keep abreast of new developments in how both threat actors and defenders are using tools like ChatGPT. While AI models are introducing new risks, in some places they’re able to provide much needed assistance to help keep users safe.

Web page
Read our Cyberhaven for Gen AI overview
Learn more
Research
Read the ChatGPT at Work report
Download now