
AI companion or cybersecurity nightmare? Is ChatGPT safe to use?


1 July 2023

Is ChatGPT safe to use? The cybersecurity risks everyone needs to know about

Today we’re going to address 2023’s robotic elephant in the room.

No doubt, artificial intelligence (OpenAI, anyone?) is mind-blowing. It can do some amazing stuff. At CybSafe, we’re big fans of AI and the potential it holds.

But, as always, with great power comes great responsibility. And, when it comes to ChatGPT, well, let’s just say there’s a lot that can go wrong if you’re not careful.

Case in point: the (unsurprisingly widely publicized) ChatGPT outage on March 20 of this year, which exposed users’ interactions, names, and even “payment-related information”. So… yeah. It’s not exactly what you’d describe as ‘safe’ yet. Not by a long shot.

So, let’s dive into the murky waters of cybersecurity risks that come with using ChatGPT. From stolen identities to leaked trade secrets, we’ll take a look at some real-life examples of how using ChatGPT can put you and your organization at risk. 

And, of course, we’ll give you some pro tips on how to keep your AI companion in check. If you want to stay on top of new security risks, check out our 2023 Security Awareness Predictions Report.


Enough preamble. On to the cybersecurity risks of ChatGPT. 

Spoiler: there are lots

What went down?

Well, ChatGPT did. Or rather it was taken down on March 20, following the discovery of a bug in an open-source library. 

The company said the bug “may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window.”

Before it was taken offline, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits of a credit card number, and credit card expiration date. Ouch.

Impacted customers were notified their payment information may have been exposed. OpenAI said data could have been accessed in two ways “during a specific nine-hour window”. Yep. Nine… hours. Ouch again.
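For the technically curious, bugs like this usually boil down to one user’s response or cached data being handed to another. Below is a minimal, hypothetical Python sketch of that failure mode. To be clear: this is not OpenAI’s actual code (the real bug was in an open-source library’s connection handling); it just shows how a carelessly keyed shared cache can leak data between users.

```python
# Hypothetical sketch only: NOT OpenAI's code. It shows how a shared
# cache keyed on the wrong value can serve one user's data to another.

cache = {}  # shared across every user of the service

def get_billing_summary(user_id: str, request_id: str) -> str:
    # BUG: the cache key ignores user_id, so a reused request_id
    # returns whatever the previous user cached under it.
    key = request_id
    if key not in cache:
        cache[key] = f"billing data for {user_id}"
    return cache[key]

print(get_billing_summary("alice", "req-1"))  # billing data for alice
print(get_billing_summary("bob", "req-1"))    # alice's data again: leaked!

# The fix is boring but vital: make the user part of the key.
def get_billing_summary_fixed(user_id: str, request_id: str) -> str:
    key = (user_id, request_id)
    if key not in cache:
        cache[key] = f"billing data for {user_id}"
    return cache[key]
```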

OpenAI CEO Sam Altman was suitably groveling in his apology, but the fact remains that it happened, and it was quite a big deal. 

Such a big deal, in fact, that Italy’s privacy watchdog then went on to ban ChatGPT, after raising concerns about the data breach and the legal basis for using personal data to train it.

ChatGPT is by no means alone here (lest we forget, both Google Bard and Bing Chat are hot on its heels), and this isn’t a ChatGPT issue; it’s a wider AI issue. Just ask Elon Musk, Apple cofounder Steve Wozniak and over 1,100 fellow signatories, who put their names to an open letter calling for a six-month pause on the development of powerful AI.

If that doesn’t give you pause, we’re not sure what will. Gulp.

But that “minor” hiccup and the open-letter-that-made-news-all-over-the-world aside, how safe is ChatGPT to use in the real world? We’re going to use some hypothetical examples to take you through it. Buckle up.

Financial reporting

Accountant Dave wanted to use ChatGPT for financial reporting. He was a busy guy and he thought it would save him time and effort. 

But what Dave didn’t consider was that he was giving ChatGPT access to his confidential financial information. Had that data fallen into the wrong hands, the organization’s reputation would have taken a massive hit.

Yay: ChatGPT can ace your financial reporting

The world of financial reporting is super complex and super time-consuming. That’s why ChatGPT promises to assist, providing in-depth analysis and reporting in a fraction of the time.

Yikes: ChatGPT can access confidential financial deets

But, with all that confidential financial information being processed by AI, there comes the risk of data breaches and hacks. If ChatGPT falls into the wrong hands, your confidential financial information could be exposed, and that could be disastrous for your organization. 

When it comes to financial reporting, we say it’s best to limit access to ChatGPT. Better safe than sorry.
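So what does “limiting access” look like in practice? One sensible habit is a pre-flight redaction step: strip the obvious financial identifiers before anything leaves your machine. Here’s a hypothetical Python sketch; the patterns are illustrative and nowhere near a full data-loss-prevention setup.

```python
import re

# Hypothetical pre-flight check: redact obvious financial identifiers
# before any text is sent to an external AI service. A sketch, not a
# complete DLP solution; the patterns here are illustrative.

PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "amount": re.compile(r"[$£€]\s?\d[\d,]*(?:\.\d{2})?"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize Q3: revenue was $4,200,000, paid from card 4111 1111 1111 1111."
print(redact(prompt))
# Summarize Q3: revenue was [REDACTED amount], paid from card [REDACTED card number].
```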

Need to know more on this? We’ve got your back. Because (joyously) security awareness training isn’t just for the internet; it covers modern tech like AI, too.


Copy editing

Meet Juanita, a content writer who was always on a tight deadline. She relied on ChatGPT to help her with copyediting, and it did not disappoint. Her writing was clearer, more concise, and free of errors.

But little did she know that ChatGPT was also learning her writing style. It began to write content that was indistinguishable from her own, which put her job at risk.

Yay: ChatGPT can edit and refine your writing

Let’s face it, we could all use a little help with our writing from time to time. That’s where ChatGPT comes in. It can copy edit and improve your writing, making sure it’s clear, concise, and error-free.

Yikes: ChatGPT knows how to write just like you

But by analyzing your writing style, ChatGPT can create content that’s indistinguishable from your own. And if you’re not careful, that could lead to all sorts of problems, from plagiarism to identity theft.

Customer service

Meet Ahmed, a customer service agent who was thrilled with the chatbot that could answer customer queries quickly and efficiently. It helped him focus on more important tasks, and it also made customers happier with faster responses.

But if one day ChatGPT malfunctioned and spilled sensitive customer data (just like it did on March 20), Ahmed would not only be devastated; the organization could face major litigation. The incident would damage his organization’s reputation, and Ahmed’s job could be on the line.

Yay: ChatGPT can provide instant help to customers

Customer service is a tricky area to get right, and AI can pitch in with client care. Chatbots, and ChatGPT in particular, can provide instant customer service. With the ability to understand complex queries and respond in seconds, it can be a game-changer.

Yikes: ChatGPT can spill personal customer information

But, and this is a big BUT, chatbots, including ChatGPT, could also expose sensitive customer data. Imagine if ChatGPT gets hacked or accessed by someone unauthorized. All that customer data could fall into the wrong hands, leaving you with a major headache.

CV writing

Meet Malik, a young graduate who was struggling to write his CV. He was thrilled when he found ChatGPT, which wrote his CV in a snap. He felt he finally had a chance to compete with other job seekers.

But what Malik didn’t realize was that ChatGPT was also collecting his personal data.

Yay: ChatGPT can write your CV for you

Let’s be honest, nobody likes writing their CV. ChatGPT can save hours of tedious work.

Yikes: ChatGPT can suck up all your personal deets

You’re feeding it your name, address, contact details, and work experience. That means you’re essentially handing over an identity theft starter kit (deluxe edition).

Pro tip here is to use ChatGPT sparingly. Don’t feed it anything you wouldn’t want to appear in the public domain.

Code reviews

Meet Kim, a programmer who was under pressure to deliver a codebase that was both robust and efficient. She thought ChatGPT was the perfect tool to help her with code reviews. It was fast, and saved her a lot of time.

ChatGPT would learn from her code, too. It could then use her code to improve its own algorithms.

Yay: ChatGPT can check and beef up your code

ChatGPT can review and improve your code. And code that’s optimized, streamlined, and bug-free is a win for you and your organization.

In fact, low-code and no-code platforms are already helping business users build their own software applications, with little or no knowledge of coding. Clever.

Yikes: ChatGPT can use your code to beef up its own

But, did you know that it can also use your code to improve its own coding capabilities? By analyzing your input, ChatGPT can level up its algorithms… and become even more powerful. And not necessarily in a helpful way.

And this is just as relevant to phishing emails. As in… if you feed it the ‘right’ stuff, ChatGPT is more than capable of writing a convincing phishing email, or even malicious code.

Similarly, low-code and no-code platforms are heading for a tipping point. As generative AI is added to them, it will supercharge their use among business users who are not expert programmers.

But whether we’re talking general coding or phishing emails, it’s worth remembering that ChatGPT sometimes makes mistakes. Unclear prompts often take the blame, but the potential for mistakes is there nonetheless.

These can introduce bugs to your site, or perhaps even take it down altogether. 

It’s very much within the realms of possibility (as in it’s pretty much on the cards) that ChatGPT is going to evolve massively. So don’t get caught out. 

Word to the wise: Keep your code to yourself. If you must share it, make sure it’s with someone you trust, and keep an eye out for any strange behavior from ChatGPT.
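One practical way to keep that eye out: scan your code for likely secrets before you paste it anywhere. Below is a rough, hypothetical Python sketch. Real secret scanners (gitleaks and trufflehog, for example) use far more thorough rule sets, so treat this as a starting point, not a safety net.

```python
import re
import sys

# Hypothetical pre-share check: scan a source file for likely secrets
# before pasting it into any AI assistant. The patterns are a small,
# illustrative sample, not an exhaustive rule set.

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
]

def scan(path: str) -> list[tuple[int, str]]:
    hits = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # Usage: python scan_secrets.py path/to/file.py
    for lineno, line in scan(sys.argv[1]):
        print(f"line {lineno}: possible secret -> {line}")
```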

Legal documents

Meet Jasmine, a lawyer working on a high-stakes case. She needed to draft a legal document quickly, and decided to use ChatGPT. 

But Jasmine could have unwittingly revealed a confidential legal strategy. Had she done that, she could have lost both her case and her job. Oh, and the strategy could have ended up in the public domain, as a special bonus.

Yay: ChatGPT can assist with legal docs

Drafting legal documents is a tricky business. You need to ensure everything is accurate, watertight and confidential. ChatGPT can help draft legal documents in a fraction of the time.

Yikes: ChatGPT can let slip confidential legal tactics

But with that comes the danger of accidentally revealing confidential legal strategies. One slip of the keyboard and your strategy could be out in the open for all to see.

Training

Meet Asha, a human resources manager in charge of training. She relied on ChatGPT to create interactive, personalized training for new hires.

But she’s since discovered the AI could peek at people’s private work and violate their privacy. That’s not very HR-friendly.

Yay: ChatGPT can help train and develop people

Training and developing people is crucial for any organization. ChatGPT can help with that.

Yikes: ChatGPT can peek at their private work and behavior

But, ChatGPT also has access to that data. If it falls into the wrong hands, your people’s sensitive data could be exposed. And that’s not even mentioning the potential for bias in the training provided by an AI.

Translation

Meet Jia, a translator who works for a multinational corporation. She thought ChatGPT was her perfect solution for translating all kinds of documents. She was happy that she could finally focus on other tasks instead of spending long hours translating.

But one day, she accidentally uploaded a highly confidential document, which included sensitive trade secrets. ChatGPT could have automatically saved that document, and who knows where those secrets would have ended up.

Yay: ChatGPT translates all your important info for you

Sure, ChatGPT can quickly and easily translate important documents. It’s fast, efficient, and can help your organization expand its reach in ways you never thought possible.

Yikes: ChatGPT can stash all your top-secret info

But, as it does so, it stores that information. ChatGPT has access to everything you feed it. And if you’re not careful, that could mean giving away trade secrets or confidential information without even realizing it.
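A simple safeguard here is an upload gate: check a document’s classification label before it goes anywhere near an external service. Here’s a hypothetical Python sketch; the labels and the filename-based check are illustrative stand-ins for whatever classification scheme your organization actually uses.

```python
# Hypothetical upload gate: refuse to send documents marked confidential
# to an external translation service. Labels and the filename-based
# check are illustrative; wire this to your real classification system.

BLOCKED_LABELS = {"confidential", "restricted", "trade-secret"}

def classification_of(path: str) -> str:
    # Stand-in: many organizations embed labels in file metadata or
    # naming conventions. Here we simply look at the filename.
    name = path.lower()
    return next((label for label in BLOCKED_LABELS if label in name), "public")

def safe_to_translate(path: str) -> bool:
    return classification_of(path) not in BLOCKED_LABELS

print(safe_to_translate("q3-earnings-confidential.docx"))  # False
print(safe_to_translate("press-release.docx"))             # True
```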

We asked ChatGPT for its comment

We believe in a fair world, so we wanted to give ChatGPT the right of reply. 

And we asked for its reply in poem form. Because… why not?

And, well, you can see for yourself…

Potential risks, we must be aware

ChatGPT, a machine that’s smart and rare

Translation, CV writing, and code reviews

All can be helpful, but may lead to bad news

Copy editing and customer service too

Sensitive data, they may expose to view

Legal documents, and financial reporting

ChatGPT’s assistance, we must be monitoring

Employee training, a potential bias

Data breaches, we can’t be oblivious

Great power, comes with great responsibility

Let’s use ChatGPT, with caution and ability.

So, on the one hand, artificial intelligence (and yes, that means any AI chatbot, Google Bard and Bing Chat included) and OpenAI as a company are all very cool. On the other hand, it’s kind of scary.

But, hey, isn’t that a bit like life? Full of contradictions and uncertainties? If you’re reading this, you’ve handled that mix alright so far.

Just remember…

Is ChatGPT safe to use? Yes. If you use your brain alongside it. And as long as you remember that AI presents potential pitfalls and risks.

Which is why a focus on security culture is so crucial for building a strong defense against cyber threats.

While AI may have its strengths, it’s important to remember humans will always have the advantage when it comes to creativity, empathy, and ethical decision-making. So embrace your role in optimizing people power by shaping a secure and resilient cyber environment.

If you want to stay ahead of the curve and be prepared for the future of cybersecurity, we highly recommend checking out CybSafe’s 2023 Security Awareness Predictions Report.

Trust us, it’s AI-mazing!
