
The opportunities and risks of ChatGPT in cybersecurity

Mark Guntrip | June 4, 2023

A lot of ink has been spilled over the past several months about the impact generative artificial intelligence (AI) platforms and chatbots like ChatGPT will have on cybersecurity. Many people are, rightly, nervous that these tools will allow anyone with an Internet connection and a malicious motive to develop evasive threats at an alarming scale. Imagine being able to produce and release thousands of individually targeted malware variants, phishing emails and other threats within minutes, at the click of a button. It’s a truly scary thought.

However, while cybersecurity experts are right to warn the world about the risk ChatGPT poses to organizations and individuals, they are missing perhaps the most terrifying aspect of these generative AI platforms and chatbots, and their real impact on an organization’s security posture: the potential loss of proprietary data and other intellectual property (IP).

Samsung loses top secret data to ChatGPT

According to Tessian, there’s been a 47% increase in accidental data loss and deliberate data exfiltration by negligent or disgruntled employees over the last few years. As ChatGPT and other generative AI platforms and chatbots make it easier than ever to inadvertently expose proprietary data and IP, organizations are going to have to address this growing security risk sooner rather than later.

Some companies, including Samsung, have learned this the hard way. It was recently reported that a group of engineers from the company’s semiconductor group entered source code into ChatGPT to see if the code for a new capability the company was developing could be made more efficient. ChatGPT and other generative AI tools retain input data to further train their models, so the Samsung source code can now be used to formulate responses to other users’ requests – including those of a threat actor looking for vulnerabilities or a competitor looking for proprietary information.

It’s not just source code that companies need to be careful about. In another instance, a Samsung executive used ChatGPT to convert notes from an internal meeting into a presentation. What if an enterprising executive from a competing company later asked ChatGPT about Samsung’s business strategy? Information from those internal meeting notes could be used to formulate a response – effectively putting Samsung’s data at risk.

And it’s not just source material pasted into input fields that creates risk. The actual phrasing of requests can reveal competitive information as well. What if a CEO asks ChatGPT for a list of potential acquisition targets? Could that inform another user’s question about the company’s growth strategy? Or what if a designer uploads the company logo into an AI image generator to get ideas for a possible redesign? Technically, that logo can then be used to generate logos for other users.

What’s crazy is that thousands of employees at companies around the world have entered proprietary information into ChatGPT and other generative AI platforms in an effort to streamline manual, tedious tasks. The ability to use AI and machine learning (ML) to develop a first draft of code or of documents such as marketing material, sales presentations and business plans is extremely helpful – and tempting. But companies can’t simply block ChatGPT and other generative AI platforms. They are legitimate tools that are becoming ubiquitous in today’s business environment, and forgoing them could be seen as an inhibitor to agility and a competitive disadvantage. In fact, the Italian government recently had to walk back a country-wide ban on ChatGPT after backlash from business users who felt it put them at a disadvantage.

Businesses are going to have to find a way to let employees use the growing number of generative AI platforms and chatbots without putting the organization at risk.

A lack of security controls

Unfortunately, existing data loss prevention (DLP), cloud access security broker (CASB) and other insider threat solutions are ill equipped to deal with the nuances of this new technology. Still taking a detect-and-respond approach, these solutions look for keywords or phrases in the enormous volume of traffic flowing out of the organization. Those keywords often have to be entered manually by security professionals and product owners – making it nearly impossible to catch everything. And even if a solution detects data exfiltration, it may already be too late. Once the information has been entered, there’s no ‘redo’ button to take it back. Your information lives inside the generative AI platform forever and will continue to inform responses.
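To see why manual keyword lists are brittle, consider a minimal sketch of the matching approach these tools take. Every pattern below is an invented example (“project atlas” is a hypothetical codename, not a real rule); a lightly reworded leak sails straight past it.

```typescript
// Sketch of a manual keyword/pattern list, as described above.
// All patterns here are illustrative assumptions, not real DLP rules.
const DLP_PATTERNS: RegExp[] = [
  /\bconfidential\b/i,
  /\bproject\s+atlas\b/i, // hypothetical internal codename
  /-----BEGIN (RSA )?PRIVATE KEY-----/,
];

// Flag outbound text if any pattern matches.
function flagsOutboundText(body: string): boolean {
  return DLP_PATTERNS.some((pattern) => pattern.test(body));
}

console.log(flagsOutboundText("Attaching the CONFIDENTIAL roadmap deck")); // true
console.log(flagsOutboundText("Here are the Q3 numbers for our unannounced chip")); // false – missed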

Organizations need to prevent information from being entered into these generative AI platforms and chatbots – but in a way that doesn’t inhibit employees’ use of these helpful tools. They can do this by limiting what can be pasted into input fields – restricting character counts or blocking known code, for example. No one is going to manually type in thousands of lines of source code, so limiting paste functions effectively prevents this type of data loss. It would also make users think twice about the information they are trying to enter.
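As an illustration, here is a minimal sketch of such a paste control, written as a browser-extension content script. The 1,000-character threshold and the watched hostname are assumptions chosen for the example, not recommended settings or any product’s behavior.

```typescript
// Content-script sketch: cap paste size on generative AI sites.
const PASTE_LIMIT = 1_000; // assumed threshold for the example
const WATCHED_HOSTS = ["chat.openai.com"]; // hypothetical watch list

if (WATCHED_HOSTS.includes(window.location.hostname)) {
  document.addEventListener(
    "paste",
    (event: ClipboardEvent) => {
      const text = event.clipboardData?.getData("text/plain") ?? "";
      if (text.length > PASTE_LIMIT) {
        // Block the oversized paste before the page ever sees it.
        event.preventDefault();
        alert(`Pastes over ${PASTE_LIMIT} characters are blocked here by policy.`);
      }
    },
    true // capture phase, so this runs before the page's own handlers
  );
}
```

Registering the listener in the capture phase is the key design choice here: the block takes effect before the site’s own scripts can read the clipboard contents.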

Most importantly, however, organizations should move interaction with ChatGPT and other generative AI platforms away from the local browser. Executing app commands in a remote browser in the cloud puts an extra layer of protection between the user and the Internet, giving the organization an opportunity to stop malicious activity (whether purposeful or not) before data exfiltration occurs. Security teams can also apply policies that trigger additional controls – such as event logging or initiating a browser session recording – to aid in resolution and post-event analysis. It’s important to remember that investigations into breaches caused by insiders must provide proof of intent. Recorded events and browser sessions can provide visibility and insight into whether users were malicious or merely negligent.
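To make the policy idea concrete, here is a hedged sketch of what such a rule might look like. The type names, hostnames and action list are illustrative assumptions, not any vendor’s actual API or configuration format.

```typescript
// Sketch of an isolation policy: match generative AI hosts, trigger controls.
type PolicyAction = "log_event" | "record_session" | "restrict_paste";

interface IsolationPolicy {
  matchHosts: string[];    // sites the policy applies to
  actions: PolicyAction[]; // controls triggered on a match
}

const genAiPolicy: IsolationPolicy = {
  matchHosts: ["chat.openai.com", "bard.google.com"],
  actions: ["log_event", "record_session", "restrict_paste"],
};

// Resolve the controls to apply when a user browses to a given host.
function actionsFor(hostname: string, policies: IsolationPolicy[]): PolicyAction[] {
  return policies
    .filter((policy) => policy.matchHosts.includes(hostname))
    .flatMap((policy) => policy.actions);
}

console.log(actionsFor("chat.openai.com", [genAiPolicy]));
// -> ["log_event", "record_session", "restrict_paste"]
```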

Enable ChatGPT without putting the organization at risk

ChatGPT and the myriad other generative AI platforms are powerful business tools that people can use to streamline business processes, automate tedious tasks or get a head start on a writing, design or coding project. Unfortunately, the information users put into these platforms – and the phrasing of the requests themselves – can be used to inform responses to future requests, including those from threat actors and competitors. A preventative approach that isolates users from the Internet can augment existing detection capabilities and provide a first line of defense against this type of large-scale data loss. Knowing they are protected, your employees would have near-free rein to leverage these innovative new tools, improve productivity and gain business agility.
