How do I secure generative AI?

Secure access to generative AI

While the use of generative AI platforms has boosted productivity and innovation, it has also raised real concerns – especially when it comes to cybersecurity. Security and IT teams need to find ways to protect their organizations from data loss and evasive phishing attacks while still empowering the use of generative AI.

The opportunities and risks of generative AI

Since its release in November 2022, ChatGPT has become one of the fastest-growing platforms in history, amassing over 100 million users in just two months. In comparison, TikTok took nine months and Instagram took 2.5 years to reach the same number of users. ChatGPT has captured attention globally, and it's just one of many generative AI platforms being used daily. These platforms are transforming the way people work – improving content, helping with inspiration and brainstorming, offloading mundane tasks and much more. This creates significant opportunities for organizations to increase both productivity and the quality of their work.

However, the use of generative artificial intelligence (AI) platforms and chatbots like ChatGPT has a significant impact on cybersecurity as well – raising questions around both the privacy and security of data and the threat of phishing attacks.

The threat of phishing attacks

Many people are understandably nervous that these tools will allow threat actors to develop evasive threats at alarming scale. Platforms like ChatGPT have also lowered the barrier for hackers to launch more sophisticated and effective phishing attacks. However, while cybersecurity experts are right to warn the world about the risk ChatGPT poses to organizations and individuals, they may be missing the more immediate danger of these generative AI platforms and chatbots – the potential loss of proprietary data or other intellectual property (IP).

The risk of data loss

As employees use generative AI tools such as ChatGPT and Bard, they may be sharing and exposing sensitive company data – customer data, trade secrets, classified information, even intellectual property. With generative AI, private data has the potential to reach a much wider audience than through typical data loss avenues. ChatGPT and other generative AI platforms save data, such as chat history, to train and improve their models. That means any data entered could be used to train the models and potentially be exposed later to other users.

Real-world example:

It was recently reported that a group of engineers from Samsung's semiconductor group pasted source code into ChatGPT to see if the code for a new capability the company was developing could be made more efficient. Because ChatGPT and other generative AI tools retain input data to further train their models, the Samsung source code can now be used to formulate responses to requests from other users – including a threat actor looking for vulnerabilities or a competitor looking for proprietary information.

To protect their organizations, some companies have banned generative AI sites outright. Italy briefly banned ChatGPT for the whole country over concerns around data privacy – service was restored after about a month. While blocking access to generative AI services may seem like a solution to the potential security risks, it's a quick fix rather than a long-term solution. ChatGPT and the myriad other generative AI platforms are powerful business tools that people can use to streamline business processes, automate tedious tasks or get a head start on a writing, design or coding project. Blocking these sites also blocks productivity and business agility.

Securely enable generative AI

Unfortunately, existing data loss prevention (DLP), cloud access security broker (CASB) and other insider threat solutions are not enough to deal with the nuances of this new technology. Organizations need a layered approach rather than a one-size-fits-all solution.

In combination with DLP, organizations can limit what can be pasted into input fields – for example, by restricting character counts or blocking known source code. No one is going to manually type in thousands of lines of source code, so limiting paste functionality effectively prevents this type of data loss. It also makes users think twice about the information they are trying to input.
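As a rough illustration, the sketch below shows what such a client-side paste guard might look like, written as a browser content script. The character limit, the code-detection patterns and the idea of running it on generative AI pages are all illustrative assumptions, not Menlo Security product behavior:

```typescript
// Minimal sketch of a paste guard, assuming it runs as a browser extension
// content script on generative AI pages. Thresholds and patterns below are
// illustrative assumptions only.

const MAX_PASTE_CHARS = 1000; // assumed policy limit on paste size

// Rough heuristics for spotting source code in pasted text.
const CODE_PATTERNS: RegExp[] = [
  /\b(function|class|import|def|public|private)\b/,
  /[{};]\s*\n/,
];

function looksLikeCode(text: string): boolean {
  return CODE_PATTERNS.some((pattern) => pattern.test(text));
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const text = event.clipboardData?.getData("text/plain") ?? "";
    if (text.length > MAX_PASTE_CHARS || looksLikeCode(text)) {
      event.preventDefault(); // block the paste entirely
      event.stopPropagation();
      console.warn("Paste blocked by DLP policy: size limit or code pattern.");
    }
  },
  true // capture phase, so the page's own handlers never see the event
);
```

A real control would enforce this outside the page (for example, in an isolated browsing session) so a user cannot simply disable the script, but the policy logic follows the same shape.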

Organizations should also apply security policies that trigger additional security controls – such as event logging or initiating a browser session recording – to aid in resolution and post-event analysis. It's important to remember that investigations into breaches caused by insiders must provide proof of intent. Recording events and browser sessions can provide visibility and insight into whether users were malicious or just negligent.
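The sketch below illustrates what policy-triggered escalation might look like. The logging endpoint and the startSessionRecording() hook are hypothetical placeholders for whatever SIEM and session-recording integrations an organization actually uses:

```typescript
// Minimal sketch of policy-triggered logging and recording. The endpoint URL
// and startSessionRecording() are hypothetical, not a real product API.

interface PasteViolation {
  user: string;
  site: string;
  blocked: boolean;
  chars: number;
  timestamp: string;
}

async function logViolation(evt: PasteViolation): Promise<void> {
  // Ship the event to a SIEM or audit store for post-event analysis.
  await fetch("https://siem.example.com/api/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(evt),
  });
}

function onPolicyViolation(user: string, site: string, chars: number): void {
  void logViolation({
    user,
    site,
    blocked: true,
    chars,
    timestamp: new Date().toISOString(),
  });
  // Escalate: record the browser session so investigators can later replay
  // it and judge whether the user was malicious or merely negligent.
  startSessionRecording(user, site);
}

// Placeholder for the session-recording integration point.
function startSessionRecording(user: string, site: string): void {
  console.info(`Session recording started for ${user} on ${site}`);
}
```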

Why Menlo?

Menlo Security provides reliable inspection of web file uploads and user input for both isolated and non-isolated browsing sessions, stopping employees from uploading sensitive files, inputting trade secrets and more into generative AI solutions. Coupled with Copy & Paste Controls, Menlo Security protects sensitive data from being exposed to an external site where it can be misused, while still allowing users to copy results from sites. Browser Forensics enables security teams to “play back” actions – such as mouse clicks and data entry – as they occurred within a web session to understand the intent and impact of end user actions.

Make the secure way to work the only way to work.

To talk to a Menlo Security expert, complete the form, or call us at (650) 695-0695.