While the use of generative AI platforms has driven increased productivity and innovation, it has also raised real concerns, especially around cybersecurity and data protection. Security and IT teams need to find ways to protect their organizations from data loss and evasive phishing attacks while still enabling the use of generative AI.
When OpenAI first released ChatGPT, it became the fastest-growing consumer application in history, amassing over 100 million users in just two months. By November 2023, OpenAI announced that ChatGPT had 100 million weekly active users. ChatGPT continues to capture global attention, and it’s just one of many generative AI platforms being used daily.
Generative AI platforms are transforming the way people work by improving content, facilitating brainstorming, offloading mundane tasks, and so much more. These capabilities create opportunities for organizations to increase productivity and the quality of their work.
However, the use of generative AI platforms and chatbots like ChatGPT has a significant impact on cybersecurity as well, raising questions about both the privacy and security of data and the threat of phishing attacks. Generative AI security risks are on the rise.
Many people are understandably nervous about generative AI tools because of their ability to help threat actors develop evasive threats at greater scale and speed. Platforms like ChatGPT have also lowered the barriers for hackers to launch more sophisticated and effective phishing attacks. However, while cybersecurity experts are right to warn the world about the risks ChatGPT poses to organizations and individuals, they may be overlooking a more immediate concern with these generative AI platforms and chatbots: the potential loss of proprietary data or other intellectual property (IP).
When employees use generative AI tools at work, they might unintentionally share and expose sensitive company data. This data might include customer data, trade secrets, classified information, and intellectual property.
In a recent report, Menlo Security analyzed how often employees attempted to input sensitive and confidential information into generative AI platforms. Over a 30-day period, 55% of data loss prevention (DLP) events involved personally identifiable information. Because these organizations had Menlo Security in place, those attempts were blocked.
With generative AI, sensitive and private data has the potential for much greater exposure than in other data loss scenarios, such as a breach or inappropriate sharing. That’s because generative AI platforms use chat histories and other inputs to train and improve their models, which means that any data entered could later be exposed to other users.
A real-world example:
It was recently reported that a group of engineers from Samsung’s semiconductor group entered source code into ChatGPT to see if the code for a new capability could be made more efficient. Because ChatGPT and other generative AI tools retain input data to further train themselves, the Samsung source code that was entered can now be used by the platform to formulate responses to queries from other users, including a threat actor looking for vulnerabilities or a competitor looking for proprietary information.
To protect their organizations, some companies have outright banned generative AI sites because of these security risks. Italy’s data protection authority went so far as to ban ChatGPT nationwide over data privacy concerns, though service was restored after about a month. While blocking access to generative AI services may seem like a solution to the potential security risks, it’s a quick fix, not a long-term strategy. ChatGPT and the myriad other generative AI platforms are powerful business tools that people can use to streamline business processes, automate tedious tasks, or get a head start on a writing, design, or coding project. Blocking these sites also blocks productivity and business agility.
The goal should instead be secure generative AI: enabling applications such as content generation, image synthesis, and data augmentation while ensuring protection against adversarial attacks and data manipulation.
Unfortunately, traditional DLP, cloud access security broker (CASB), and other insider threat solutions are not equipped to handle the nuances of these new AI technologies. Organizations need a layered approach rather than a one-size-fits-all solution.
In combination with DLP, organizations can limit what can be pasted into input fields, for example by restricting character counts or blocking content that looks like known code. No one is going to manually type in thousands of lines of source code, so limiting paste functions effectively prevents this type of data exposure, and it may make users think twice about the information they are trying to enter. A sketch of this kind of paste control follows below.
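As a rough illustration, the following TypeScript sketch shows how a browser-extension content script could enforce such paste controls. The 1,000-character limit, the code-detection regex, and the assumption that this runs as a content script on generative AI sites are all illustrative choices, not a description of any specific product’s implementation.

```typescript
// Minimal sketch of a paste-control content script (illustrative only).
// Assumes it runs as a browser-extension content script on generative AI
// sites; the character limit and code-detection regex are hypothetical.

const MAX_PASTE_CHARS = 1000;

// Crude heuristic for source code: braces, semicolons, or common keywords.
const CODE_PATTERN = /(\bfunction\b|\bclass\b|\bimport\b|[{};]\s*$)/m;

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text") ?? "";

    if (pasted.length > MAX_PASTE_CHARS) {
      event.preventDefault(); // Block oversized pastes outright.
      console.warn(`Paste blocked: ${pasted.length} characters exceeds limit.`);
      return;
    }

    if (CODE_PATTERN.test(pasted)) {
      event.preventDefault(); // Block content that resembles source code.
      console.warn("Paste blocked: content resembles source code.");
    }
  },
  true // Capture phase, so the check runs before the page's own handlers.
);
```

In practice, a real deployment would pair a client-side check like this with server-side DLP inspection, since client-side controls alone can be bypassed.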
Organizations should also apply security policies that trigger additional generative AI security controls — such as event logging or initiating a browser recording — to aid in resolution and post-event analysis. It’s important to remember that investigations into breaches caused by insiders must provide proof of intent. Recording events and browser sessions could provide visibility and insight into whether users were malicious or simply negligent.
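To make the idea of policy-triggered controls concrete, here is a hypothetical sketch, again in TypeScript, of what such a rule might look like. The schema, domain list, and action names are invented for illustration and do not represent any vendor’s actual policy format.

```typescript
// Illustrative sketch of a policy rule that escalates generative AI DLP
// events to logging and session recording. The schema, domains, and action
// names below are hypothetical, not any vendor's real policy format.

interface GenAiPolicy {
  matchDomains: string[];                              // Sites the rule covers
  onDlpEvent: Array<"log" | "recordSession" | "notifySoc">; // Escalation steps
}

const policy: GenAiPolicy = {
  matchDomains: ["chat.openai.com", "gemini.google.com"],
  onDlpEvent: ["log", "recordSession", "notifySoc"],
};

function handleDlpEvent(domain: string, userId: string): void {
  if (!policy.matchDomains.includes(domain)) return;

  for (const action of policy.onDlpEvent) {
    switch (action) {
      case "log":
        console.log(
          `DLP event: user=${userId} site=${domain} at ${new Date().toISOString()}`
        );
        break;
      case "recordSession":
        // Placeholder: a real deployment would start a browser-session
        // recording here for post-event analysis.
        console.log(`Session recording started for user=${userId}`);
        break;
      case "notifySoc":
        // Placeholder: forward the event to the SOC's alerting pipeline.
        console.log(`SOC notified of event for user=${userId}`);
        break;
    }
  }
}
```

Capturing both the event log and the session recording is what later lets investigators distinguish malicious intent from simple negligence.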
Menlo Security Last-Mile Data Protection provides reliable inspection of web file uploads and user input for every browsing session. This protection stops employees from uploading sensitive files or inputting trade secrets and other sensitive information into generative AI solutions.
Copy-and-paste control and character limits can be layered in as additional security steps to stop large amounts of data leakage. These controls prevent sensitive data from being exposed to an external site where it can be misused.
In addition, Menlo Browsing Forensics enables investigation teams to replay recorded web sessions, including mouse clicks and data entry, as they occurred, to understand the intent and impact of end-user actions. Each recorded session has a Menlo Forensics Log entry that includes supporting data for the event and one-click access to the recording. Recorded sessions are transferred to a customer-defined location for secure, access-controlled storage.