
ChatGPT one year later: Challenges and learnings

Menlo Security | December 5, 2023

Last Thursday marked the one-year anniversary of the launch of ChatGPT. The first publicly available generative artificial intelligence (GenAI) tool immediately sparked immense interest in artificial intelligence (AI) and machine learning (ML) and forever transformed how we work. But despite its rapid adoption, the technology quickly ran into problems, forcing organizations to reconsider whether users could safely use GenAI tools.

In one famous example, a Samsung engineer pasted internal source code into ChatGPT in an effort to identify errors. The engineer got the code fixed, but in doing so risked sharing sensitive engineering data with competitors, whether through the model's training data or through answers served to other users. And because the Internet is permanent, Samsung may never be able to fully erase that data from these models, even with the cooperation of ChatGPT's owners.

What we’re seeing in the world of GenAI risks and rewards

One year later, it's clear that the GenAI landscape remains in constant flux, and organizations need a way to safely integrate these widely used large language models without putting data and sensitive information at risk. Here are five trends around GenAI tools that we're seeing and how they are impacting organizations around the world:

1. Diversifying the GenAI toolbox beyond ChatGPT

Everyone knows ChatGPT, but dozens of other publicly available GenAI tools have hit the market in the past year. Developers can use GitHub Copilot, PolyCoder, and Cogram to generate code, while content creators can turn to DreamFusion, Jukebox, NeuralTalk2, and Pictory to generate media. No matter what you do, there's a GenAI tool to help you work more efficiently.

2. Frequent use: Unveiling GenAI’s stickiness metric

While the surge in usage that ChatGPT and other GenAI tools saw at launch has cooled, these platforms are still used frequently. A recent report shows that users visit GenAI platforms an average of 32 times per month – an impressive stickiness metric. There is little doubt that this loyalty will drive future adoption and growth in the world of AI and large language models.

3. Balancing productivity and concerns

ChatGPT and other GenAI tools are game-changing technology, forever altering the way people work, and over the past year countless news articles have highlighted how they have made businesses more productive.

However, ChatGPT and its GenAI counterparts are not without dilemmas. Ethical AI concerns, privacy issues, and turmoil at OpenAI, the maker of ChatGPT, have clouded perceptions. As these powerful tools deliver productivity gains, organizations must strike a delicate balance between those positive impacts and the ethical considerations that come with them. Safeguarding against the misuse of training data and the potential exposure of sensitive data requires a thoughtful approach.

4. Addressing security concerns

Samsung wasn't the only organization to raise concerns about the risks GenAI poses to its business. Many other organizations and governments have decided to restrict or ban the use of GenAI tools.

Limiting the use of a powerful productivity tool, however, is likely to become a competitive disadvantage. A nuanced strategy is essential, one that balances productivity gains against security risks. With such a strategy, organizations can enable safe and ethical use of GenAI tools in the workplace.

5. Still no clear guidance on securing GenAI tools for the future

The issue is that organizations still lack clear guidance on how to use GenAI tools in a safe, secure, and ethical manner. Existing acceptable use, privacy, and security policies need to be amended to reflect the new reality. That means educating users, improving data loss prevention (DLP) policies, and gaining more visibility into and control over how users interact with public GenAI tools.
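To make that last point concrete, here is a minimal, hypothetical sketch of the kind of DLP-style prompt check an organization might layer in front of a public GenAI tool. The patterns, hostnames, and function names below are illustrative assumptions, not a description of any particular product's implementation.

```python
import re

# Hypothetical patterns an organization might flag before a prompt is
# allowed to leave the corporate network for a public GenAI endpoint.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.example\.com\b"),  # example internal domain
    "source_code": re.compile(r"(def |class |#include |public static void)"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Please review this function: def rotate_keys(): api_key = 'sk-123abc'"
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked: prompt matched DLP rules {findings}")
    else:
        print("Prompt allowed")
```

In practice, pattern matching like this is only a starting point; organizations would pair it with user education, logging, and policy enforcement at the browser or network layer.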

Now onto the second year

As we all become more familiar with ChatGPT and other GenAI platforms, there are a few assumptions we can make about the future of this technology:

  1. More GenAI platforms will launch, each with more specialized purposes. This surge in diversity promises innovation and heightened efficiency within the GenAI landscape. At the same time, market forces will lead many of these platforms to consolidate or disappear.
  2. As GenAI algorithms become increasingly fine-tuned, users will shift how they use them. With a deeper understanding of GenAI functionality, users will not only automate mundane tasks but also address hyper-specific needs. For example, product teams can leverage GenAI to enhance and expedite their product development processes.
  3. In this evolving landscape, security and IT teams will feel empowered to find effective technologies that protect their organizations without having to lock down GenAI tools. This will ensure that as GenAI continues to advance, security measures keep pace.

ChatGPT and other GenAI tools are changing the world, but their rapid adoption is still causing growing pains. Organizations should adjust their security strategies to enable safe and secure access to GenAI tools in the workplace. Learn more about how you can secure user access to ChatGPT and other GenAI tools.
