
How to Securely Enable Generative AI within the Public Sector

Negin Aminian | January 16, 2024

Generative artificial intelligence (AI) has captured the world’s collective attention. Millions of people use GenAI services every day, and we’ve only just started to discover the impact it could have on our organizations. The productivity gains and support for innovation that generative AI platforms like ChatGPT offer are already changing the way we work, especially within the public sector. Generative AI can help agencies make better and faster decisions around public policies, services, and programs. Whether it’s automating mundane tasks to free up a public servant’s time or improving a public service, the impact is significant. The Boston Consulting Group estimates that the productivity gains of generative AI for the U.S. public sector will be worth $519 billion per year by 2033.

The opportunities of generative AI within government

The public sector is still in the early stages of understanding how to utilize generative AI across various functions; however, the Boston Consulting Group has defined a few use cases for government:

  • Expanding capabilities for policy development
  • Enhancing service delivery outcomes
  • Improving the internal workings of government
  • Streamlining regulation development, compliance, and reporting
  • Accelerating whole-of-government strategies and policies

As agencies work to enable these use cases at scale, the impact on cybersecurity posture must be considered at each phase of adoption. While generative AI can deliver significant productivity gains and support innovation, the data of citizens and the government must be protected. With GenAI, private data has the potential to reach a much wider audience than through typical data loss avenues. ChatGPT and similar platforms save data, such as chat history, to train and improve their models. That means input data could be used to train the models and potentially be exposed later to other users. Additionally, platforms like ChatGPT have lowered the barrier for hackers to launch more sophisticated and effective phishing attacks, because these tools can produce persuasive, grammatically correct writing in many languages.

Agencies must balance the positive impact that generative AI can have on citizens, businesses, and government with the need to mitigate its security risks.

The executive order on AI

On October 30, 2023, the President issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which placed the highest urgency on the safe and responsible use of AI. While the executive order addresses artificial intelligence broadly, one passage speaks directly to generative AI:

“agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI” but instead are urged to put appropriate safeguards in place to utilize generative AI “at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans’ rights.”

To protect users and data, some agencies might be tempted to block the use of generative AI outright. However, not only would this approach significantly hinder innovation and productivity, it is now discouraged by the Executive Order.

How to securely enable generative AI within agencies

Agencies can secure generative AI use in a way that aligns with the Executive Order by adopting:

A layered approach for data loss

Most organizations will adopt data loss prevention (DLP) policies as guardrails for GenAI. But DLP alone is not enough, given the varied avenues through which users input data. Instead, agencies must adopt a layered approach with new capabilities that address the specific ways generative AI platforms are used.
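To make the layering idea concrete, here is a minimal sketch (not any specific product’s implementation; the rules, patterns, and limit below are hypothetical examples) showing how independent guardrails can each block a prompt before it reaches a GenAI service. Note how the character-limit layer catches bulk pastes that pattern matching alone would miss:

```python
import re

# Layer 1: classic DLP pattern matching (hypothetical example patterns)
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

# Layer 2: input-size control -- large pastes often indicate bulk
# document exfiltration that pattern matching alone can miss
MAX_PROMPT_CHARS = 2000

def evaluate_prompt(prompt: str) -> tuple[bool, str]:
    """Run a prompt through each guardrail layer in turn.

    Returns (allowed, reason). Any single layer can block.
    """
    for name, pattern in DLP_PATTERNS.items():
        if pattern.search(prompt):
            return False, f"blocked by DLP rule: {name}"
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "blocked by character-limit control"
    return True, "allowed"

print(evaluate_prompt("Summarize this press release for me."))
# -> (True, 'allowed')
print(evaluate_prompt("My SSN is 123-45-6789"))
# -> (False, 'blocked by DLP rule: ssn')
```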

Protection on a group level vs a domain level

When adopting technology as a safeguard for generative AI, it’s extremely important to enforce policies at the generative AI group (category) level rather than on a domain-by-domain basis. It would be nearly impossible for security and IT teams to consistently and constantly update policies as new generative AI platforms arise.
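The following sketch contrasts the two approaches (the domain names and the url_category lookup are hypothetical stand-ins for a vendor’s continuously updated URL-categorization feed): a domain blocklist must be edited every time a new platform appears, while a single category-level rule covers any domain the feed labels as generative AI.

```python
# Domain-level approach: every new GenAI platform requires a manual update.
BLOCKED_DOMAINS = {"chat.openai.com", "gemini.google.com"}  # always incomplete

# Group-level approach: the policy is written once against a category.
def url_category(domain: str) -> str:
    # Hypothetical lookup; in practice this queries a continuously
    # updated categorization service rather than a static dict.
    categorized = {
        "chat.openai.com": "generative-ai",
        "gemini.google.com": "generative-ai",
        "brand-new-llm.example": "generative-ai",  # never seen before, still covered
        "news.example.com": "news",
    }
    return categorized.get(domain, "uncategorized")

def policy_allows(domain: str) -> bool:
    # One rule covers the whole group, new platforms included.
    return url_category(domain) != "generative-ai"

print(policy_allows("brand-new-llm.example"))  # False: blocked with no policy change
print(policy_allows("news.example.com"))       # True
```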

Protection from internet-borne threats

Just as generative AI is being used to improve processes within agencies, bad actors are using it to improve the quality and speed of their attacks. Agencies need to adopt technology that protects against internet-borne threats so that no matter how sophisticated attackers become, users and data remain protected.

The most trusted name in Browser Security

Menlo Security enables agencies to safely adopt generative AI and protect against both data loss and phishing attacks. Menlo Security helps protect agencies against data loss risk by controlling the data input into ChatGPT and other generative AI tools, using Menlo Security Data Loss Prevention (DLP), Copy & Paste and Character Limit Controls, and Browser Forensics. For evasive phishing attacks, Menlo Security helps protect agencies from zero-day and known web exploits, weaponized documents, password-protected archives with malicious payloads, and obfuscation techniques.
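As a conceptual illustration of the browser-forensics idea referenced above (this is not Menlo Security’s implementation; every name and field here is hypothetical), an audit record of GenAI input might capture enough context to reconstruct an incident without storing the sensitive text itself:

```python
import json
import time
from dataclasses import dataclass, asdict
from hashlib import sha256

@dataclass
class GenAIInputEvent:
    """One auditable record of text submitted to a GenAI site."""
    user: str
    domain: str
    action: str          # e.g. "paste" or "typed-submit"
    char_count: int
    content_hash: str    # hash, not raw text, to limit log sensitivity
    timestamp: float

def record_event(user: str, domain: str, action: str, text: str) -> str:
    """Serialize an input event for shipping to a SIEM or log store."""
    event = GenAIInputEvent(
        user=user,
        domain=domain,
        action=action,
        char_count=len(text),
        content_hash=sha256(text.encode()).hexdigest(),
        timestamp=time.time(),
    )
    return json.dumps(asdict(event))

print(record_event("analyst1", "chat.openai.com", "paste", "draft policy text..."))
```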

AI governance is iterative, but it’s important to start taking action to protect agencies. To learn how Menlo Security can help, join us for a personalized demo.
