
The swift adoption of generative AI (GenAI) technologies has far outpaced the implementation of data governance and security measures around their use, but governing bodies are closing the gap. Enterprises that are ill-prepared for the onslaught of new regulations face the prospect of considerable work – and the possibility of hefty fines – in the very near future.
The global landscape of AI regulation is fragmented, but common themes are emerging. There is widespread recognition of the need for robust data governance, often building upon existing data protection laws, such as the European Union’s GDPR and China’s PIPL. The need for security and risk management is also a point of agreement, reflected in the rise of AI Safety/Security Institutes and the influence of guidelines like the NIST AI Risk Management Framework (RMF).
Today’s companies – especially those operating in multiple countries where cross-border controls are essential – must build guardrails to extend security and data governance to the use of GenAI or risk compromising their own policies as well as their compliance with governing bodies.
By 2027, more than 40% of AI-related data breaches will be caused by the improper use of generative AI (GenAI) across borders, according to Gartner, Inc.1
The cost of noncompliance can be steep, leading to significant financial, reputational, and legal consequences for businesses. GDPR violations, for example, can result in fines reaching up to 4% of a company’s global annual revenue or €20 million, whichever is higher.
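To see how that “whichever is higher” rule scales, the penalty ceiling can be expressed as a simple calculation. The figures below are illustrative only:

```python
def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound of a GDPR fine for the most serious violations:
    4% of global annual revenue or EUR 20 million, whichever is higher."""
    return max(0.04 * global_annual_revenue_eur, 20_000_000)

# For a company with EUR 2 billion in revenue, 4% (EUR 80M) exceeds the floor:
print(max_gdpr_fine(2_000_000_000))  # 80000000.0

# For a company with EUR 100 million in revenue, the EUR 20M floor applies:
print(max_gdpr_fine(100_000_000))    # 20000000
```

The point of the floor is that smaller companies are not insulated by their size: below €500 million in revenue, the €20 million figure dominates.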
Any enterprise that does business across regions has to take compliance seriously: unintended cross-border data transfers can occur, particularly when GenAI is integrated into existing products. Organizations must extend their data governance frameworks to cover AI-processed data, ensure compliance with international regulations, and monitor for those unintended transfers.
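One way to operationalize that monitoring is a policy check before data reaches a GenAI endpoint. This is a minimal sketch under stated assumptions, not a product feature: the region allow-list and the `is_transfer_permitted` helper are hypothetical placeholders for an organization’s actual data-residency rules.

```python
# Hypothetical allow-list: for each data-origin region, the destination
# regions a GenAI endpoint may reside in under the organization's policy.
PERMITTED_TRANSFERS = {
    "EU": {"EU"},               # e.g., keep EU personal data in-region
    "CN": {"CN"},               # e.g., restrict outbound transfers
    "US": {"US", "EU", "UK"},
}

def is_transfer_permitted(data_region: str, endpoint_region: str) -> bool:
    """Return True if sending data originating in data_region to a GenAI
    endpoint hosted in endpoint_region is within policy."""
    return endpoint_region in PERMITTED_TRANSFERS.get(data_region, set())

# Unknown origins default to deny; flag the transfer for review
# instead of silently allowing it.
if not is_transfer_permitted("EU", "US"):
    print("blocked: cross-border GenAI transfer requires review")
```

Defaulting to deny for unlisted regions matters here: a new GenAI integration with an unrecognized endpoint location should trigger review rather than pass silently.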
The global landscape of AI regulation in 2025 is characterized by significant fragmentation. While jurisdictions like the EU and South Korea have enacted comprehensive, legally binding frameworks, many others, including major players like the U.S. (at the federal level), the UK, Canada, and Australia, as well as developing economies in the Middle East and Africa, are pursuing more flexible, principles-based, sector-specific, or guideline-driven approaches. National strategies are proliferating, tailored to specific economic ambitions (UAE, KSA, Japan, Singapore), developmental needs (India, African nations), or balancing acts between innovation and perceived risks (UK, South Korea, Australia).
Despite the diversity, common themes are emerging. There is widespread recognition of the need for robust data governance, often building upon existing data protection laws (including GDPR, PIPL, PDPL, and POPIA). Principles of transparency and fairness/non-discrimination are almost universally acknowledged as important, although their implementation varies from binding requirements in high-risk contexts (EU, South Korea) to voluntary ethical guidelines (Australia, Japan, India).
However, areas like intellectual property (especially concerning training data and AI-generated outputs) and liability/accountability for AI-induced harms remain largely unresolved globally, marked by significant legal uncertainty and divergent national stances. Enforcement mechanisms also vary dramatically, from dedicated AI regulators with substantial fining powers (EU) to reliance on existing sectoral bodies (UK, Australia) or relatively light-touch approaches (South Korea’s low penalties, voluntary codes in Canada).
The influence of the “big three” models – the EU's comprehensive regulation, the U.S.'s market-driven approach (currently deregulatory at the federal level), and China’s state-controlled adaptive system – is evident, with other nations often positioning their strategies in relation to these poles.
Global regulations are of particular concern if your company operates in multiple countries. Unfortunately, the lack of consistent global best practices and standards for AI and data governance exacerbates compliance challenges by forcing enterprises to develop region-specific strategies. But it’s important to establish governance frameworks that not only comply with new and emerging requirements but also enable the responsible and accelerated adoption of AI.
Due to the complexity of aligning AI with the evolving regulatory landscape, businesses generally anticipate a minimum of 18 months to effectively implement AI governance models.2 But it’s important to move swiftly, as Gartner predicts that by 2027, AI governance will become a requirement of all sovereign AI laws and regulations worldwide.
One of the biggest issues with maintaining any type of regulatory compliance has been the difficulty of proving that your policies are in place and working. Because so much GenAI activity happens in the browser, we recommend building audit-ready telemetry and retention for that activity now, so you can demonstrate compliance later.
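As one illustration of what audit-ready telemetry could look like, each browser-based GenAI interaction might be captured as a structured, append-only event. The field names below are hypothetical, not a prescribed schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical structured event for one browser-based GenAI interaction.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "jdoe@example.com",
    "genai_service": "example-llm.example.com",
    "data_classification": "internal",   # outcome of a DLP/classification scan
    "origin_region": "EU",
    "endpoint_region": "US",
    "policy_action": "blocked",          # allow / warn / block
}

# Append-only JSON lines are simple to retain and hand to auditors:
# each line is independently parseable evidence that a policy fired.
print(json.dumps(event))
```

Recording the policy action alongside the data classification and regions is what turns raw telemetry into audit evidence: it shows not just that activity occurred, but that the governance policy evaluated it.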
To find out more about GenAI in the workspace and how effective telemetry and visibility can help you stay compliant, read our 2025 Report: How AI is Shaping the Modern Workspace.
-------------
1 Gartner Predicts 40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027, February 17, 2025
2 From Hype to Impact: How Enterprises Can Unlock Real Business Value with AI, EPAM Systems, April 2025
Menlo Security
