ThinkSet Magazine

AI and Data Protection in 2025: Everything That Rises Must Converge

Winter 2025

In 1965, Flannery O’Connor published the short story collection Everything That Rises Must Converge, reflecting her worldview that separate ideas often converge at a common meeting point.

I suspect that, in 2025, we are at the dawn of such a convergence with respect to the regulation of artificial intelligence (AI), data privacy and protection, consumer safety, and information security.

Here’s why—and how business leaders can prepare for what is to come.

A Growing Patchwork of Laws: Navigating Today’s Privacy, Data Protection, and AI Regulations

Numerous emerging and existing laws around the world present a complex tapestry of (often) overlapping obligations for businesses.

Global privacy laws such as Europe’s General Data Protection Regulation (GDPR) and its corollaries (e.g., Brazil’s General Data Protection Law, China’s Personal Information Protection Law, India’s Digital Personal Data Protection Act) have for some time regulated the use of personal data in reaching automated decisions that impact the rights and freedoms of individuals, such as in credit scoring and applicant screening.

Meanwhile, US legislators and privacy regulators are increasingly focused on automated decision-making technologies (ADMT) that affect job applicants, employees, consumers, and patients.

With US federal AI legislation unlikely to emerge any time soon, this patchwork of state laws will continue to expand. Several states have formed advisory councils and task forces to study the potential risks and impacts of AI.

This burgeoning mélange of laws in the US and across the globe is likely to create uncertainty and (sometimes) frustration for the business community, which would prefer a clear set of standards with which to comply.

Are Organizations Prepared? Addressing AI Adoption and Compliance Gaps

This regulatory convergence comes, of course, just as organizations ramp up adoption of AI and large language models.

Rapid adoption has led to employees using AI tools without appropriate oversight and to developers speeding up coding without checking whether model-generated code meets current security standards.

For instance, BRG’s 2024 Global AI Regulation Report found that only four in ten executives are highly confident in their organizations’ ability to comply with current AI regulations—and that fewer than half of organizations have implemented internal safeguards to promote effective AI development and use.

Good Governance Is More Important Than Ever

The best answer to the chaos is the same for AI as it has long been for data privacy and security: governance.

A strong AI governance program should be designed based on principles shared among the various legislative and regulatory approaches in the US and abroad. Privacy, safety, security, transparency, explainability, and nondiscrimination requirements should be managed systematically and involve cross-functional teams of stakeholders. Strategic AI governance considers ethics and legal risk across disciplines—not merely laws explicitly aimed at AI—and equally values innovation and risk management.

Our hope for 2025 is that more organizations will approach AI purposefully and design governance with sufficient flexibility to adapt to the fragmented regulatory landscape.

After all, driving full speed ahead into the AI age without establishing effective governance is like racing a Ferrari without brakes.

Done well, AI governance can enable speed and provide the strategic control necessary to uphold good business practices.