Opinion

AI governance: A necessary good

August 07, 2025

Firas Tarabulsi

Artificial intelligence is no longer a purely technical matter, nor merely a development in tools of administration and productivity. It has gradually become an influential actor in shaping public decision-making and a silent component embedded within the fabric of policy, economics, and communication.

As AI is increasingly deployed across critical sectors—from justice to education, from security to media—the urgent need arises to recalibrate the relationship between humans and algorithms, not only through the lens of efficiency, but also through the imperatives of sovereignty, accountability, and fairness. In this context, AI governance is no longer an optional regulatory step; it is a national and strategic priority.

Notably, Saudi Arabia—positioned at the forefront of future-focused nations—has recently shown growing strategic caution regarding certain harmful uses of AI, particularly those that could manipulate public opinion or unduly influence collective behavior. This vigilance has been reflected in official statements, regulatory decisions, and even the suspension of cooperation with certain AI systems that have yet to meet ethical or legal oversight requirements.

In a world where technology now outpaces regulation, remaining silent about the unchecked expansion of intelligent tools poses a structural risk—not just to individuals, but to the integrity of decision-making itself. The question is no longer whether AI should be regulated, but whether we can afford the consequences of leaving it unregulated.

What raises the greatest concern today is not AI’s superior performance in tasks, but its silent infiltration into decision-making spaces without transparency or accountability. Who is to blame when a qualified job applicant is rejected by an opaque algorithm? Who bears responsibility when AI-generated content is used to mislead the public? What if a system causes financial or reputational harm due to bias?

This is not merely a technical issue—it is legal at its core. That is why our relationship with AI must be redefined. These are no longer support tools, but active players that must be embedded within a clearly defined institutional and legal framework.

AI governance thus emerges as the most pressing and sensitive concept of this stage.

AI governance can be defined as:

“A set of policies, controls, and legal and ethical standards that regulate the development, deployment, and oversight of AI systems. It ensures accountability, transparency, fairness in outcomes, and the ability to interpret and intervene in automated decisions, all within a regulatory environment that safeguards individual rights, public interest, and national sovereignty over technical systems.”

In this sense, governance is not the antithesis of innovation—it is the compass that prevents innovation from straying into harmful terrain.

Accordingly, the state must urgently launch a national governance framework for AI—not as a bureaucratic formality, but as a legal and ethical shield that protects decision-making, society, and sovereign control. This framework should include tools for impact assessment, mechanisms to monitor algorithmic drift, and authority to immediately suspend or reject any system proven to be biased or unsafe.

AI-powered projects must also be contractually bound to clear accountability terms. No AI system should be deployed in public or private sectors without mandatory clauses on transparency, risk acknowledgment, and liability.

With its strong institutional landscape and progressive regulatory vision, Saudi Arabia is well positioned not only to keep pace with global developments, but to lead them. What is needed is not just awareness of the risks, but their translation into enforceable regulations, executive instructions, and a cultural shift in how institutions manage innovation.

In this light, several practical and actionable recommendations can be proposed:

1. Establish an independent national AI governance unit within state regulatory bodies, with authority to review and approve AI systems prior to deployment in public or quasi-public institutions.

2. Require companies and developers to submit AI Impact Reports before launching any intelligent system, with such reports integrated into licensing procedures.

3. Issue a regulatory charter to govern the contractual relationship between users and AI service providers, including detailed liability clauses for non-human errors.

4. Support the creation of a National Center for Tech Ethics, responsible for issuing binding principles and maintaining an accreditation list of systems meeting legal and ethical standards.

Building a national AI governance framework should not stop at internal organization; it must extend to shaping a broader cultural understanding of AI across the Arab region.

Saudi Arabia, through institutions like the Saudi Data and AI Authority (SDAIA), the National Cybersecurity Authority, and academic centers such as KAUST and the King Abdullah Institute for Artificial Intelligence, has the capability to produce a regional governance model—one that promotes ethical innovation, controls harmful use, and educates future generations on responsible AI grounded in law and values.

With its rising regulatory capacity and visionary digital transformation, the Kingdom is poised not merely to adapt to AI, but to shape how it is adopted—especially in societies that share our language, reality, and future.

Progress is not measured by who owns the fastest algorithm, but by who steers it with wisdom.

And those who embrace AI without governing it may one day find their decisions shaped by technology—before policy and law have their say.

