
First published on India Business Law Journal

By: Pravin Anand and Dr. Ajai Garg

Artificial Intelligence (AI) is fuelling one of the most significant investment booms in modern history. Major American tech companies are projected to spend nearly USD400 billion this year on the infrastructure running AI models. Developers such as OpenAI and Anthropic are raising billions of dollars. It is estimated that global spending on data centres will exceed USD3 trillion by the end of 2028.

This rapid development of the technology, however, poses severe risks. Experts predict that superintelligent AI will outperform humans in all cognitive tasks within the next five years. The danger is that no government, company or individual will be able to control AI systems that are significantly more capable than humans in many areas. Some even predict that, without significant controls and regulation, a superintelligent system pursuing goals incompatible with human life could drive humanity to extinction.

Adding to this risk is the fact that AI developers themselves do not fully understand how such powerful systems work. Modern AI models process vast datasets in ways their creators cannot trace. The CEO of Anthropic has stated that developers understand only about 3% of their systems’ workings.

Despite acknowledging these dangers, leading AI companies, including OpenAI, Google and Meta, are working nonstop to achieve superintelligence. This is a moral dilemma in which innovators are alarmed by their creations yet accelerate development. AI pioneer Geoffrey Hinton puts the chances of human extinction due to the technology at 10 to 20%; his colleague Yoshua Bengio considers the risk to be even higher.

The paradox rests on competitive logic. Companies and nations fear that if they pause or slow down, others will continue unabated and claim the benefits of any breakthrough. Their focus on technological advance precludes adequately addressing safety concerns. A coalition of leaders, including Nobel laureates and former government officials, is calling for a global prohibition on superintelligence. However, geopolitical tensions and regulatory inadequacy make this unlikely. Innovation and human needs must be balanced to achieve the ethical growth of AI. Only global collaboration can implement safe, secure, responsible and ethical principles that make the benefits of AI available to all.

Although an international ban on superintelligence does not seem possible, history provides a successful precedent. In 1987, at the height of the Cold War, the world agreed to phase out chlorofluorocarbons (CFCs), which threatened millions with skin cancer and blindness. This came just two years after the hole in the ozone layer was made public. Scientists, NGOs and citizens mobilised a global movement that resulted in the signing of the Montreal Protocol. Like the ozone threat, superintelligence endangers everyone on the planet. A loss of control will spare no one. The shared risk of extinction should unite all political and ideological persuasions.

Superintelligence poses unique legal and societal challenges. Unlike previous advanced technologies, superintelligent systems create information and act on it autonomously. They are not neutral; their actions are shaped by the values embedded in them. Without alignment between such systems and human ethics, morals and laws, they are likely to cause significant harm to societies and economies.

Aligning AI with human values may be a constitutional necessity rooted in the protection of fundamental rights and democratic ideals. AI must be made to adhere to laws and legal codes. By incorporating legal standards, AI will approach novel situations as a human would. The question then is which jurisdiction will set the policies and laws of the framework, since the responses of these systems will reflect their creators’ standards. Ensuring that such mechanisms align with democratically derived law and protect the fundamentals of sovereignty is essential to foster inclusion and human progress.

The legal community will be pivotal in shaping a future in which AI reflects collective ideals. Thoughtful legislation, rigorous interpretation of existing jurisprudence and ethical considerations will ensure that the rise of superintelligent systems enhances human prosperity while adhering to the constitutional principles that have guided India. These tenets will contribute to a future in which technology and humanity coexist in harmony. The upcoming AI Summit in New Delhi offers one such opportunity, where world leaders and legal and technical experts can mandate a framework for global regulation.
