
Artificial intelligence (AI) is often hailed as the defining technology of the 21st century, shaping everything from economic growth to national security. But as global investment in AI accelerates, many experts are beginning to ask whether the world has embarked on an AI “arms race.”
With China, the US, UK and the European Union each pledging billions to advance AI, competition in research, infrastructure and industrial applications for the new technology is intensifying. But, at the same time, regulation is struggling to keep pace with rapid development in some regions. This is raising concerns about ethical risks, economic inequality and global AI governance.
There have been rapid advances in AI in the past few years. Companies such as America's OpenAI and China's DeepSeek have developed large-scale generative AI systems—which can learn from existing content to generate new material such as text, images, music, or videos.
The UK government recently announced its intention to “shape the AI revolution rather than wait to see how it shapes us” through its AI Opportunities Action Plan. This will have a strong focus on regulation, skills and ethical governance.
While the UK and continental Europe are prioritizing regulation, China is using its sheer size and appetite for innovation to develop rapidly into what has been described as an "AI supermarket," and the US is balancing innovation with national security concerns.
China recently released details of new regulations, which come into force in September, that will require explicit labeling of AI-generated content and metadata linking such content to the service provider that generated it. The onus will be on platforms that feature AI-generated content to provide such information.
But the different approaches highlight the growing geopolitical dimension of AI development, which risks a divergence of standards. While competition can drive innovation, without international cooperation on safety, ethics and governance, the global AI race could lead to regulatory gaps and fragmented oversight.
Many analysts fear this would bring significant downsides. Most worryingly, there is the prospect of unchecked AI-generated disinformation undermining elections and democratic institutions.
Why does this matter?
AI is more than just another technological breakthrough—it’s a strategic driver of economic power and influence. The countries leading in AI today will play an important role in shaping the future of automation, digital economies and international regulatory frameworks.
AI’s global expansion is driven by several key motivations. It has the potential to massively boost productivity and creativity. It can create new business models and transform entire industries. Governments investing in AI aim to secure long-term economic advantages, particularly in sectors such as finance, health care and advanced manufacturing.
Meanwhile, AI is increasingly integrated into defense, cybersecurity and intelligence. Governments are exploring ways to use AI for strategic advantage, while also ensuring resilience against AI-enabled threats.
But as AI investment surges, it is increasingly important to ensure that the challenges the new technology will bring are not overlooked in the rush.
Risks of rapid AI investment
As AI advances, ethical issues become more pressing. AI-powered surveillance systems raise privacy concerns. Deepfake technology, meanwhile, which is capable of generating hyper-realistic video and audio, is already being used for disinformation. Without clear regulatory oversight, this could seriously undermine trust and security and threaten democratic institutions.
At the same time, we are already seeing inequality baked into AI development. Many AI-driven innovations cater to wealthy markets and corporations. Meanwhile, marginalized communities face barriers to accessing AI-enhanced education, health care and job opportunities—the latter was demonstrated as long ago as 2018 when Amazon reportedly withdrew a recruitment tool that was shown to discriminate against women.
Ensuring that AI development benefits society as a whole will require a strategic approach to skills, education and governance. I have conducted studies into how AI tools are being harnessed successfully in the UK, the US and China. The research showed how AI capabilities can be combined with strategic agility to drive product and service innovation in many contexts.
But the AI race is not just about economic progress, it also has geopolitical implications. Restrictions on AI-related exports, particularly in semiconductor technology, highlight growing concerns over technological dependencies and national security. Without greater international cooperation, uncoordinated AI policies could lead to economic fragmentation, regulatory inconsistencies across borders and the inevitable risks those bring.
Although some nations are advocating for global AI agreements, these discussions remain in their early stages, so enforcement mechanisms remain limited.
The way forward
Managing these risks will require multilateral governance, similar to the global frameworks on cybersecurity and climate change. Existing discussions by the United Nations as well as the G7 and the Organization for Economic Cooperation and Development (OECD) need to incorporate stronger AI-specific enforcement mechanisms that guide development responsibly.
There are signs of progress. The G7’s Hiroshima AI Process has resulted in shared guiding principles and a voluntary code of conduct for advanced AI systems. The OECD’s AI Policy Observatory, meanwhile, is helping coordinate best practices across member states. But binding international enforcement mechanisms are still in their infancy.
Individual countries, meanwhile, need to develop flexible regulatory frameworks that balance innovation with accountability. The EU’s AI Act, the first major attempt to comprehensively regulate AI, classifies AI systems by risk and imposes obligations on developers accordingly.
These measures include bans on certain practices deemed to pose an unacceptable risk, such as social scoring—which ranks individuals based on behavior and can lead to discrimination. It's a step in the right direction, but broader cooperation is still needed to ensure coherent global AI standards.
An enforceable set of rules governing AI development is needed—and quickly. AI could pose more risks than opportunities if left unchecked.
This article is republished from The Conversation under a Creative Commons license. Read the original article.