AI is often presented as the next peak of human innovation, owing to its potential to revolutionize industries, transform economies, and improve lives. But will AI truly benefit everyone, or will it deepen existing divides?
The answer depends on how the technology is developed, deployed, and governed. Without purposeful interventions, AI’s potential will be harnessed for narrow gains by those who prioritize profits over people.
Encouragingly, the cost of AI development is beginning to decline. While OpenAI’s GPT-4 cost $100 million to train, the Chinese startup DeepSeek’s comparable model apparently cost a fraction of that.
This trend has promising implications for developing countries. Most lack the massive financial resources that earlier AI innovations required, but they could soon access and leverage these technologies more affordably. The choices we make today will determine whether AI becomes an instrument of inclusion or exclusion.
To ensure that AI serves humanity, we need to focus on incentives. AI development today is largely dictated by market forces, with an excessive focus on automation and monetizing personal data.
The few countries spearheading AI technologies are investing billions of dollars in labor-replacing applications that will exacerbate inequality. Making matters worse, government subsidies frequently reward technical merit, often defined in terms of efficiency, without sufficient consideration of direct and indirect societal impacts.
Where jobs disappear, economic, social, and political instability tend to follow. Yet public funding continues to flow toward automation. Governments must realign incentives to encourage AI that serves social needs, such as enhancing education, improving health outcomes, and tackling climate challenges. AI should empower, not replace, human workers.
Population aging is a major challenge in some countries. Household robots could help address some of its problems, but the frontier of current development prioritizes dynamic performance in outdoor environments (running, jumping, and obstacle avoidance) over safety and practicality, daily living assistance, and chronic-disease management.
This task cannot be left to venture capital alone. In 2024, venture capitalists funneled $131.5 billion into startups, largely chasing overhyped and speculative technologies like artificial general intelligence.
Narrower-purpose models can advance medical diagnostics, assist radiologists, predict natural disasters, and much more. Redirecting investments toward solutions that directly benefit society is essential to keeping AI development aligned with collective progress, rather than shareholder value.
It is also necessary to bridge the divide between developed and developing economies. AI’s transformative potential remains largely untapped in low- and middle-income countries, where inadequate infrastructure, limited skills, and resource constraints hinder adoption. Left unaddressed, this technological divide will only widen global inequalities.
Consider what AI could do just for health care. It could broaden access to personalized medicine, giving patients in resource-limited settings tailored treatments with greater efficacy and fewer adverse effects.
It could assist in diagnosis by helping doctors detect diseases earlier and more accurately. And it could improve medical education, using adaptive learning and real-time feedback to train medical professionals in underserved areas.
More broadly, AI-powered adaptive learning systems are already customizing educational content to meet individual needs and bridge knowledge gaps. AI tutoring systems offer personalized instruction that increases engagement and improves outcomes.
By making it far easier to learn a new language or acquire new skills, the technology could drive a massive expansion of economic opportunities, particularly for marginalized communities.
Nor are the uses confined to health care and education. The University of Oxford’s Inclusive Digital Model (IDMODEL) demonstrates that equipping marginalized groups – especially women and young people – with digital skills allows them to participate in the global digital economy, reducing income disparities.
But global cooperation is crucial to unlock these benefits. AI must be approached collectively, such as through South-South initiatives to create solutions tailored to developing countries’ circumstances and needs.
By fostering partnerships and knowledge-sharing, low- and middle-income countries can bridge the technological divide and ensure that AI serves a broad range of constituencies beyond the dominant players.
Then there is the question of safety and ethical use. These issues also must be addressed globally. Without robust ethical frameworks, AI can be – and already has been – used for harmful purposes, from mass surveillance to the spread of misinformation.
The international community will need to agree on shared principles to ensure that AI is used consistently and responsibly.
The United Nations – through inclusive platforms like the Commission on Science and Technology for Development – can help shape global regulations.
The top priorities should be transparency (ensuring that AI decision-making is discernible and explainable); data sovereignty (protecting individuals' and countries' control over their own data); harm prevention (prohibiting applications that undermine human rights); and equitable access.
Multilateral initiatives to develop digital infrastructure and skills can help to ensure that no country is left behind.
This is not only an issue for policymakers and the private sector. Throughout history, transformative change has often started from below. Women’s suffrage, the civil-rights movement, and climate activism all began with grassroots efforts that grew into powerful forces for change.
A similar movement is needed to steer AI in the right direction. Activists can highlight the risks of unregulated AI and apply pressure on governments and corporations to put human-centered innovation first.
AI’s social, economic, and political effects will not naturally bend toward inclusion or equity. Governments must steer incentives toward innovation that augments human potential. Global organizations must establish ethical frameworks to safeguard human rights and data sovereignty. And civil society must hold political and business leaders accountable.
The decisions made today will determine whether AI becomes a bridge or a chasm between the world’s haves and have-nots. International collaboration, ethical governance, and public pressure can ensure that we make the right ones.
(Shamika Sirimanne is Senior Advisor to the Secretary-General of UN Trade and Development. Xiaolan Fu is Professor of Technology and International Development at the University of Oxford)
Copyright: Project Syndicate, 2025