
Illustration by Sidra Dahhan

The Surge of Artificial Intelligence Is a Double-Edged Sword for the Developing World

There is no doubt that AI, if adopted correctly, has the potential to dramatically improve standards of living and save lives. If we’re not careful, though, it may also end up perpetuating existing inequalities.

May 8, 2023

Within weeks of ChatGPT’s release, it gained 100 million monthly users, many of whom have no doubt already experienced its dark side, from insults and threats to disinformation and a demonstrated ability to write malicious code. About a month ago, OpenAI, the company behind ChatGPT, introduced GPT-4, stunning everyone with its capabilities. It is a definite upgrade from GPT-3: among other things, it can draft lawsuits, analyze dating profiles and calculate compatibility, and, most significantly, build websites from scratch.
The chatbots that have become the main topic of conversation among academics, office administrators, and computer scientists alike are just the tip of the iceberg. Artificial intelligence that creates text, speech, art, and videos is advancing rapidly, with far-reaching consequences for government, commerce, and civic life. It is not surprising that capital is pouring into this sector, with both governments and companies investing in startups that develop and apply the latest machine learning tools. These new applications combine historical data with machine learning, natural language processing, and deep learning to estimate the likelihood of future events. For example, by analyzing crime, health, historical weather, financial, or traffic data, governments can predict the probability of crimes, disease outbreaks, natural disasters, stock market crashes, or traffic congestion.
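To make the mechanics concrete, here is a minimal sketch of what such a predictive system looks like under the hood: a classifier fitted to historical records and asked for the probability of a future event. Everything in it is an assumption for illustration; the synthetic data and the flood-risk scenario stand in for whatever real records a government might hold.

# Minimal sketch of predictive analytics: fit a model to historical
# records, then estimate the probability of a future event.
# All data is synthetic; the flood-risk scenario is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented historical records: rainfall (mm) and temperature (C)
# for 1,000 past weeks.
X = np.column_stack([
    rng.gamma(shape=2.0, scale=30.0, size=1000),  # rainfall
    rng.normal(loc=25.0, scale=5.0, size=1000),   # temperature
])

# Label each week 1 if a flood occurred; here floods are simply more
# likely in wetter, hotter weeks (a made-up rule standing in for history).
p_flood = 1 / (1 + np.exp(-(0.03 * X[:, 0] + 0.05 * X[:, 1] - 4.0)))
y = rng.random(1000) < p_flood

# Fit on history, then score a hypothetical coming week.
model = LogisticRegression().fit(X, y)
next_week = np.array([[120.0, 31.0]])  # forecast: 120 mm rain, 31 C
prob = model.predict_proba(next_week)[0, 1]
print(f"Estimated flood probability next week: {prob:.0%}")

The pattern, not the model, is the point: whatever is being predicted, the output is only as trustworthy as the historical records the system was fitted to, which is precisely where the risks discussed below creep in.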
Crucially, the adoption of new natural language processing and generative artificial intelligence will not be limited to rich countries and companies like Google, Meta and Microsoft that spearheaded their creation. These technologies are already spreading across low- and middle-income environments, where predictive analytics for everything from reducing urban inequality to addressing food security offer significant potential for improving efficiency and achieving social and economic benefits.
AI-powered predictive analytics can help governments in low- and middle-income countries address urban inequality by identifying areas with higher levels of deprivation and informing resource allocation for infrastructure development. It can assist in predicting population growth and demand for services, allowing for more efficient urban planning. It can also analyze agricultural data, weather patterns, and soil conditions to predict crop yields, optimize resource allocation, and improve farming practices. This can help small-scale farmers in low- and middle-income countries enhance productivity, reduce crop loss, and ensure food security for their communities.
The problem is that not enough attention has been paid to the potential negative externalities and unintended consequences of these technologies. The most obvious risk is that these increasingly powerful predictive tools will strengthen the monitoring and surveillance capacities of authoritarian regimes.
One often-cited example is China’s “social credit system,” which uses information about debt, criminal convictions, online behavior and other data to rate every person in the country. Those scores can then determine whether someone can get a loan, attend a good school, or travel by train or plane. Although the system is billed as a tool to improve transparency, it doubles as an instrument of social control. It incorporates data from various sources, including law enforcement agencies, to assess the trustworthiness of individuals, ostensibly to enhance public safety by identifying and monitoring people with a history of criminal activity or non-compliant behavior.
Even when seemingly well-intentioned democratic governments, companies focused on social impact, and progressive non-profit organizations use predictive tools, the results are not always optimal. Flaws in the design of underlying algorithms and biased datasets can lead to privacy violations and identity-based discrimination.
This has already become an obvious problem in the criminal justice system, where predictive analytics tools regularly reproduce and perpetuate racial and socioeconomic disparities. For example, an artificial intelligence-based system designed to help US judges assess the likelihood of recidivism wrongly flagged African-American defendants as being at much higher risk of reoffending than white defendants.
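The mechanism behind such failures is easy to demonstrate. The sketch below is a deliberately stylized illustration, not the actual system used in US courts: two synthetic groups reoffend at exactly the same rate, but one group’s offenses are recorded more often, and a model trained on those records duly scores that group as riskier.

# Sketch of how a biased training signal yields biased risk scores.
# Two synthetic groups reoffend at the same true rate, but group B's
# reoffenses are recorded twice as often; all numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)    # 0 = group A, 1 = group B
reoffends = rng.random(n) < 0.30      # identical true rate for both

# Measurement bias: reoffenses are recorded 40% of the time for
# group A but 80% of the time for group B.
record_rate = np.where(group == 1, 0.8, 0.4)
recorded = reoffends & (rng.random(n) < record_rate)

# A model trained on the biased records learns a higher "risk"
# for group B despite identical underlying behavior.
model = LogisticRegression().fit(group.reshape(-1, 1), recorded)
scores = model.predict_proba([[0], [1]])[:, 1]
print(f"Predicted risk, group A: {scores[0]:.0%}; group B: {scores[1]:.0%}")

Nothing in the model is malicious; it faithfully learns the skew in the records it was given, which is why auditing the training data matters at least as much as auditing the algorithm.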
Concerns are also growing about how AI could deepen workplace inequalities. Until now, predictive algorithms have increased efficiency and profits in ways that benefit managers and shareholders at the expense of ordinary workers, who increasingly find themselves managed not by other humans but by an algorithm, especially in labor markets where short-term contracts and freelance engagements are more common than permanent ones.
In all these examples, AI systems are like a distorted mirror of society, reflecting and exacerbating our prejudices and inequalities. It is important to note that access to digital technology, and the ability to effectively use and even create it, are shaped by negative socio-cultural norms such as gendered gaps in educational and employment opportunities. As technology researcher Nanjira Sambuli has observed, digitization tends to exacerbate rather than improve pre-existing political, social and economic problems.
Enthusiasm for using predictive tools must be counterbalanced by an informed and ethical consideration of their intended and unintended effects. Much comes down to the developers: if they were aware of the potential negative consequences and took steps to mitigate them, harm that still occurred might reasonably be seen as unintended. But if developers fail to recognize or adequately address foreseeable risks, the resulting harm looks far less accidental. And where the effects of powerful algorithms are disputed or unknown, the precautionary principle holds that they should not be deployed at all.
We must not allow artificial intelligence to become another area where those in decision-making positions will seek forgiveness instead of permission. That's why the United Nations High Commissioner for Human Rights and other organizations have called for moratoriums on the adoption of AI systems until ethical and human rights frameworks are updated to take into account their potential harm.
To create appropriate frameworks, governments and private organizations will need to reach consensus on the underlying principles that should govern the design and use of predictive AI tools. Fortunately, the AI race has given rise to a parallel wave of ethics research, initiatives such as the Ethical AI initiative, institutes such as the Global Ethics institute, and networks. And while civil society has taken the lead, intergovernmental entities such as the Organization for Economic Co-operation and Development and the United Nations Educational, Scientific and Cultural Organization have also gotten involved.
The UN has been working on universal standards for ethical AI since at least 2021. In addition, the European Union has proposed an AI Act, the first such effort by a major regulator, which would ban certain uses by the state (such as those resembling China’s social credit system) and subject other high-risk applications to special requirements and supervision.
To date, this debate has been largely concentrated in North America and Western Europe. However, lower- and middle-income countries have their own basic needs, concerns and social inequalities to address. There is ample research showing that technologies developed by and for markets in advanced economies are often inappropriate for less developed ones. Given that lower-income countries must first focus on regulating labor rights and ensuring that every worker receives adequate benefits and protections, these technologies are all too likely to exacerbate poor working conditions if deployed as yet another method of exploitation.
If new AI tools are simply imported and put into widespread use before the necessary governance structures are in place, they could easily do more harm than good. All these questions must be considered if we are to devise truly universal principles for managing artificial intelligence.
Recognizing these gaps, the Igarape Institute and New America recently formed a new Global Working Group on Predictive Analytics Tools for Security and Development. The working group will include digital rights advocates, public sector partners, technology entrepreneurs and social scientists from the Americas, Africa, Asia and Europe, with the aim of defining first principles for the use of predictive technologies in public safety and sustainable development across the developing world.
The formulation of these principles and standards is only the first step. A bigger challenge will be organizing the international, national and intra-national cooperation and coordination needed to implement them in law and practice. In the global rush to develop and deploy new predictive AI tools, harm prevention frameworks are essential to ensure a secure, prosperous, sustainable and human-centered future.
Stefan Mitikj is Managing Editor. Email him at feedback@thegazelle.org