New developments in AI could spur a massive democratization of access to services and work opportunities, improving the lives of millions of people around the world and creating new commercial opportunities for businesses. Yet they also raise the specter of potential new social divides and biases, sparking a public backlash and regulatory risk for businesses. For the U.S. and other advanced economies, which are increasingly fractured along income, racial, gender, and regional lines, these questions of equality are taking on a new urgency. Will advances in AI usher in an era of greater inclusiveness, increased fairness, and widening access to healthcare, education, and other public services? Or will they instead lead to new inequalities, new biases, and new exclusions?
Three frontier developments stand out in terms of both their promised rewards and their potential risks to equality. These are human augmentation, sensory AI, and geographic AI.
Human Augmentation
Variously described as biohacking or Human 2.0, human augmentation technologies have the potential to enhance human performance for good or ill.
Some of the most promising developments aim to improve the lives of people with disabilities. AI-powered exoskeletons can enable disabled individuals or older workers to accomplish physical tasks that were previously impossible. Chinese startup CloudMinds has developed a smart helmet called Meta, which uses a combination of smart sensors, visual recognition, and AI to help visually impaired people safely navigate urban environments. Using technology similar to autonomous driving, sensors beam location and obstacle data to a central cloud system, which analyzes it and relays vocal directions and other information back to the user. The system could be used to read road signs and notices, or potentially even translate Braille notices printed in foreign languages.
For sign-language users, a major challenge is how to communicate with the majority of people who do not know sign language. A promising development here is the sign-language glove developed by researchers at Cornell University. Users wear a right-hand glove stitched with sensors that measure the orientation of the hand and flex of the fingers during signing. These electrical signals are then encoded as data and analyzed by an algorithm that learns to read the user’s signing patterns and convert these to spoken words. In trials, the system achieved 98% accuracy in translation.
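To make the translation step concrete, here is a deliberately simplified sketch of how flex-sensor readings might be matched to hand shapes. The gesture labels, sensor values, and nearest-template approach are all invented for illustration; the Cornell system uses a learned model trained on each user's signing patterns, not this toy matcher.

```python
# Hypothetical sketch: classifying hand shapes from flex-sensor readings.
# The sensor values and gesture labels below are invented for illustration.

def classify_sign(reading, templates):
    """Return the label of the template closest to a sensor reading.

    `reading` is a tuple of normalized flex values, one per finger;
    `templates` maps gesture labels to reference readings.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(templates, key=lambda label: distance(reading, templates[label]))

# Invented calibration templates: five flex values (0 = straight, 1 = fully bent).
TEMPLATES = {
    "A": (0.1, 0.9, 0.9, 0.9, 0.9),   # thumb out, fingers curled
    "B": (0.8, 0.1, 0.1, 0.1, 0.1),   # thumb tucked, fingers straight
    "C": (0.5, 0.5, 0.5, 0.5, 0.5),   # all half-bent
}

print(classify_sign((0.75, 0.15, 0.05, 0.1, 0.2), TEMPLATES))  # prints "B"
```

In a real system, the templates would be replaced by a model trained on many labeled recordings per user, which is what allows the reported accuracy to improve with use.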
Scientists have already shown how brain implants can help paralyzed individuals operate robotic arms and exoskeleton suits. Elon Musk’s Neuralink aims to go one step further, implanting flexible, hair-thin threads to connect the human brain to AI systems that can operate phones and computers. The MIT Media Lab is pioneering a voiceless communications technology — dubbed AlterEgo — that allows users to communicate with computers and AI systems without opening their mouths, offering hope to millions of people afflicted by speech disorders. Transcranial stimulation — an experimental technology still in its infancy — is being used by sports teams and students to build muscle memory and sharpen concentration.
Despite these tremendous breakthroughs, the potential for new biases and inequalities remains. Apart from the obvious concerns about privacy associated with invasive technologies, cognitive or physical data could be misused — for example in recruiting or promotion decisions, in the administration of justice, or in granting (or denying) access to public services. Moreover, access to basic digital technology remains a significant barrier, with almost half the world’s population still excluded from the internet.
The sociologist Christoph Lutz observes that traditionally disadvantaged citizens tend to be similarly disadvantaged online, through limited access to technology, restricted opportunities for use, and a lack of important digital skills. In fact, many fear that the affluent will be better able to afford costly performance-enhancing technology, perpetuating existing disparities in education and the job market. Educational performance may come to depend less on how hard you study in college and more on what kind of technology you can afford. Yuval Harari, the author of Homo Deus, has argued that AI technologies could eventually splinter humanity into two classes he labels “the Gods and the Useless” — those who can avail themselves of performance-augmenting AI and those who cannot.
Sensory Imbalance
The human senses — sight, hearing, smell, touch, and taste — represent a rich territory for the next generation of AI technologies and applications.
Take our voices, for example. Pitch, tone, timbre, and vocabulary can provide important clues to our physical and mental well-being. The journal Nature recently reported how voice-analysis algorithms are being developed to spot signs of depression (where the frequency and amplitude of speech decline) and Alzheimer’s disease (where sufferers use more pronouns than nouns as they forget common terms). Advances in digital olfaction — the use of digital technologies that mimic the sense of smell — may soon be used to detect cancer and other diseases before symptoms become apparent. Given increasing concern around access to healthcare in the U.S. and other economies, these developments offer the potential for early, low-cost detection of major chronic diseases: imagine just talking into your iPhone for a daily check-up.
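To illustrate the kind of signal such screening systems work from, here is a toy sketch computing two crude acoustic features from a waveform: mean amplitude, and zero-crossing rate (a rough proxy for frequency). The sample data is invented, and real clinical voice analysis relies on far richer features than these.

```python
# Illustrative sketch only: two crude acoustic features of the kind a
# voice-screening system might track over time. Invented toy data.

def voice_features(samples):
    """Return (mean absolute amplitude, zero-crossing rate) for a waveform."""
    n = len(samples)
    amplitude = sum(abs(s) for s in samples) / n
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return amplitude, crossings / (n - 1)

# A toy waveform: alternating samples cross zero at every step.
amp, zcr = voice_features([0.5, -0.5, 0.4, -0.4, 0.3, -0.3])
print(amp, zcr)  # amplitude ~0.4, zero-crossing rate 1.0
```

A screening application would compare such features against a user's own baseline over days or weeks, flagging sustained declines rather than single readings.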
Yet the potential for bias is also there: users’ data could be screened without their knowledge and ultimately used to cherry-pick lower-risk or healthier individuals for jobs, healthcare coverage, and life insurance, for example. The European Commission has warned that AI may perpetuate historical imbalances or inequality in society, particularly where there are data gaps along gender, racial, or ethnic lines. In healthcare, for example, disease symptoms often vary between males and females, creating the risk of bias or misdiagnosis in AI-based disease detection and monitoring systems trained on gendered datasets. And while AI systems have been shown to be as accurate as dermatologists in detecting melanomas, the datasets used to train them often underrepresent darker skin types, so their accuracy may not carry over to the population at large. The lack of representation of racial minorities in AI training data has been investigated by Joy Buolamwini and Timnit Gebru, who found that several major facial recognition datasets were “overwhelmingly composed of lighter-skinned subjects,” with significantly lower accuracy rates for females and darker-skinned subjects.
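The kind of subgroup audit Buolamwini and Gebru performed can be sketched in a few lines: compute a model's accuracy separately for each demographic group and compare the results. The predictions, labels, and group tags below are invented toy data, not their benchmark.

```python
# Minimal sketch of a subgroup accuracy audit. All data is invented.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual). Returns {group: accuracy}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += (predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

toy = [
    ("lighter", 1, 1), ("lighter", 0, 0), ("lighter", 1, 1), ("lighter", 1, 0),
    ("darker", 1, 0), ("darker", 0, 1), ("darker", 1, 1), ("darker", 0, 0),
]
print(accuracy_by_group(toy))  # {'lighter': 0.75, 'darker': 0.5}
```

The point of such an audit is that a single headline accuracy figure can hide exactly the disparity it averages over; only the per-group breakdown reveals it.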
Geographic Tracking
Imagine being able to look at images of a city and identify patterns of inequality and urban deprivation.
This vision is now a step closer thanks to a team of scientists from Imperial College London, who developed an algorithm that uses Google Street View images of cities to identify patterns of inequality in incomes, quality of life, and health outcomes. I interviewed Dr. Esra Suel, an expert in transport planning who led the pilot project, who observed: “We wanted to understand how real people experience cities — their homes, neighborhoods, green spaces, environment, and access to urban services such as shops, schools, and sanitation. Yet, existing measures do not capture the complexity of their experiences in their entirety.” Dr. Suel sees three major benefits as visual AI systems evolve. “First, they can complement official statistics such as the census in providing timelier measures of inequality, so that governments can direct resources to areas based on changing needs. Second, they can uncover pockets of poverty that may be concealed by high average incomes — the poor neighborhood located side-by-side with a plusher part of the city, for example. Third, the use of visual AI could be a game changer for developing countries, which often lack the resources to collect official data on inequality.”
The element of speed becomes even more critical in tracking and controlling infectious diseases, which are a major source of health and educational inequality in the developing world. Canadian startup BlueDot used airport flight data and population grids to model the spread of the Zika virus from its origin in Brazil. More recently, BlueDot sounded an early alarm around the spread of the coronavirus in the Chinese city of Wuhan, using a combination of news reports, animal disease tracking, and airline ticketing data.
Yet this increased ability to digitally map and analyze our environs carries risks. One concern is that geographic AI systems could usher in a new era of “digital redlining” — a reprise of the 1930s U.S. practice in which government-backed mortgage providers denied loans to residents of minority neighborhoods, regardless of their creditworthiness, on the justification that those loans were “high risk.” Digital redlining could lead businesses to eschew lower-income areas, for example by denying access to insurance coverage or imposing higher premiums. Even worse, geographic algorithms could make it easier for unscrupulous operators to identify areas and households with high rates of dependency on gambling or alcohol, for example, and target them with predatory loans.
Moreover, the predominant use of such systems in poorer areas could itself be deemed unfair or discriminatory, to the extent that they target particular areas or socio-economic groups. To take one example, more and more governments are using AI systems in their welfare and criminal justice systems. In the Netherlands, a court recently ordered the government to stop using an AI-based welfare surveillance system to screen applications for fraud on the grounds that it violated human rights and was being used predominantly in poorer immigrant neighborhoods.
Delivering Dividends for Equality
How can these frontier AI technologies be harnessed as a force for greater equality while minimizing the potential for misuse and bias? While inequality is a complex problem with many dimensions, three actions can set policymakers and business leaders moving in the right direction.
Get the basics right.
The simple truth is that much of the world’s population, especially in poorer countries, stands to miss out on the benefits of AI for one reason: lack of access to basic digital infrastructure. And here the statistics make for sobering reading: less than half the population in developing countries has access to the internet, a figure that falls to 19% for the very poorest countries. There is also a growing gender imbalance in internet usage, with 58% of men globally using the internet compared to 48% of women. A first priority must be to accelerate the roll-out of broadband infrastructure, particularly in the developing world, which could benefit from low-cost AI applications in healthcare and education. Public-private partnerships, the use of low-cost sensor technology, and innovative pricing models can also help to increase access.
Spread the benefits.
To guard against the use of AI for cherry-picking profitable customers, or conversely, digital redlining, regulators can borrow from some of the tools of trade policy and utility regulation. One option would be a kind of “most-favored customer” rule, where operators would need to offer similarly advantageous terms to all within a defined group or area. Such requirements would provide reassurance to customers that they are not being treated inequitably. Another option, from utilities regulation, would be some kind of universal service fund where businesses collectively fund services in poorer areas in return for the right to provide profitable services elsewhere. Businesses can also look to new forms of social enterprise, working collaboratively with governments and private investors to provide low-cost services to groups at particular risk.
Select for unbiasedness.
Most real-world datasets are, almost by definition, not statistically representative: they reflect the outcome of various societal and institutional biases. A healthcare database, for example, reflects a series of filters around who gets the disease, who gets treated, and whose data gets recorded in the database. So we need to correct for built-in biases at every turn. In using AI-based systems, a first step for businesses, governments, and regulators must be to scrutinize carefully the process by which their training datasets are created. Greater openness around the broad structure and parameters of datasets can help organizations spot gaps and biases, as well as provide extra reassurance around the integrity of such data.
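One simple first check when scrutinizing a training dataset is to compare each group's share of the data against its share of the target population. A minimal sketch, with invented group names and figures:

```python
# Hedged sketch: measuring representation gaps in a dataset.
# Group names, counts, and population shares are invented for illustration.

def representation_gaps(dataset_counts, population_shares):
    """Return each group's dataset share minus its population share."""
    total = sum(dataset_counts.values())
    return {
        group: dataset_counts[group] / total - population_shares[group]
        for group in dataset_counts
    }

counts = {"group_a": 800, "group_b": 150, "group_c": 50}
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
gaps = representation_gaps(counts, population)
print(gaps)  # group_a over-represented (~+0.20), group_c under-represented (~-0.10)
```

A check like this catches only the crudest gaps; it says nothing about biases in how labels were assigned or which cases ever reached the dataset, which is why scrutiny of the collection process itself remains essential.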
Will AI prove to be the great leveler or a new divider? The answer lies in our own hands. By taking action now to address biases and risks, businesses and governments can start to make AI a true force for social progress and economic prosperity.