One day, artificially intelligent robots will replace human beings as the Earth’s dominant form of life, or so says Stephen Hawking (and Elon Musk). However, for all the impressive progress AIs and robots have made in recent years, the most urgent and real danger they pose to society lies elsewhere.
More specifically, the threat lies not in the possibility of AIs transcending the specific purposes they now serve (such as managing hedge funds or recruiting new employees) and rebelling against their owners. Rather, it lies in the opposite scenario: in just how supremely efficient AIs are at acting on behalf of their masters.
In fact, given that robots and AIs have yet to show even the faintest glimmer of self-determination or independence, it’s clear they aren’t (yet) a “new form of life,” as Hawking called them. Rather, what they are is an extension of the groups that create and use them. They are elaborate tools that enhance an organization’s ability to perform certain activities.
And it’s because they are extensions of their operators that they’re a potential threat to society. Bots are engineered and employed by particular groups of people with particular values and interests. This means their actions inevitably come to reflect and advance these values and interests, which aren’t necessarily shared by all groups and individuals. Their superior efficiency and productivity mean they’ll give a distinct advantage to the people with enough resources to harness them, enabling these people to reshape the world in their image at the expense of those who don’t have such resources.
To see how artificial intelligence could increase social, economic, and political inequality, you only have to look at recent examples of how AIs and machine learning algorithms are already exhibiting prejudice.
For instance, in May 2016 a ProPublica study revealed that recidivism risk-assessment algorithms used in multiple U.S. states were biased against African Americans: black defendants who did not go on to reoffend were falsely labeled "higher risk" 44.9 percent of the time, compared with 23.5 percent for white defendants.
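The disparity ProPublica measured is a false positive rate: among defendants who did not reoffend, how often the algorithm still flagged them as "higher risk," broken out by group. A minimal sketch of that calculation, using made-up toy records rather than the study's actual data:

```python
# Each record is (labeled_high_risk: bool, reoffended: bool).
# All data below is invented purely to illustrate the metric.

def false_positive_rate(records):
    """Share of non-reoffenders the algorithm still flagged as high risk."""
    non_reoffenders = [r for r in records if not r[1]]
    flagged = [r for r in non_reoffenders if r[0]]
    return len(flagged) / len(non_reoffenders)

group_a = [(True, False), (True, False), (False, False), (True, True), (False, False)]
group_b = [(True, False), (False, False), (False, False), (False, False), (True, True)]

print(f"Group A FPR: {false_positive_rate(group_a):.1%}")  # 50.0%
print(f"Group B FPR: {false_positive_rate(group_b):.1%}")  # 25.0%
```

An algorithm can produce this kind of roughly two-to-one gap even while being "accurate" on average, which is why overall accuracy alone says little about fairness.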
In another example, a research paper published in Science in April 2017 found that, when trained on a large corpus of word associations taken from the internet, machine learning tools acquired "human-like semantic biases" (i.e., stereotypes) regarding women and African Americans.
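The Science paper measured this through word-embedding association tests: in an embedding, each word is a vector, and cosine similarity between vectors tracks how strongly words associate in the training text. The sketch below shows the mechanics with tiny invented vectors; the real study used embeddings trained on billions of words of web text.

```python
import math

# Toy 3-dimensional "embeddings." Real embeddings have hundreds of
# dimensions and are learned from web-scale corpora; these values are
# invented purely to demonstrate the association test.
vectors = {
    "engineer": [0.9, 0.1, 0.2],
    "nurse":    [0.1, 0.9, 0.3],
    "he":       [0.8, 0.2, 0.1],
    "she":      [0.2, 0.8, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def gender_association(word):
    """Positive = closer to 'he'; negative = closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("engineer", "nurse"):
    print(f"{w}: {gender_association(w):+.3f}")
# "engineer" scores positive and "nurse" negative: the embedding has
# absorbed a stereotype from the (here, invented) co-occurrence statistics.
```

Nothing in the code above is malicious; the stereotype emerges entirely from the statistics of the text the vectors were built from, which is exactly the paper's point.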
Such instances of prejudice arise because AIs and robots learn only from the data and parameters they're fed. And more often than not, that data and those parameters come from the kind of people who generally design and use them: white, privileged males who stand on the higher rungs of the social hierarchy.
This is why AI facial recognition software has struggled to identify faces of color, and it's a big part of the reason AIs and robots function in ways that advance the interests of their makers and owners over those of others.
On top of this, their superior efficiency (and forecasts that they'll take over 24.7 million jobs by 2027, for example) means they'll stamp their makers' marks onto most areas of life and work. They'll put millions of working-class people out of work, they'll sway the rhythms of the home, and they'll give the advanced nations able to adopt them a massive commercial advantage over their developing rivals. Consequently, the nations and classes of people that manufacture and own them will accumulate even more global influence than they already possess.
This is why the growing discourse on awarding robots rights (and even citizenship) is so troubling. Robots and AI are already protected by the property rights of their owners and cannot be randomly destroyed by vandals, for instance.
As such, it becomes clear that granting them "rights" can't simply mean granting them the negative right to protection from destruction, since they already have that. It must instead amount to granting them the positive right to pursue their ends and purposes without suffering interference (e.g., political opposition) from, say, people who are troubled by the idea of robots.
In other words, the granting of such rights would effectively equate to the granting of special, protected status to the aims, purposes, and values of their owners, who would find their ability to use AI and robots to serve their own ends enhanced even further.
And in closing, this mention of values touches on another problematic theme that often crops up in discussions of AI. From the European Parliament to research institutions, it's regularly said that we need to ensure intelligent robots learn and uphold human values. Yet the question is: Whose values exactly will be taught to and upheld by robots, and who presumes to speak for all humanity?
Regardless of whether a truly universal set of values can be instilled into AI, such talk just goes to show that robots can’t help but act in ways that have ethical, social, and economic consequences in one direction or another, and that they therefore can’t help but act as agents for certain values.
And the worrying truth is that, seeing as how they’ll be made and controlled largely by a select class of corporations and industrialists, it’s most likely their superior abilities will give a distinct advantage — for better or for worse — to the values and interests of such a class.
Simon Chandler is a tech journalist who contributes to Wired and the Daily Dot.