Predictive, data-driven software is becoming ubiquitous, and as it does so, our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us for the first time since the onset of humanism in the 18th century, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.
Of all the arenas in which these predictions fascinate us and compel our decision-making, perhaps the most prevalent are those that see algorithms foretell the behaviors of our fellow human beings: what they prefer, what they react to, where they go, who they’ll flirt with, and whether they’re likely to pay back a loan or commit a crime.
Quite simply, we are coming to believe that machines know us better than we can know ourselves.
Perhaps the most jarring example of this new reality comes in the form of emotion-tracking AI. These systems claim to be able to read our moods, emotions, and personality traits by analyzing the micro-movements of our faces. According to practitioners like Human, such systems can make unbiased assessments of people in a way that bypasses the highly flawed cognitive biases of mere mortals.
Unsurprisingly, recruiters who are keen to circumvent human prejudices with regard to factors like race and gender are slowly adopting this software. But is it really the case that smart AI like this can be ethically neutral?
Outside of popular concerns about data privacy, here are three reasons we should be cautious about the use of emotion-tracking AI in recruitment.
1. Humans are complex
Humans are vastly complex, and yet the very nature of AI systems is that they seek to simplify complexity into easily digestible chunks of information. Swathes of data are often crudely categorized, and these categories are then used as the basis for further extrapolation. A glance, a click, an address, a purchase — they all become proxies for something else: who we are, what we earn, how we dress, which cereal we prefer.
In the case of emotion-tracking software, small facial cues like frowns and smiles are ultimately taken to signify something more profound about the way we behave generally, like whether we’re honest or passionate. And yet, however strong the correlation is between, say, a frown and confusion, we must also remember the golden rule that correlation does not imply causation. Just because these two occurrences often go hand in hand does not make it a provable fact that confusion causes all frowns, or that either factor automatically implies the presence of the other.
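To see why that distinction matters, consider a toy simulation (my own illustration with made-up numbers, not any vendor’s actual model): if some hidden factor, such as cognitive load, nudges a person toward both frowning and feeling confused, the two signals will correlate strongly even though neither causes the other.

```python
# Toy simulation (illustrative only): two signals driven by a shared hidden
# factor correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical hidden factor, e.g. cognitive load.
cognitive_load = rng.normal(size=n)

# "Frown" and "confusion" scores each depend on the hidden factor plus
# independent noise; neither depends on the other.
frown = 0.8 * cognitive_load + 0.6 * rng.normal(size=n)
confusion = 0.8 * cognitive_load + 0.6 * rng.normal(size=n)

# Strong correlation (roughly 0.6) despite no causal link between the signals.
print(np.corrcoef(frown, confusion)[0, 1])
```

An algorithm trained only on the co-occurrence of frowns and confusion would happily treat one as a stand-in for the other, and that is precisely the leap this software asks us to trust.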
Even Paul Ekman, the co-developer of the Facial Action Coding System (FACS) used to train these intelligent algorithms, admits that “no one has ever published research that shows automated systems are accurate.”
These machines are supplying evidence, but not proof, of certain personality traits. And their findings come from plowing through extensive, but not exhaustive, databases of human expression. Currently, AI tracking software isn’t part of many hiring managers’ toolkits, but we should question its robustness if its influence grows.
2. Emotions don’t necessarily outweigh ability
Even if we could unassailably prove the reliability of an AI emotion tracker, other important considerations remain. People seek employment for a variety of reasons. Most individuals simply want to feed their families, pay their rent, and perhaps take a vacation once in a while. Not every role published on the job market is likely to elicit emotions like passion, curiosity, or a meaningful level of enthusiasm. And yet it feels like this alone shouldn’t exclude a decent, capable candidate from the hiring process or diminish their chance of success.
It’s reasonable to assert that not every eager applicant is right for the role, and not every potentially excellent employee wants to lead a cheer about the job or the company they hope to be hired by. But Loren Larsen, the chief technology officer at HireVue — a company that uses this AI to help clients like Unilever — told the Financial Times that emotion tracking could actively privilege the “right” (i.e. positive, enthusiastic) emotion over historical data-points like qualifications and experience.
Larsen says, “[Recruiters] get to spend time with the best people instead of those who got through the resume screening. You don’t knock out the right person because they went to the wrong school.”
In many ways, it’s easy to view this as a positive. The technology could cut off damaging establishment biases and “old school tie” mentalities at their source, but it also appears to increase the opportunity to deselect other candidates on the seemingly superficial basis of not making the right face at the right time (to put it crudely). If the unenthusiastic are doomed to remain unemployed, it feels like we are replacing one pernicious feedback loop with another.
Making a decision that is not a wrong decision does not always mean making the right decision. It can just mean making a different wrong decision.
3. The technology could change human behavior
Finally, one of the main objections to the idea that we must all have had a top education to succeed is that it forces a kind of homogenization, while at the same time giving priority to members of society who are better able to attain certain standards (usually due to unearned privilege). By insisting on strict criteria for background and experience, employers can obstruct many good applicants. Increasingly, there’s a consensus that companies need to look beyond the cookie-cutter mold to broader signs of promise.
And yet while emotion tracking avoids forcing us to be Harvard-educated overachievers in order to succeed, might it not just encourage us to be some other way — a way that changes our behaviors and how we express ourselves as humans?
No longer will firms simply dictate our university major or how many internships we must suffer; they could also predetermine the ways in which we exist in the world. This could cause one of two things to happen: 1) People who naturally exude honesty, dedication, and enthusiasm via their facial movements bounce up to the top of the list, perhaps bypassing more capable candidates; and 2) Those who continually fail to find a job decide to experiment with the way they present themselves until they hit upon a facade that appeals to larger corporations.
What’s wrong with that, you may ask? Well, arguably, this hands large firms the power to dictate the parameters of a mass behavioral homogenization. Eccentrics, shy folk, and those who naturally suppress emotional responses could find themselves factored out by an algorithm. If you don’t behave in a way that aligns with the new, desirable norm, it’s shape up or ship out.
What’s more, over time we might not simply see these people edited from the hiring process, but also vanishing from society more broadly.
There are inevitable counterarguments to some of my objections. The vastness of the databases held by companies like Affectiva means that the range of facial movements monitored can be pretty broad and cover a span of cultural (and other) differences. But unless the system has surveyed a database comprising the entire human race, it is still dependent upon comparatively narrow sets of examples. This is a problem if you unknowingly express yourself in a way that is atypical.
Lastly, when considering new, seemingly neutral mechanisms like this, it is always important to remember that no system is totally impartial. Whether bias slips in via a human programmer or the balance of data, it can sit quietly festering in the background, damaging opportunities for a minority of people.
And even where damaging biases are removed, there are still reasons to be cautious about ceding judgment to this kind of AI. We must be sure that the path we’re following leads somewhere we wish to go.
Whatever the future holds for emotion tracking, we should not simply accept its results as fair or fact. We must question, critique, and — when needed — deploy our own human judgment and semantic knowledge. Sometimes machines move quickly and “talk a good talk,” but, at least at this stage, they still do not truly understand the human condition. We shouldn’t allow this speed, convenience, and slick AI marketing to convince us otherwise.
This story originally appeared on YouTheData.com. Copyright 2018.
Fiona J McEvoy is a tech ethics researcher and founder of YouTheData.com.