At the Movethedial Global Summit in Toronto yesterday, I listened intently to a talk titled “No polite fictions: What AI reveals about humanity.” Kathryn Hume, Borealis AI’s director of product, listed a bunch of AI and algorithmic failures — we’ve seen plenty of that. But it was how Hume described algorithms that really stood out to me.
“Algorithms are like convex mirrors that refract human biases, but do it in a pretty blunt way,” Hume said. “They don’t permit polite fictions like those that we often sustain our society with.”
I really like this analogy. It's probably the best one I've heard so far, because it doesn't end there. Later in her talk, Hume took it further, after discussing an algorithm used in the U.S. to predict future criminals that was found to be biased against black people.
“These systems don’t permit polite fictions,” Hume said. “They’re actually a mirror that can enable us to directly observe what might be wrong in society so that we can fix it. But we need to be careful, because if we don’t design these systems well, all that they’re going to do is encode what’s in the data and potentially amplify the prejudices that exist in society today.”
Reflections and refractions
If an algorithm is designed poorly or — as almost anyone in AI will tell you nowadays — if your data is inherently biased, the result will be too. Chances are you’ve heard this so often it’s been hammered into your brain.
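To make that concrete, here is a minimal, hypothetical sketch (synthetic data, scikit-learn's LogisticRegression; none of it comes from any real lender): if the historical approvals in the training data are skewed against one group, a model trained on that history simply reproduces the skew.

```python
# Hypothetical sketch: a toy model trained on historically biased labels
# reproduces that bias in its predictions. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the same underlying income distribution...
group = rng.integers(0, 2, n)        # 0 or 1, a protected attribute
income = rng.normal(50, 10, n)

# ...but historical approvals were skewed against group 1.
historical_approval = (income + rng.normal(0, 5, n) - 8 * group) > 50

# Train on the biased history, with group available as a feature.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, historical_approval)

# The model faithfully mirrors the historical skew.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted approval rate {rate:.2f}")
```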
The convex mirror analogy is telling you more than just to get better data. The thing about a mirror is that you can look at it. You can see a reflection. And a convex mirror distorts that reflection: the closer an object gets, the larger it looms, until whatever is nearest takes up most of the mirror.
Take this tweet storm that went viral this week:
The @AppleCard is such a fucking sexist program. My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does. No appeals work.
— DHH (@dhh) November 7, 2019
Yes, the data, algorithm, and app appear flawed. And Apple and Goldman Sachs representatives don’t know why.
So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.
— DHH (@dhh) November 8, 2019
Clearly something is going on. Apple and Goldman Sachs are investigating. So is the New York State Department of Financial Services.
Whatever the bias ends up being, I think we can all agree that giving one partner a credit limit 20 times larger than the other's is ridiculous. Maybe they'll fix the algorithm. But there are bigger questions we need to ask once the investigations are complete. Would a human have assigned a smaller multiple? Would it have been warranted? Why?
So you've designed an algorithm, and there is some sort of problematic bias in your community, in your business, in your data set. You might realize that your algorithm is giving you skewed results. If you zoom out, however, you'll see that the algorithm isn't the problem; it is reflecting and refracting the problem. From there, figure out what you need to fix, not just in your data set and your algorithm, but also in your business and your community.
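If you want a concrete first step toward noticing those skewed results, an audit can start as simply as comparing a model's outputs across groups. This is a hypothetical sketch using pandas; the column names and numbers are made up for illustration.

```python
# Hypothetical sketch of a minimal audit: compare a model's outputs across
# groups before (and after) shipping. The data below is invented.
import pandas as pd

results = pd.DataFrame({
    "group":        ["A", "A", "B", "B", "B", "A"],
    "credit_limit": [20000, 18000, 1500, 2200, 1800, 25000],
})

by_group = results.groupby("group")["credit_limit"].median()
ratio = by_group.max() / by_group.min()
print(by_group)
print(f"median credit-limit ratio between groups: {ratio:.1f}x")

# A large ratio doesn't prove discrimination on its own, but it tells you
# where to start looking -- in the data, the model, and the business rules.
```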
ProBeat is a column in which Emil rants about whatever crosses him that week.