On Tuesday, news broke that Microsoft refused to sell its facial recognition software to law enforcement in California and to an unnamed country. The move earned the company some praise for consistency with its stated policy of opposing applications that threaten human rights, but a broader look at Microsoft’s actions over the past year suggests the company has been saying one thing and doing another.
Microsoft’s mixed messages
Last week, the Financial Times reported that Microsoft Research Asia worked with a university associated with the Chinese military on facial recognition tech that is being used to monitor the nation’s population of Uighur Muslims. Up to 500,000 members of the group, primarily in western China, were monitored over the course of a month, according to a New York Times report.
Microsoft defended the work as helping to advance the technology, but U.S. Senator Marco Rubio called the company complicit in human rights abuses.
Just weeks earlier, in an announcement endorsing the Commercial Facial Recognition Privacy Act, Microsoft president Brad Smith was quoted by Senator Roy Blunt’s office as saying that he believes in upholding “basic democratic freedoms.”
Along the same lines, Microsoft CTO Kevin Scott asserted in January that facial recognition software shouldn’t be used as a tool for oppression.
It’s damn confusing to stitch together the message Microsoft has sent over the past year across the many arenas in which it operates, particularly when accounting for statements made by Smith. The story begins in part last summer, when he insisted that Congress regulate facial recognition software to preserve freedom of expression and fundamental human rights.
“While we appreciate that some people today are calling for tech companies to make these decisions — and we recognize a clear need for our own exercise of responsibility, as discussed further below — we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic,” Smith wrote in a blog post. “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology.”
That assertion was followed last December by the introduction of six principles for facial recognition software usage, as well as Smith’s continued insistence on regulation, out of fear of a “commercial race to the bottom” among tech companies.
That’s how Microsoft presents itself in Washington, D.C. and overseas, but the company has also sent conflicting messages in its home state of Washington.
Over the past few months, Microsoft has publicly supported a Washington state senate privacy bill that would require businesses to get consent before using facial recognition software. At the same time, Microsoft attorneys have appeared at statehouse hearings to argue against HB 1654, a bill that would impose a moratorium on the technology’s use until the state attorney general can certify that facial recognition systems are free of race or gender bias.
Microsoft’s legal counsel has argued that the third-party testing stipulated in the bill it supports should be enough to encourage accountability, but that argument flies in the face of Microsoft’s own principle that facial recognition software should treat all people fairly.
Facial recognition software in society
What seems clear after the past month of politically tinged drama at Amazon, Google, and Microsoft is that the largest companies in AI aren’t afraid to engage in some ethics theater or ethics washing, sending signals that they can self-regulate rather than carrying out genuine oversight or reform.
Perhaps self-regulation is, as deep learning pioneer Yoshua Bengio put it, about as effective as self-taxation.
What’s also clear is that Smith is correct in one assertion: the emergence of facial recognition software that can run in real time on live video forces the question of how people around the world want this technology to be used in society.
According to analysis by FutureGrasp, a company working with the United Nations on technology issues, only 33 of 193 U.N. member states have created national AI plans.
This story will continue to play out as governments around the world decide whether practical applications of technologies like facial recognition can avoid overreach and the mistreatment of minority populations, or whether, as the city of San Francisco argued in its proposed ban, the technology’s negatives outweigh its positives.
Just as people often invoke The Terminator in worst-case scenarios for autonomous weaponry, Smith repeatedly invokes 1984 when warning of a surveillance state. But it’s tough to reconcile Microsoft’s defense of human rights in California with its complicity in violations in China. Likewise, it’s hard to square the company’s insistence that facial recognition systems be fair with its opposition to a moratorium that would make fairness an obligation before deployment.
However societies work out how facial recognition systems should be applied in the years ahead, companies like Microsoft will be at the table, and they should guard against Orwellian scenarios in both their words and their actions if they want to retain the trust of citizens and lawmakers.