Facebook got a lot of criticism over the Cambridge Analytica scandal, and Zuckerberg vowed, in a full-page ad, to do better at protecting users’ privacy. But this is not the first time political campaigns have used social media data during elections; the difference this time is that millions of users did not even know the platform was harvesting their data and using it to target them for political purposes.
The bigger problem is that what happened to Facebook was inevitable. Sure, Facebook as a closed system is especially harmful: a platform that can see your current interactions, controls the content it shows you, and can measure the results of both is a perfect fit for optimizing human behavior.
What I’m saying is that even if we did not have the Cambridge Analytica scandal, the fact would remain that social channels are harvesting our data. Take Twitter, for instance. You can easily see any likes and interactions people have had — that data is open to everyone. Use the Twitter API and you can automate its collection. Connect it to IBM Watson or some other enterprise service and you will instantly get access to thousands (if not millions) of records. And this data is not private by any means.
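To make that concrete, here is a minimal sketch of what such automated collection could look like, assuming the Tweepy library and valid developer credentials; the placeholder keys and the account name are hypothetical, and exact endpoint names and access rules have changed across Twitter API versions.

```python
# Minimal sketch: collecting a user's public likes via the Twitter API.
# Assumes the Tweepy library and valid developer credentials (placeholders below).
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET"
)
api = tweepy.API(auth, wait_on_rate_limit=True)

def collect_likes(screen_name, limit=1000):
    """Return the text of tweets a user has publicly liked."""
    liked = []
    for tweet in tweepy.Cursor(api.get_favorites, screen_name=screen_name).items(limit):
        liked.append(tweet.text)
    return liked

# Hypothetical usage:
# likes = collect_likes("some_public_account")
```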
The ingenious idea is to build a psychological profile based on the “likes” of users, then learn who to target and how to target them. Once you have built this profile, you can use it any way you please.
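As an illustration of that profiling step, the sketch below trains a toy classifier that maps a user’s liked posts to a trait label. The trait names and training data are hypothetical placeholders, and a real system would use far richer features, but the shape of the pipeline is the same.

```python
# Illustrative sketch of the profiling step: predicting a trait from likes.
# Training data and trait labels are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example: the concatenated text of one user's likes,
# paired with a known trait label (e.g., from a survey panel).
train_likes = ["loves hiking travel new places", "prefers routine stays local"]
train_traits = ["open", "cautious"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_likes, train_traits)

def profile(user_likes):
    """Score any user whose likes are public with the trained model."""
    return model.predict([" ".join(user_likes)])[0]
```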
The cycle does not need to be rooted in Facebook — one could build a profile from Twitter data and use that in Facebook ads. You only need the profile to train the AI, and once you’ve trained it, the technology can work its magic on any platform.
AI is getting more aggressive
As AI grows more intelligent, it will be able to read and analyze data from disparate sources. It will not need a uniform data feed or dozens of operators to separate the signal from the noise. For instance, there are AI technologies that can scan thousands of records in a matter of minutes and return results. This means that AI can scan websites, files, and documents and form a complete profile of each of us without breaking a single privacy law.
The information is out there, freely available to the public. It only becomes gold when a machine learning engine traverses all of those sources, collects the data in a single place, builds a profile from it, and fills in the gaps accordingly, all within minutes.
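A rough sketch of that aggregation idea is below, with hypothetical source and field names: each lookup is harmless on its own, and the value appears only once everything is merged into a single record.

```python
# Rough sketch: merge whatever is publicly available from several sources
# into one record per person. Source and field names are hypothetical;
# no single lookup is sensitive on its own.
def build_profile(name, sources):
    """sources: callables that each return a dict of public facts about `name`."""
    profile = {"name": name}
    for fetch in sources:
        # later sources fill gaps left by earlier ones
        profile.update(fetch(name))
    return profile

# Hypothetical fetchers, e.g. a personal-website scraper, a social feed reader,
# and a public-records lookup:
# profile = build_profile("Jane Doe", [scrape_personal_site, read_public_feed, public_records])
```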
Many users felt manipulated by Facebook following the Cambridge Analytica scandal, and it has led us to start questioning how Cambridge Analytica acquired the data it used. However, companies like it will soon have that data anyway, even without Facebook. We cannot even be sure the same thing is not happening again right now. Moreover, as described above, companies can collect this information through entirely legal means.
The problem is not Facebook. The problem is that we are not prepared for the threats that surround us.
The real threat
AI is most feared for its potential to either replace humans at work or annihilate them altogether. However, AI can’t really get creative — it can only repeat what humans do, though sometimes more efficiently. While it surely does a better job than many people in certain fields, leading to replacement worries, AI also creates new opportunities. Besides, automation attempts at major companies such as Tesla have proved that overdoing AI optimization is not practical — at least not yet.
The threat of AI taking our jobs or attacking humans is not as imminent as the threat of humans using the technology for nefarious purposes. It is how we use AI that poses the real threat. For example, companies like Netflix and Facebook can use our psychological profiles to help us find friends with similar interests or to tailor TV show recommendations, and that is harmless enough. In the case of Cambridge Analytica, however, the company used these profiles to elicit a certain behavior from its targets without their knowledge, which is setting off alarms for good reason.
A more severe possibility involves companies using your content and connections to shift your views. For instance, if you publish something containing ideas the system wants to dissuade you from, it could share it only with people who hold opposite views, generating a flood of negative responses and the impression that nobody agrees with you. Likewise, if your piece contains ideas the system wants you to hold onto or strengthen, it can share it only with like-minded people so you receive only positive feedback.
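A toy sketch of that selective-distribution mechanism follows, assuming the platform already has a stance score for each reader; the scores, modes, and function names are hypothetical.

```python
# Toy sketch: route a post only to readers whose estimated stance guarantees
# the kind of feedback the platform wants the author to see.
# Stance scores are hypothetical model outputs in [-1, 1].
def pick_audience(post_stance, readers, mode):
    """readers: list of (user, stance) pairs."""
    if mode == "discourage":
        # show the post only to people likely to disagree
        return [user for user, stance in readers if stance * post_stance < 0]
    if mode == "reinforce":
        # show it only to like-minded readers
        return [user for user, stance in readers if stance * post_stance > 0]
    return [user for user, _ in readers]
```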
Taking this a step further, governments could use this technology against their own people. China’s censorship apparatus, for instance, effectively creates a closed system that is entirely vulnerable to these kinds of manipulations. And the surveillance programs Edward Snowden revealed show that security agencies could control your traffic at the router level.
How to protect ourselves
AI will not go away. Our information is already out there, and we cannot rely solely on regulation to protect us; savvy actors outpace regulations by constantly creating new ways to alter our behavior. You might take the blockchain route to conceal and timestamp everything, but since not everyone is 100 percent on the blockchain, there will still be data leaks. This is why I believe in the approach Alan Turing suggested, that only a machine can defeat another machine: we need to arm ourselves with AI tools of our own and catch up.
An AI assistant that protects the interests of its user could be a feasible solution. This AI would need to be transparent and decentralized so we could be certain it wouldn’t serve any other parties behind the scenes. Such AI could “break the loop.” For instance, it could detect patterns of behavior optimization and understand what a publication is trying to make you do, and warn against that. The technology could even alter the content or block parts of it to neutralize such attempts. In the case of channeled traffic, an AI assistant could be helpful by detecting such patterns and automatically sharing the content beyond a single platform, all while sending the results back to the user.
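One way such a check might look in practice is sketched below: the assistant compares the feedback a post receives on a single platform against the user’s broader baseline and warns when the distribution looks artificially one-sided. The thresholds, inputs, and function names are hypothetical placeholders.

```python
# Sketch of one "break the loop" check an assistant could run.
# platform_reactions: list of +1 (positive) / -1 (negative) reactions to a post.
# baseline_positive_rate: the user's typical share of positive feedback elsewhere.
def audit_feedback(platform_reactions, baseline_positive_rate, threshold=0.30):
    if not platform_reactions:
        return "no data"
    positive_rate = sum(1 for r in platform_reactions if r > 0) / len(platform_reactions)
    if abs(positive_rate - baseline_positive_rate) > threshold:
        return "warning: feedback on this platform deviates sharply from your baseline"
    return "feedback looks consistent with your baseline"

# A fuller assistant might then share the post on other channels automatically
# and report the combined results back to the user, as described above.
```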
Much of what we thought about AI hasn’t happened, and a lot of things we did not think would happen have. In the end, what we are really up against is the humans behind the machines, rather than the machines themselves.
David Petersson is a developer and tech writer who contributes to Hacker Noon.