While artificial intelligence for self-driving cars and virtual assistants gets a lot of attention, the past few months have seen a wave of AI advancements aimed at the tasks of analysts. “Analyst” is a ubiquitous role, found in every industry that touches data. Analysts take the measurements we make of our world and use them to answer relevant business questions — a role that is critical to extracting value from data. Yet AI appears to be encroaching ever further on that role.
This month Apple acquired Lattice.io for $200 million to automatically turn unstructured data into structured data — a task that normally falls to analysts. And a U.S.-based startup called Lapetus is looking to displace insurance risk analysts with artificial intelligence that the company says predicts life expectancy more accurately than traditional methods. These advancements and others like them raise the question: Will AI replace analysts?
The short answer is no. But maybe not for the reasons you suspect.
More human than human
When most people think of artificial intelligence, they think of a coldly rational decision maker, lacking in emotion — like Data, the fictional android from Star Trek. That may have been an accurate description in the early days of AI, when programmers wrote custom rule engines to respond to certain scenarios. But as AI and machine learning have progressed, algorithms have become incredibly good at pattern recognition, and have started to act more biologically — more like instincts based on experience than decisions based on logic.
In Daniel Kahneman’s Thinking, Fast and Slow, the author describes two systems of human thinking: System 1 is automatic and intuitive, while System 2 is conscious, effortful, and logical. If the automatic system is the one producing our emotions — our reflexive response to things that might harm us (fear) or bring us good things (joy) — then AI is becoming much more like the emotional system than the rational one. In fact, reinforcement learning, in which an AI receives positive and negative signals in response to its actions and adjusts its behavior over time, already works remarkably like the way our emotional responses are shaped by past experience. And given that there are systematic errors in the automatic mode of our thinking, as Kahneman identified in his Nobel Prize-winning research, AI may surpass us in the quality of a “gut reaction” before it improves upon our logic.
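To make that analogy concrete, here is a minimal Python sketch (my illustration, not anything built by the researchers or companies mentioned here) of an agent whose preferences are shaped purely by reward signals. Every name in it, from BanditAgent to the toy environment, is hypothetical; the point is that the agent ends up favoring an action without ever reasoning about why.

    import random

    # Toy reinforcement learner: it keeps no model of why an action is good,
    # only value estimates accumulated from past rewards, which is closer to a
    # conditioned gut reaction than to deliberate, step-by-step logic.
    class BanditAgent:
        def __init__(self, n_actions, learning_rate=0.1, epsilon=0.1):
            self.values = [0.0] * n_actions  # learned "feelings" about each action
            self.lr = learning_rate
            self.epsilon = epsilon

        def act(self):
            # Mostly follow the strongest learned preference, occasionally explore.
            if random.random() < self.epsilon:
                return random.randrange(len(self.values))
            return max(range(len(self.values)), key=lambda a: self.values[a])

        def learn(self, action, reward):
            # Nudge the preference for this action toward the signal just received.
            self.values[action] += self.lr * (reward - self.values[action])

    # Hypothetical environment: action 2 usually pays off, the others do not.
    agent = BanditAgent(n_actions=3)
    for _ in range(1000):
        a = agent.act()
        reward = 1.0 if (a == 2 and random.random() < 0.8) else -1.0
        agent.learn(a, reward)

    print(agent.values)  # the agent now "likes" action 2, with no explanation attached

After enough trials, the value for action 2 dominates the others, and the agent's behavior looks like instinct rather than deduction, which is the parallel Kahneman's System 1 invites.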
With this shift toward System 1, some of the tasks that were once considered uniquely human are now within the reach of advanced AI. Algorithms are so adept at pattern recognition that AI can judge emotions and has learned to spot fear and joy in human faces. AI has written poetry and composed music, as in Ji-Sung Kim’s deepjazz project. All of this shows that the line separating AI from human intelligence isn’t quite where most of us thought it was.
The human touch
Yes, AI is advancing at an incredible pace and doing things once thought to be the sole province of humans, but there are still clear areas where algorithms need humans.
While AI is a master of pattern recognition, algorithms can only operate on the parts of the world that humans can precisely describe to them. A Go board, for example, is a closed environment; even though the number of potential combinations is mind-boggling, we can describe both the state of the board and the objective of the game exactly, in a tiny amount of data and code.
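For a sense of scale, here is an illustrative sketch (mine, not a representation any Go program is known to use) showing that the entire state of a 19-by-19 board, where each point is empty, black, or white, packs into under a hundred bytes.

    # Illustrative only: packing a full 19x19 Go position into a compact byte string.
    # Each point is empty (0), black (1), or white (2), so two bits per point suffice.
    BOARD_SIZE = 19

    def encode(board):
        """board is a list of 361 ints in {0, 1, 2}; returns a compact bytes object."""
        packed = 0
        for point in board:
            packed = (packed << 2) | point
        return packed.to_bytes((2 * len(board) + 7) // 8, "big")

    empty_board = [0] * (BOARD_SIZE * BOARD_SIZE)
    print(len(encode(empty_board)))  # 91 bytes for the complete game state

The rules and the objective of Go fit in a similarly small amount of code; the open-ended world an analyst works in does not.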
The work of an analyst, however, does not just involve conducting data analysis within closed environments. The analysis must be applied to the outside world where there is much more context influencing the interpretation. For example, while AI connected to sensors might be able to analyze the soil on a plot of land and optimize yield more efficiently than a human, it doesn’t know what impact the soil conditions have on the flavor of the resulting crop. As AI becomes better at closed analysis, humans are no less valuable for applying that analysis to the world at large. The final objective of data analysis is always a human one. Whether the analysis is used to guide the creation of a product or to inform decisions, the ultimate consumer is human.
Understanding what it means to be human and caring about the human experience are intrinsically linked to the analysis process. It’s unlikely that an algorithm is going to learn to understand humans anytime soon since, until we have better brain-computer interfaces, it’s very difficult to describe the contents of our minds to a computer.
What this means is that our humanity is still very much an asset. While machine learning is instrumental in making the analysis process more efficient, algorithms cannot choose the human goals — that job requires a level of empathy that is out of the grasp of AI.
As Kim Scott put it in her recent book on leadership, Radical Candor, writing about management: “Your humanity is an asset to your effectiveness, not a liability.”
Rise of the machine manager
The future for analysts is much less dystopian than the headlines suggest. These advances in AI look less like replacements and more like efficient assistants. Like a manager, every human will have a task force of AIs doing pattern matching and closed-environment analysis. The analyst's job will be to point the AI at the right questions and to decide how to apply the resulting analysis to problems in the real world. As long as the ultimate consumer of analytics is a human, human analysts aren't going anywhere.
David Crawford is the director of software engineering at Alation, a data catalog company.