As we create ever-smarter machines, we’ve had to rewrite convictions we’ve held since the industrial age began. Data has become faster and more layered as a result of machines’ incredible complexity. Once that complexity passed the threshold of our understanding, a new chapter in our relationship with tools began.
A lot of what we have been doing with technology is rooted in the early days of tooling: the wielding of hammers, the congregation around farriers, and the convening around radio sets. The tools themselves have changed, but the mental models we use to interact with them have remained relatively constant. Tools have simply become faster, stronger, and wireless.
A drill is a stronger hammer, a train is a faster car, and a cellphone is a wireless version of a landline. This all changed with the introduction of the Electronic Numerical Integrator and Computer (ENIAC), the first general-purpose electronic computer.
Technological evolution
ENIAC was installed at the University of Pennsylvania in 1946 and was the first machine that could be reprogrammed to perform many different tasks. As such, the humans tasked with operating it also had to switch mental models with each new task.
This was fine as long as we used single-dimension computing in a tidy, linear, and cognitively manageable way, which is exactly what Model View Controller (MVC) is. MVC is a software architecture pattern formulated in the late 1970s by Trygve Reenskaug, working alongside Alan Kay and Adele Goldberg at Xerox PARC while the group was building Dynabook and Smalltalk.
MVC is based on the idea of a stationary database (the model), an interface point (the view), and a set of controllers that connect the two. The catch is that data can only move in one direction at a time. Whether you’re sending a tweet, fetching a query result, or reading an article, data moves either toward the user or toward the database, but never both at once.
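To make the pattern concrete, here is a minimal sketch in Python. The class and method names are my own illustration, not taken from any real framework; the point is only that each controller call moves data in exactly one direction, either toward the model or toward the user.

```python
# A minimal MVC sketch (illustrative names, not a real framework).
# Each request moves data one way: user -> model (a write), or
# model -> user (a read), never both in the same trip.

class Model:
    """The stationary data store."""
    def __init__(self):
        self._articles = {}

    def save(self, article_id, text):
        self._articles[article_id] = text

    def fetch(self, article_id):
        return self._articles.get(article_id)


class View:
    """The interface point: turns model data into something readable."""
    @staticmethod
    def render(text):
        return f"<article>{text}</article>"


class Controller:
    """Connects the two; each method is a single one-directional trip."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def publish(self, article_id, text):   # user -> database
        self.model.save(article_id, text)

    def read(self, article_id):            # database -> user
        return self.view.render(self.model.fetch(article_id))


controller = Controller(Model(), View())
controller.publish("eniac", "ENIAC was installed in 1946.")
print(controller.read("eniac"))  # <article>ENIAC was installed in 1946.</article>
```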
A shift in attitude toward technology
Enter complexity. With the ever-growing taxonomy of tools, better sensors, and nested data technologies (deep learning, backpropagation, neural nets, and more), we are able to leverage statistical computing in new and exciting ways. But there is one big caveat: we can’t fully understand what is going on. That is exactly why bias is such a big issue in algorithms. We can reason about the source of a bias, but we can’t immediately fix it.
The long-tail nature of algorithms makes them fundamentally different to operate than assembly lines. We can no longer simply step in, correct the model, or refine a data source and expect an instant fix. Once we’ve made a change, we need to be patient and let it ripple downstream into the model, as the sketch below illustrates.
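Here is a toy sketch in Python of the point in the last two paragraphs; the data, groups, and labels are entirely hypothetical. The bias lives in the trained model, not only in the data, so correcting the source has no effect on what the deployed model does until the correction ripples through a retrain.

```python
# A toy illustration with hypothetical data: a naive "model" learns the
# majority label per group, so a skew in the training data becomes a skew
# in the model itself, and it persists until the model is retrained.

from collections import Counter

def train(examples):
    """Learn the majority label for each group from (group, label) pairs."""
    counts = {}
    for group, label in examples:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

# Skewed historical data: group "b" was almost always rejected.
history = ([("a", "approve")] * 90 + [("a", "reject")] * 10
         + [("b", "approve")] * 10 + [("b", "reject")] * 90)

model = train(history)
print(model)       # {'a': 'approve', 'b': 'reject'} -- the bias now lives in the model

# Cleaning up the data source alone does not touch the deployed model.
corrected = ([("a", "approve")] * 60 + [("a", "reject")] * 40
           + [("b", "approve")] * 60 + [("b", "reject")] * 40)
print(model["b"])  # still 'reject'

# Only after retraining does the fix flow downstream.
model = train(corrected)
print(model["b"])  # 'approve'
```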
This complexity, paired with new technical abilities, is bringing out interesting sets of biases and beliefs in humans. In particular, it exposes the epistemological (observer-specific) view of tools, which is often detached from their ontological (objective) properties. It’s interesting to see how some innovators in the technology space distance themselves from the everyday use of tools and instead write a mental narrative that places tools above or below the trajectory of the present.
Techno-religion vs. technophobia
Techno-religion holds the view that all technology is good. Thinking this way, however, limits your ability to have an open discussion about improvements and meaningful design. Real examples of this trend include the personification of algorithms, false promises of all-capable AI assistants, and eager anticipation of the reign of artificial general intelligence (AGI).
Technophobia is the opposing view: it assumes that anything we don’t understand (and that has the potential for power) will destroy us. This is where we forget that all data technologies (machine learning, deep learning, and everything currently branded as AI) are nothing more than tools. They’re simply data hammers running on fast computers, with far more data points than ever before.
A recent, ongoing exchange between Elon Musk and Mark Zuckerberg comes to mind when discussing the difference between techno-religion and technophobia. Zuckerberg serves as the poster child of techno-religion: a successful founder who owes a lot to the internet, and someone culturally positioned in what Silicon Valley stands for, creating technology for its own sake and waiting for humans to follow.
Musk’s view of technology is slightly more complex, as he does seem to waver between the opposing views (a pattern that itself lends itself to binary thinking). Musk was recently quoted as saying that “AI is a fundamental existential risk for human civilization.” While this is a valid point of view, he does not make a sound argument; the differences between brains and machines are wide and varied.
What about techno-sobriety?
Interestingly, the two camps intersect, especially around their views of AGI. Where techno-religion welcomes the brain as a computational problem to be solved, technophobia fears domination by the result. This taxonomy seems categorical in a nonconstructive way. Consider the inherent mental biases or character traits those thinkers carry: if you believe an algorithm is intelligent, there is an interesting reflection on your own mental biases to be had.
Ben Goertzel, chairman of the Artificial General Intelligence Society, points to the Coffee Test (originally proposed by Apple cofounder Steve Wozniak) as a good working definition of AGI: go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, and so on.
“This set of tasks that is seemingly easy for almost any adult to perform is currently insanely difficult for a computer. Creating AGI is a dramatically harder task than creating ANI. By most estimates, we are still more than two decades away from developing such AI capabilities, if ever,” authors Malcolm Frank, Paul Roehrig, and Ben Pring wrote in their book, What to Do When Machines Do Everything. (ANI here refers to artificial narrow intelligence: the single-purpose systems we already have today.)
In the meantime, I wonder what we can do to better understand our tools and write new mental models of usability. After all, techno-sobriety is the only path that puts these technologies into the hands of our customers. I don’t need a robot that can make my coffee, program a website, and cook me dinner; I need tools that operate more closely to the way I think. I need tools of hyper-contextualization, tools that understand my ever-changing intelligence rather than trying to mimic it.
As long as we operate without cognitive conviction, we’re doing our tools, our intelligence, and our businesses a disservice. Only a sober view and an enabling narrative will let us make sense of and leverage our instantiated uniqueness, legacy knowledge, and user needs.
Nitzan Hermon is a principal at Studio VV6, a communication studio and technology consulting company.