Recurrent Intelligence
Artificial Intelligence is in vogue these days. Whether it's Elon Musk tweeting ominously about the looming dangers of AI, or the daily gaggle of CNBC analysts discussing how AI will upend capitalism as we know it - you can hardly turn on a screen without seeing some reference to the impending, machine-god future. It turns out that we've been here before, in some sense.
In What The Dormouse Said, John Markoff traces the early history of personal computing, and its curious intertwining with 60s counterculture. Among the book's central characters is Douglas Engelbart, a computer scientist obsessed with what would come to be known as "Intelligence Augmentation". The transistor was barely on the scene when Engelbart began writing about a humanity augmented by computing. He believed that both the complexity and urgency of the problems facing the average worker were increasing at an exponential rate - and it was therefore critical to develop fundamentally new tools for the worker to wield.
At the time, this conviction wasn't widely shared by Engelbart's peers. Developing computing tools that could amplify human abilities was seen as interesting engineering fodder; as a serious research focus, it was considered shortsighted. Buoyed by the incredible advances of the prior decade, artificial intelligence had become the preeminent focus of computing research. Computers had demonstrated the ability to solve algebraic word problems and prove geometric theorems, among myriad other tasks; wasn't it simply a matter of time before they could emulate complex aspects of human cognition?
Engelbart had drawn significant inspiration from the writings of Vannevar Bush, the head of the U.S. Office of Scientific Research and Development during World War II. Beyond his operational leadership during the war, Bush became famous for his instrumental role in establishing the National Science Foundation, and his musings on the future of science. Having overseen the Manhattan Project and other wartime efforts, he grew increasingly wary of a future where science was pursued primarily for destructive purposes, rather than discovery. Bush believed that avoiding such a future was contingent on humanity having a strong collective memory, and seamless access to the knowledge accumulated by prior generations.
In his most famous piece, As We May Think, Bush conceived of the "Memex", a personal device that could hold vast quantities of auditory and visual information. He saw the pervasive use of Memex-like devices as a necessary component of a functional collective memory. Engelbart, like Bush, believed that the salient idea wasn't simply the ability to store and retrieve raw information; it was the ability to leverage relational and contextual data, which captured the hypotheses and logical pathways explored by others. Engelbart extended this vision, imagining computing tools that would allow for both asynchronous and real-time communication with colleagues, atop the shared pool of information.
The computing community's focus on artificial intelligence throughout the late 50s and early 60s meant that Engelbart, with his fixation on intelligence augmentation, struggled to realize his vision. Most of the relevant research dollars were flowing to rapidly growing AI labs across the country, within institutions like MIT and Stanford. Engelbart worked for many years at the Stanford Research Institute (the non-AI lab), spending his days developing magnetic devices and electronic miniaturization techniques, and his nights distilling his dreams into proposals. In 1963, his persistence paid off; ARPA (the agency later renamed DARPA) decided to fund Engelbart's elaborate vision, leading to the creation of the Augmentation Research Center (ARC).
The following years saw an explosion in creativity from the researchers at ARC, who produced early versions of the bitmapped screen, the mouse, hypertext, and more. All of these prototypes were integrated pieces of the oN-Line System (NLS), a landmark attempt at a cohesive vision of intelligence augmentation. In 1968, Engelbart's team showcased NLS in a session that's now known as the "Mother of All Demos". The presentation is charmingly understated; Engelbart quietly drives through demonstrations of the mouse, collaborative document editing, video conferencing, and other capabilities that would become ubiquitous in the digital age.
From there, the future we know unfolded. Xerox PARC built upon Engelbart's concepts, producing the Alto workstation - a PC prototype that sported a robust graphical user interface. Steve Jobs would cite the Alto as one of Apple's seminal influences, prior to the creation of the Macintosh. The Macintosh would become the first commercially successful PC with a graphical user interface, motivating Microsoft (and others) to follow suit. As the industry took shape, channeling the ethos of augmentation, Engelbart would see his convictions vindicated. Alas, he would do so from the sidelines, growing increasingly obscure while others generated unprecedented wealth and influence.
Despite significant progress in fields like machine vision and natural language processing, the enthusiasm around AI would wane by the mid-70s. The post-war promise of machine intelligence had been nothing short of revolutionary, and the field had failed to deliver on the hype. The American and British governments curtailed large swaths of funding, publicly chiding what they felt had been misguided investments. In the estimation of one AI researcher, Hans Moravec, the "increasing web of exaggerations" had reached its logical conclusion. The field would enter its first "AI winter", just as the personal computing industry was igniting.
It's difficult to analogize the rise of personal computing; there is hardly an inch of our social, economic, or political fabric that hasn't been affected (if not upended) by the democratization of computational power. While we don't necessarily view our smartphones, productivity suites, or social apps as encapsulations of augmentation, they are replete with the concepts put forth by Engelbart, Bush, and other pioneers. Even so, some argue that the original vision of intelligence augmentation remains unfulfilled; we have seamless access to vast quantities of information, but has our ability to solve exigent problems improved commensurately?
Since the original winter, AI has continued to develop in cycles. Suffice it to say, we're in the midst of a boom; compounding advancements in commodity hardware, software for processing massive volumes of data, and algorithmic approaches have produced what's now estimated to be an $8B market for AI applications. Media and marketing mania aside, there is a basis for today's hype: organizations that sit atop immense troves of data, such as Facebook and Google, are utilizing methods like deep learning to identify faces in photos, quickly translate speech to text, and perform increasingly complex tasks with unprecedented precision.
However, even the most sophisticated of these applications is an example of narrow AI; while impressive, it is categorically different from general AI - the sort of machine cognition that was heralded during Engelbart's time, which has yet to appear outside of science fiction. Many of today's leading AI researchers still consider general AI to be the ultimate prize. DeepMind, a prominent research group acquired by Google, has stated that it will gladly work on narrow systems if they bring the group closer to its founding goal: building general intelligence.
Will AI become the dominant paradigm of the next 30 years, in the way that augmentation has been for the past 30? Perhaps the question itself is needlessly dichotomous. Computing has grown to occupy a central role in today's world; surely we possess the means to pursue fundamental breakthroughs in both augmentation and AI. It's telling that Elon Musk, concerned about the unfettered development of AI, has also created Neuralink, a company aiming to push augmentation into the realm of brain-computer interfaces.
The frontiers of both paradigms are expanding rapidly, with ever-deepening investment from companies, governments, universities, and a prolific open source community. As exciting as each is individually, it stretches the imagination to think about how the trajectories of Artificial Intelligence and Intelligence Augmentation might intertwine in the years ahead.
Buckle up.