Trolled Into Sustained Conflict

There’s an axiom in behavioral psychology: examine what someone is doing, rather than simply accepting what they’re telling you. Someone might claim that they don’t care about anyone, yet their actions betray a deep care for a family member. Or maybe they claim they love their job, despite the fact that their daily actions suggest otherwise. Even if you’re not a psychologist (I’m not), the dissonance between what people say and what they do is a useful lens for judging how much credence a narrative deserves.

The conventional narrative is that 2016 ripped the political dialogue in America asunder, pushing the left and right to unprecedentedly adversarial positions. If you’re a self-identified progressive, then it’s clearer than ever who the enemy is: Donald Trump, and his invidious apparatus (an unhinged GOP, uncurbed corporate avarice, and other sons of plunder); they seemingly won’t stop until The Commons are completely gutted, the environment is ruined, and corporations have twisted the world into something resembling Snow Crash. And if you’re conservative, it’s equally clear: liberal zealotry has gone nuclear, infecting the educational system, science, and every orifice of conventional and new media - spreading a poisonous anti-Western ethos that won’t stop till the tenets of capitalism and Judeo-Christian morality are obliterated. There are seemingly no sacred cows among the values that have built and sustained Western civilization.

(You’re somewhere in the middle, or nowhere identifiable on this scorched map? Oh hey, welcome - the ginger ale is in the back.)

Implicit in each side’s fervor is that they’re seeking a resolution; some new reality that, once manifested, would eliminate the need for partisan warfare. Generally speaking, progressive factions want to end the current nightmare and bring about a redistributive renewal that operates upon technocratic rationality. Conservatives want to stem the tide of liberal influence and fortify an American apparatus that facilitates the strength of individual sovereignty. The long-term goals for each side are unsurprisingly nebulous; though perhaps it isn’t reasonable to expect people to have a fully-formed conception of a “target society”.

Moreover, in each case, it isn’t clear what happens to the other side, once victory is achieved; do they simply acquiesce? Do they keep their views, but lose all levers of power? Again, maybe people simply haven’t thought through how this ends.

But there’s a more unsettling thought: neither side wants the conflict to end.

Let’s think back to the apocryphal origin of the Culture War: a particularly seedy bootloader emerged from one of the darker corners of The Internet (4chan), flooding an already-fragile American dialogue with polarizing memes and all flavors of misinformation. The resulting tensions continued to escalate until Trump was elected, at which point things really snapped - plunging us into the societal anti-pattern we now occupy. The story has a strange, almost mystical nature; as if some cosmic mover set forth a cycle of digital chaos.

Don Jolly shared a very astute observation in a recent discussion with Digibro: trolling is rarely intended to fortify or demolish a particular position. Instead, it’s often meant to engender and sustain conflict. I don’t think you can ascribe a singular motive to a mass of trolls from 4chan, but you could imagine an emergent “limbic” optimization - i.e., collectively find a way to maximally exacerbate the political tension that’s causing everyone (especially establishment figures) to lose their minds. The more successful the trolling, the more satisfying the results. Some might retroactively assert that the goal was to elect Donald Trump (the ultimate troll), but I think it’s clear that what endured wasn’t a specific reality, but rather a modality of sustained conflict.

urameshihill.jpg

If you observe what both online camps are doing, it certainly looks like conflict is the objective function. Every morning is a fresh pull at a Vegas slot machine: a new headline whips across social media, providing a focus for the day’s tribal warfare. Some headlines are perfect fodder (e.g., scissor statements), while others burn slower or require an editorial catalyst. The cost of participation is low (relative to physical conflict), and the potential entertainment value is unpredictably high. It almost resembles an e-sport; you hop on, coordinate with your team, and find a way to “win” the day’s exchanges - whether through upvotes and signal amplification, or publicly irritating the other side.

I don’t blame anyone for falling into this modality, given the sheer amount of stimulation that comes from participating. I mean, you could sit in the corner and recite Stoic meditations to yourself instead of tossing memetic grenades on Twitter with your friends - but really? Tuning out takes serious self-discipline. It’s a lot easier to jump aboard the Outrage Express, and see where it will wind up at the end of the day. The modern working day is littered with small pockets of free time, and if there’s a reliable way to get a dopaminergic response in a few taps, your brain isn’t going to pass it up easily.

Things get especially sticky when there’s sustained interaction between specific adversaries. As stimulating as general tribal participation might be, it doesn’t hold a candle to individualized animosity. It can be unsettling to watch these sorts of relationships metastasize; the purported context becomes less relevant, and the focus becomes the other person. Each day’s slot machine is a chance to see what incendiary thing your adversary has said; what position they’ve taken that might inadvertently expose them; an opportunity to turn more people against them; a chance to see them publicly buckle.

To be clear: many people do harbor earnest desires for (subjectively-defined) positive change, and I don’t think anyone reasonable can debate that. It’s just unclear how much that matters, when paired with a method of engagement that 1.) can’t sustain a discussion for longer than a few days and 2.) seems comically tuned towards theatrical conflict. It doesn’t help that this modality seems to be infecting all sorts of areas that aren’t explicitly political. Discussions on topics as varied as technology, education, and science have become similarly combative - often containing “proxy” tribes that you could readily identify as progressive or conservative.

Given that we seem stuck in this combative anti-pattern - how do we get out? That’s probably a topic for another essay, but the short answer is: I don’t think it’s practical to try to neutralize the voltage; a stimulation engine like this doesn’t simply turn off. We need a set of competitive modalities that can redirect our tribal impulses towards potentially fruitful ends; a set of alternatives that will make the Vegas outrage machine seem boring and unsatisfying by comparison. Perhaps it’s a new form of competitive factionalism, recalibrated by the weight of real actions and consequences, that incentivizes the construction of clear outputs rather than degenerate deadlock.

Have any ideas? You can find me on Twitter

The Lost History of Business Objects

It’s oddly difficult to find a detailed history of Business Objects, a seminal forerunner to the modern wave of self-service data analytics. The company was acquired by SAP in 2007, and most people nowadays only know about Business Objects as part of SAP’s “BODS” package - unaware of its standalone contributions to the software industry. Bernard Liautaud, co-founder and former CEO of Business Objects, gave a very insightful talk at the London Business School in 2014; from the recording, I was finally able to piece together the story.

Rewinding to 1990: Oracle was an emerging software juggernaut, spreading the gospel of the relational database throughout the largest public and private organizations in the world. SAP and other transactional software packages were successfully building business applications atop the new database technology. The world had collectively awoken to the fact that they needed robust, digital appliances to store their most important information. Bernard Liautaud was working at Oracle in Paris as a pre-sales engineer at this time, observing the mania firsthand.

bernardliautaudcover.jpg

The explosion in relational database adoption had introduced new ways of working: business users, particularly in areas like finance and logistics, were now able to crunch granular data pertaining to sales and inventory, create high-fidelity reports on the state of the business, and generate data-driven forecasts. The bottleneck, though, was typically the database administrator (DBA); this person managed the enterprise’s databases, triaged the glut of incoming data requests, and then figured out how to sensibly sequence the delivery. Each corporation’s Oracle installation was a production system, often serving as the source of truth for sales, inventory, and other critical functions; irresponsibly running expensive operations against this system could introduce major disruptions to the business - and DBAs served as the first line of defense.

Bernard observed that companies were desperately looking for ways to alleviate this bottleneck. At the time, relief typically only came through hiring more DBAs. SQL was still an emerging standard, and training (and trusting) business users to write well-crafted database queries was a bridge too far for most enterprises. Bernard happened to meet an independent engineer, Jean-Michel Cambot, who had a crazy idea for enabling the masses of non-technical users. Cambot envisioned a semantic layer consisting of “business objects” that provided intuitive abstractions atop the relational database. Using such a layer, business users would be able to craft queries and assemble reports using point-and-click interfaces rather than code.
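
To make the idea concrete, here’s a minimal sketch of what a semantic layer does - my own illustration with made-up object, table, and column names, not Cambot’s actual design. Business-friendly objects get mapped onto the SQL that actually runs against the database:

```python
# A toy "semantic layer": business-friendly objects that generate SQL, so an
# analyst never writes a query by hand. All names here are hypothetical.

BUSINESS_OBJECTS = {
    "Revenue": {"table": "sales", "column": "SUM(amount)"},
    "Region":  {"table": "sales", "column": "region"},
    "Quarter": {"table": "sales", "column": "quarter"},
}

def build_query(selected):
    """Translate a point-and-click selection of objects into SQL."""
    cols = [BUSINESS_OBJECTS[name]["column"] for name in selected]
    tables = sorted({BUSINESS_OBJECTS[name]["table"] for name in selected})
    group_by = [c for c in cols if not c.startswith("SUM(")]
    sql = f"SELECT {', '.join(cols)} FROM {', '.join(tables)}"
    if group_by:
        sql += f" GROUP BY {', '.join(group_by)}"
    return sql

# A business user "clicks" Region, Quarter, and Revenue...
print(build_query(["Region", "Quarter", "Revenue"]))
# -> SELECT region, quarter, SUM(amount) FROM sales GROUP BY region, quarter
```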

After Oracle’s marketing department passed on the idea, Bernard decided to form a new company to independently pursue it. Cambot wasn’t interested in a rigid working structure, but agreed to a 25% royalty on all sales in exchange for building the software - a deal that turned out to be very lucrative for him.

Now on his own, Bernard pitched the initial version of Business Objects to his former coworkers; while Marketing had panned the idea, Sales was willing to listen. Oracle was in a fierce battle with Sybase, and they were hunting for any edge that would allow them to corner the French market. Sales leadership found the pitch compelling; Business Objects would provide a novel “self-serve” analytical capability that extended the core value proposition of the relational database - and critically, for the time being, it only worked with Oracle. This clear alignment produced a half dozen quick sales for Bernard’s fledgling team.

As sales began to flow, Bernard needed additional funding to properly build out the initial team. In 1991, Business Objects became the first French software company (as far as they knew) to get seed funding ($1MM) from Silicon Valley investors. With the money, they were able to hire ~eight people in France, and critically, prove that they could sell to the American market. $1MM in sales in ‘91 gave way to $5MM in ‘92, and $14MM in ‘93. The value proposition was extremely legible, and their complementary relationship with Oracle continued to provide strong avenues for growth.

In 1994, after four years of growing at 150-200% YoY, Business Objects went public. Bernard and the board elected to debut on the NASDAQ, rather than a smaller European exchange. The drumbeat of being a public company then followed; product lines expanded, along with geographic presence. Bernard recalled that it was fairly smooth sailing for the first 18 months; growth continued at a steady pace. By the end of 1995, the market cap had grown 10X.

Around this time, Bernard and team felt a growing need to revamp the core product. Competitors from North America were gaining serious momentum, and Business Objects’s core analytics and reporting capabilities were becoming less differentiated. The company reorganized around a massive internal R&D project, which was intended to completely replace the existing product line. They were specifically betting on the launch of Windows 95, which promised to bring sophisticated GUI-based computing and a host of new features to desktops around the world.

BO_5.jpg

Despite repeated warning signs from the development team, Bernard pushed hard. The revamp ended up launching in very buggy fashion, and to a much smaller Windows 95 install base than Business Objects had anticipated. Bernard faced anger on two fronts: early adopters of the revamp were understandably frustrated with the quality of the initial release, and legacy customers were growing increasingly frustrated as well. Business Objects had largely neglected the existing install base, as they’d surged to release the revamp. And to make matters worse: as 1996 unfolded, a massive deal fell through in Germany, causing the company to restate their earnings.

The vultures began circling, floating offers to buy Business Objects for ~$100MM - a price tag that would’ve been insulting the prior year. Bernard, feeling more heat than ever, decided to resist the siren call, and instead own up and dig in. The company took a series of dramatic actions: the management team moved to the US to get closer to its primary software partners and investors; the development team axed most new feature work, and focused exclusively on shoring up the quality of the revamp. Not everyone was on board: sales representatives were frustrated, knowing the slog meant they wouldn’t meet quotas. Bernard recalls that things stayed painful for a while, but they were doing what was necessary.

By the end of 1997, Business Objects had managed to come up for air. They’d made significant strides in fixing the revamp (now firmly marketed as their core offering), which had repaired trust both internally and externally. A focused new feature set, centered on making Business Objects reports available on the web, launched first to market; the resulting surge in popularity boosted growth by 50% in 1998. By 2000, the company had grown to $300MM in annual sales, with a defensible product mix and financial standing that allowed them to weather the bursting of the internet bubble. As Business Objects continued to grow, bringing in $500MM in 2003, the market was undergoing another shift.

The business reporting market was evolving into “business intelligence”, and demand was exploding for interactive web-based analytics, which provided a more dynamic experience than traditional reports. With the Canadian firm Cognos nipping at their heels, Business Objects acquired Crystal Decisions - a pioneer in the realm of pixel-precise, interactive web dashboards. Paired with Business Objects’s core object layer and baseline reporting functionality, the acquisition provided immense leverage. By the end of 2004, Business Objects was in a clear leadership position; and by the end of 2007, they had ridden the veritable hockey stick to $1.5 billion in revenue.

Around this time, SAP came knocking - asking to have “strategic talks” (i.e., probing for an acquisition). Oracle had sent a lowball offer the year before, and Business Objects knew they were in a position of strength. SAP was thoroughly dominating the ERP space at the time, and saw clear opportunity for expansion through tightly integrated business intelligence offerings. They made an offer for $7 billion, a 40% premium on Business Objects’s market cap. Bernard recalls the acquisition was friendly, and made good sense to all parties.

bo_SAP.png

Today, Business Objects remains a pillar in SAP’s core offering. If you browse around YouTube, you can see some of the core concepts still very much alive in today’s product: the concept of data “universes”, which contain data foundations, which in turn house business layers full of intuitive objects; Crystal Reports are a click away. Obviously a lot else has changed; cloud-native offerings now dominate the business intelligence market, and new data-driven software services are emerging daily.

But the underlying motif isn’t so different from what Bernard saw almost 30 years ago: the need to abstract away technical complexity, and plug business users directly into an intuitive representation of the underlying data.


Thanks for reading; you can check out my Twitter here

NeXT’s Vision: Developing Applications on the 10th Floor

Old keynotes from NeXT Computer have a compelling, peculiar focus on developer enablement.  In the late 80s and early 90s, the company was targeting the higher education and enterprise markets - and banking on a newfangled wave of programming paradigms.  After stumbling across a few keynotes on YouTube, I found it interesting to delve a bit deeper into the pitch the company was making at the time.

By the late 80s, Apple and IBM had brought computing out of the realm of hobbyists and onto the desks of corporations around the world.  Workplaces were beginning to rely on VisiCalc, desktop publishing software, and other early cornerstones of enterprise software.  Myriad workflows that once involved manual record keeping, or pen-and-paper tabulation, were rapidly being ported into the digital realm.

next_jobs.jpg

As the revolution advanced, and graphical interfaces began to take center stage, the growing sentiment was that the skill floor for application developers was rising.  These new PC platforms had introduced significant complexity along with their new functionality, requiring development teams to contend with a larger amount of system-specific “plumbing” in order to produce compelling programs.  Moreover, implementing the same program across multiple operating systems had become more complex - given the increasing sophistication of the OS layer.  “The software developer paid the price for the Mac revolution”, Steve Jobs quipped during the 1988 NeXT debut.

NeXT was betting that it could break this fever.  Starting with a UNIX foundation, the company’s goal was to create the best experience for developers who were building applications for the high-end “workstation” portion of the PC market.  Target customers included research universities and very large enterprises that demanded cutting-edge graphics and computing power.  NeXT’s conviction was that, through the then-new paradigm of “object-oriented programming”, it could provide a best-in-class experience for software developers catering to these high-end customers.  Drawing heavy inspiration from Xerox PARC’s Smalltalk initiative, Bud Tribble’s team at NeXT leveraged a new object-oriented variant of the C programming language, “Objective-C”, within the NeXTSTEP operating system.

Tribble’s team produced a handful of comprehensive frameworks written in Objective-C (e.g., AppKit, Interface Builder), which were aimed at courting developers to the NeXT ecosystem.  The pitch was simple: you’ll spend far less time developing applications if you leverage these frameworks, and you’ll end up with applications that are much richer than what you could’ve developed yourself.  Scientists from top universities were brought on stage to discuss how they planned to use NeXTSTEP to build the next generation of genomics software; major corporations shared visions of the next generation of worker productivity.  “It’s like starting on the tenth floor as a software developer, versus starting by yourself on the first floor,” Jobs would say, on several occasions.

NeXTSTEP_desktop.png

Lotus and other software shops were brought on stage at NeXT events, heralding that the company’s object-oriented libraries made it “the best development environment in the world, full stop”.  For many years, NeXT kept this enablement narrative at the center of its marketing: gone would be the days of repeating large swaths of code pertaining to things like interface design, cross-application communication, and database connectivity.  As the early 90s unfolded, NeXTSTEP began offering object-oriented libraries for leveraging the newfangled internet, claiming that it shouldn’t be the developer’s responsibility to rebuild common primitives around networking, security, and file sharing.

While NeXT wouldn’t become the company that ultimately delivered object-oriented development to the masses, its foundations live on in iOS and macOS.  Entire posts could be written about the legacy of NeXT-pioneered technologies like Display PostScript, the object-oriented travails that interlink with the heyday of C++ and Java, and the broader evolution of client-server architectures.  But, taking a step back: I think that the siren call that NeXT put out for developers, strangely enough, is something that endures.  The world needs sophisticated domain-specific applications more than ever.

Today, much of the enterprise market is fixated on “cloud-first” operating models, and new methods of divining information out of vast pools of raw data.  There is fervent excitement around services that help integrate the massive silos of enterprise data, and services that promise to unlock the power of machine learning.  But the need for custom applications (akin to those that NeXT hoped to underpin) hasn’t gone away.  If anything, the explosion in the variety of viable data sources, along with the sheer amount of discrete capabilities available in the cloud, has meant that the need for robust software primitives is once again front and center.  A quick tour around the typical enterprise today will reveal that, despite the deluge of data, most people are still wedging most of their digital workflows through Excel.

data_cloud-100577480-primary.idge.jpg

Organizations today are being sold data lakes, API catalogs, pipelining tools, et al; the plumbing that is indeed necessary for pushing forward an increasingly heterogeneous mix of cloud services and vendors.  But isn’t that all just table stakes?  Among the sea of new cloud technologies, where is the “tenth floor” boost for developers who are building new tools for users throughout businesses and research labs?  What’s the technical (or usability) lever that will unlock the next generation of worker augmentation, that pushes us beyond the realm of semi-static dashboards and reports?  

It’s still early days - but I’d love to see a revivified version of NeXT’s vision, championed within the current landscape.

Thanks for reading; you can check out my Twitter here

Identity-Centric Heuristics for Traversing The Internet

Growing up on The Internet, most of my online expeditions were subject-centric: I would seek out particular forums that I found relevant - an EverQuest forum, a message board dedicated to the odd anime that I’d just watched - and explore the communities therein.  It was a relatively quiet era in cyberspace; just by virtue of being online and interested enough to seek out a niche group, you had a decent likelihood of finding both meaningful content and interaction within those narrow venues.

Today, there’s an effectively inexhaustible number of narrow venues, instantly available through any screen.  There are subreddits, Discord channels, and countless other places for discussing any topic, at any granularity, that you can imagine.  If you apply a bit of filtering, you can still find interesting content and uncover pleasant communities; in my experience, these alcoves are great for skimming content and passing time.  But they also tend to be noisy, and curation is tuned to the lowest common denominator - resulting in a lot of superficial or redundant information.

Over the past few years, I’ve spent less and less time in subject-centric spaces.  Sure, they’re still useful for passing time, and keeping a pulse on what’s topping the relevancy charts at any given moment.  But I now spend the majority of my online time on content from a handful of creators that I discovered through YouTube, SoundCloud, or podcasts.  And rather than burrowing within specific sites or channels, I’ve chosen to invest time in individual personalities that span multiple sites, topics, and even mediums.  It’s increasingly irrelevant “where” a particular conversation or piece of content is located.

I think the shift has been subtle, but profound: the democratization of content creation is allowing people to forge online identities that aren’t bound to any single platform or subject.  A personality can emerge on YouTube through a few interesting videos, and then extend into the realm of podcasting, establish a Twitter presence, and be in effectively constant engagement with interested viewers.  The mainstream media depicts identity-centric content as vapid, fleeting entertainment; Instagram models, slapstick YouTubers, and other self-infatuated expression.  But the well runs much deeper, if you know where to look. 

Nowadays, my consumption patterns resemble something Pareto-esque: at any given time, >80% of the content I’m consuming comes from a handful of creators.  While this shift wasn’t initially deliberate, I’ve found two heuristics that have worked reasonably well for navigating the identity-centric web.

The first heuristic is analysis-first: what media, or subject matter, has hijacked your mind recently?  Find a compelling video (or article) that examines it critically.  I think about SuperBunnyHop’s analysis of Metal Gear Solid 2, which methodically deconstructed the infamously enigmatic game.  I found the video to be equal parts captivating and irritating; the precision and depth of insight left me wondering how I’d failed to synthesize a fraction of what was presented.  Shortly thereafter, I delved into the rest of SuperBunnyHop’s analysis videos, followed him on Twitter, and gradually became aware of the wider network of personalities that he collaborates with.

In a very different sphere of media, Charlamagne Tha God struck me as an unafraid, incisive voice in a sea of pundits producing softball interviews and superficial analysis of pop culture.  Whenever someone like Kanye West made the headlines, Charlamagne was seemingly the only person willing to say the same things on social media that he did to the person he would interview.  Unsurprisingly, his popularity has expanded far beyond New York radio; I regularly listen to his podcast with Andrew Schulz, and generally tune into the conversations he has with other media figures and celebrities that intersect “mainstream” culture.

meaningwave.jpeg

The second navigation heuristic is synthesis-first: who’s creating new things that deeply resonate with you?  Akira The Don’s music has been the soundtrack for much of my past year; he’s combining catchy, upbeat production with messages from noteworthy thinkers that I was already interested in learning more about - like Alan Watts, Terence McKenna, and Jordan Peterson.  Meaningwave is a genre he’s pioneered, but the constituent elements - poignant dialogue, upbeat electronic music - were things I’d been interested in beforehand.  Through Akira’s work, I subsequently discovered a trove of interesting books, lectures, and other artists.

Other synthesis-first examples include Mitch Murder, the Synthwave artist who plunged me into the depths of the myriad subgenres of retro music and visual design; Toby Fox, who shared a remarkably charming, poignant story through his game Undertale; and the indie Mac software studio Panic, which is enmeshed in a network of other creative personalities.  In each case, a single product or piece of art grabbed my attention, and compelled me to explore the adjacent conceptual and social spaces.

These two heuristics are by no means exhaustive; they’re just cursory attempts to articulate how my media habits have been shifting over the past few years.  I’m spending less time in narrow venues that are subject-centric, and more time following individuals whose presence spans many platforms.  Traversing through networks of creators has been an enjoyable way to learn new things, and it’s a constant reminder that there is interesting material lurking in unexpected places.  As the pace of content production continues to accelerate, I’m curious to see how my day-to-day habits will continue to change.


Thanks for reading; you can check out my Twitter here

The Archetypal Resonance of Classic JRPGs

When I learned about Xenogears, I knew that I had to play it. It was an irresistible package, seemingly tuned to my idiosyncrasies: a Japanese RPG from the 1990s golden era, featuring a famously deep (and convoluted) plot, and with a fanbase that still discusses its underrated status with almost religious zeal.

220px-Xenogears_box.jpg

Despite my excitement, playing through Xenogears proved challenging. I’d forgotten the unique tedium of PlayStation 1 RPGs: a story that’s rife with uneven pacing - due in part to production constraints; lots of random enemy encounters; the cadre of frustrating mini games that you needed to beat in order to advance the plot. I quickly became aware of how short my patience with games had become, and found myself stunned when the timestamp on a save point indicated that only eight minutes had passed since the last time I’d checked my cumulative play time.

The compelling aspects of the game kept me going. The 2D sprites within a 3D world provided a memorable aesthetic that I think aged far better than the purely 3D RPGs from the same era. The music throughout the game is incredible, thanks to an effort by Yasunori Mitsuda that literally cost him his health. And the story legitimately hooked me, right off the bat - featuring a blend of bizarro gnosticism, psychology, and mecha that really only compares to Evangelion. (Xenogears and Evangelion were contemporary productions; I couldn’t find a definitive account, but it would be stunning if there wasn’t some sort of cross-pollination between the projects.)

The early pain ended up feeling like an investment, which steadily paid off as the game progressed. The combat system, initially overwhelming and unclear, developed a compelling dynamism as the characters advanced. Each difficult boss conquered and story point reached felt like an accomplishment; this was a rare, weird game - and working through it felt like a valuable, uncommon expedition.

The story ratchets up at the 11th hour, with a standout “Room of Understanding” that rivals anything I’ve seen in any medium. By trudging through the archaic game design and witnessing the late-game exposition, I felt like I’d earned access to hidden information. Upon finally finishing the game, I excitedly dove into the world of analysis videos and discussions online; I was a newfound convert, ready to partake in the zeal.

Having worked through the lore and oddities in the extended Xenogears universe, I now feel compelled to work through the other great RPGs of the era. In addition to the mainstays (e.g., Final Fantasy 7-9), I’m especially keen to play through the other titles that have been forgotten or underrated, like Chrono Cross and Terranigma.

descent.jpg

It’s struck me that this is all sort of strange; what exactly am I doing? Why do I feel compelled to undertake these virtual voyages? There’s the reflexive explanation: like with any other artistic medium, it’s worth experiencing the underrated classics. That’s especially sensible in the case of Xenogears - given the industry legends that contributed to the game. You could also point to nostalgia, the desire to feel iconoclastic, or some idiosyncratic motivation to reach backward to this specific realm of obscurity.

But maybe there’s some deeper resonance at play. Suffice to say, the average PS1 JRPG contains a very archetypal story. The plot revolves around the hero’s journey - featuring a protagonist that starts off in meager circumstances, who is then thrust into an epic quest. Along the way, the hero accumulates a ragtag set of friends; some of these friends are standard companions, and others have complex arcs that begin adversarially. Crew in tow, the hero then grows steadily in power and influence, eventually playing a pivotal role in an ultimate clash between good and evil.

1-cheating11.jpg

The gameplay is similarly orthodox: you start with a hero that’s weak and with few companions; the hero eventually grows, through increasing adversity, into the archetypal leader that is capable of defeating the ultimate evil. The progression of difficulty in classic JRPGs is sometimes jarring and uneven - but there’s generally a strong correlation between the player’s growing capacity and the level of challenge placed before them.

I think there’s a very specific rhythm to actively playing through an archetypal journey, compared to watching or reading about one. You feel the frustration of the hero’s initial insufficiency; the relief of having a capable companion join you; the accomplishment of finding the way through a seemingly unbeatable battle. It’s a conscious psychological traversal that’s only possible through an interactive medium, and it seems especially distilled in the classic JRPGs.

room.png.jpeg

We’re living through a period of immense change, chaos, evolution - however you prefer to label it. There’s a growing feeling that most every conventional axiom is up for redefinition: consensus morality, the nature of personal identity, our rights and our responsibilities to one another. In a sea of societal flux, familiar mythopoetic stories can feel like a life raft; a girdling force, capable of vividly illustrating the physical and psychological patterns that have endured across millennia, and that will likely continue into the future.

Can games like Xenogears function as narrative psychotherapy? I’ll stop short of making that claim - but I also won’t dismiss the possibility. My humble plan is to continue playing through these classic RPGs, without any sort of clinical precision. I’m not sure what exactly I’m looking to surface, or at what point I’ll reach sufficiency; I don’t think it’s realistic to play every JRPG under the sun.

But for now, this feels like an exercise worth continuing. Hopefully the virtual traversals will give way to some clearer understanding, over time.

Thanks for reading; you can check out my Twitter here

Social Outrage in the Fourth Dimension

Most of today’s social media scandals emerge in one of a few ways.  Either something recently posted is scandalous, and triggers an uproar. Or, there’s something hidden in the archives of someone’s social media channel, which is resurfaced to today’s more unforgiving eyes.  (e.g., Kevin Hart and the Oscars controversy.)

In other cases, people get in trouble for engaging online with someone (or something) incendiary.  A politically incorrect tweet was retweeted; a salacious Instagram post was liked; an upstanding person is following someone with extremist views.  These sorts of scandals have a pretty short half-life; it’s easy to chalk them up to user error (“I didn’t mean to do that!”), or redirect the blame (“It was my millennial staffer!”).

In an effort to stay out of the flames of the culture war, many people are proactively scrubbing their accounts.  Unfollowing people that make for questionable associates; unliking tweets that might be hard to explain later; sometimes shutting down their social media accounts altogether.  With enough foresight, this approach can work reasonably well.  There’s technically still a record out there, on some server somewhere, of what you did; but, in all likelihood, the surface area for an unwelcome digital scandal has been significantly reduced.

It’s hard to imagine that things will stay this simple.

Think of a popular paradigm that exists today: Apple’s Time Machine application on the Mac, which gives you the ability to “go back in time” to previous versions of a given file.  This is possible through local indexing and copying, which happens at set intervals (or in response to specific triggers).  Now think about an analogous service that’s capturing the transactional state of every public social media account, from inception onwards.  Kind of like the Wayback Machine, but on steroids.
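
As a rough sketch of the mechanic (the account fields and helper names below are hypothetical, not any real platform’s API), the whole paradigm boils down to periodic snapshots plus diffs:

```python
from datetime import datetime, timezone

# A toy version of the paradigm: take periodic snapshots of an account's
# public state, then diff any two points in time to surface what was scrubbed.

history = []  # list of (timestamp, snapshot) pairs

def take_snapshot(account_state):
    """Record a copy of the account's public state at this moment."""
    snapshot = {
        "following": set(account_state["following"]),
        "liked_posts": set(account_state["liked_posts"]),
    }
    history.append((datetime.now(timezone.utc), snapshot))

def diff(earlier, later):
    """What was removed between two snapshots - i.e., the scrubbing."""
    return {
        "unfollowed": earlier["following"] - later["following"],
        "unliked": earlier["liked_posts"] - later["liked_posts"],
    }

take_snapshot({"following": {"@alice", "@bob"}, "liked_posts": {"post_123"}})
# ...the user quietly cleans up their account...
take_snapshot({"following": {"@alice"}, "liked_posts": set()})
print(diff(history[0][1], history[1][1]))
# -> {'unfollowed': {'@bob'}, 'unliked': {'post_123'}}
```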

This is understandably unnerving - but feels inevitable.  People will need to assume that there will be a record of every public message, regardless of subsequent deletion; or of every person they’ve followed, even if they’ve subsequently unfollowed them.  Certain folks, like Jack Dorsey, believe that the eventual pervasiveness of blockchain technology will make online interactions truly permanent. 

I don’t think that’s necessary.  You simply need an aggressive extension of paradigms we’ve already seen work in more constrained systems.  If and when this sort of deep-scraping begins to escalate, I’d imagine that platforms like Twitter and Facebook will introduce new limitations on their APIs, and throttle the ability for snooping agents to build this sort of temporal knowledge base.

At that point, though, the retrospective ability isn’t gone; it’s just been constrained to the hands of the platforms themselves; the same as it is today.  Will it be an acceptable compromise to trade the ability for third-parties to deep-scrape public content for even tighter “stewardship” by the platforms?  Unclear; though it’s hard to imagine that this sort of API lockdown would hold water in the EU’s regulatory bodies, over any reasonable period of time.

At which point there’s maybe a regulatory compromise: users can get access to the “deep” history of their social interactions, but nobody else does.  At that point, a user’s account becomes an even juicier target for pernicious actors.  You don’t just get access to someone’s direct messages, but also to every prior version of their follower/connection graph, and every piece of content they might’ve withdrawn association with.

Of course, this sort of escalation is contingent on people continuing to value what other people are posting, and who they’re associating with.  This seems likely, given the arc of human history and whatnot.  There’s a vicious tribalism that relishes in social crucifixion.  But there’s also an emerging, redemptive counterweight: in some cases, we’re accepting that people can grow beyond their online mistakes.

There’s a social realm that exists between hollow apathy and searing inquisition; it’s probably not a fixed position.  How can we incentivize people to stay in that realm, and to apply minimum necessary force when addressing social infractions?  

A question for the times.

Branching Beyond Twitter

Twitter is easy to pick on; it feels increasingly incoherent. For the average user, the experience now amounts to watching tweets endlessly (and algorithmically) flow down a timeline, hoping for a fleeting gem: a fresh meme, a particularly inspired presidential tweet, or an endorsement for something worthwhile.  You can try to prune and mute your way to sanity, but most curation features (e.g., lists) feel barely supported.

Paradoxically, Twitter also feels more vital than ever.  It remains the world’s digital public square - relatively uncensored, and gushing with content (and spam) at increasing velocity.  Interestingly, the worthwhile unit of content remains the individual; you can follow news organizations if you feel like drinking from a firehose, but the interesting activity happens between active users.

It’s unfortunate, then, that discourse on Twitter feels like it’s regressed since the early days of the platform.  I think the kernel of the problem is the tendency to get stuck where you start, with little recourse.  You join the platform, and begin by following other people.  This is in itself rewarding, since you can follow people at a granularity that isn’t possible on other popular networks.  (e.g., check out Nassim Taleb’s disdain for Sam Harris!)  But to get someone’s attention, you need 1.) some form of preexisting notoriety, or 2.) a particularly inspired tweet.

Content aggregators, like Reddit, allow new posts to gain popularity through a different paradigm: topic-segregated channels.  You might have joined yesterday, but your post in the Gaming subreddit can get you a ton of Reddit karma, if you find just the right content.  The tradeoff with this paradigm is that content is truly king.  Submissions are effectively anonymous, and despite the occasional heartwarming exchange in the comments, the social interactions are overwhelmingly transient.  You leave the thread, usually never to return to it or its denizens.

Theoretically, Twitter’s hashtags provide a topic-like anchor.  In reality, I don’t know anybody that uses hashtags outside of live sports and other large-scale, transient events.  It’s a navigation lifeboat, used as a last resort. 

So what could a better modality look like?  If you think about why discourse is difficult on Twitter, a lot of it boils down to the UX.  If you’re decently famous: you tweet something, and there’s a gush of replies, smashed together like an accordion.  If you’re not famous, and talking “laterally” to someone else, or a small group, then the replies string together endlessly.  Someone else can try to jump into the conversation, or fork the thread, but typically with non-obvious consequences.  Even fruitful threads die quickly, and are difficult to revisit or revive.  (Which tweet did that conversation revolve around?  I can’t seem to find it..)

Branch was a social media service that tried a different approach.  Billed as the platform for “online dinner parties” (bear with me), it organized conversations around organic topics, and allowed - as the name suggests - users to branch the conversation, at any point.  A key feature was the separation of reading and writing.  Users could selectively include participants in a small-group discussion, which could then be observed by anyone else using the platform.

The result was, surprisingly often, interesting dialogue that could become progressively more inclusive - without devolving into chaos.  A lot of threads were simply interesting to read.  It was great if you were invited to participate - but even if you weren’t, you could simply branch a comment into your own thread.  The same rules carried over; you could selectively add people to your forked conversation, and continue on.  And who knew, maybe your forked dinner party would become the next hot thing.

1670904-inline-branch-with-text-box2.png

Branch’s user interface revolved around discovering interesting dialogue.  The fundamental unit wasn’t the individual post (with dialogue as addendum), but rather the conversation thread itself.  The application highlighted which threads were gaining popularity, and allowed you to traverse conversations that included the particular people that you found insightful.  It didn’t have a very elaborate UX, and it seemingly didn’t need one.

One consequence of the design was that it felt natural to (periodically) revive dormant threads.  Each conversation had a limited set of participants, and a coherent topic - which together provided a stable context that could be revisited.  Sometimes it made sense to simply tack on a new comment to an old thread; other times, branching was the answer.  And again, any reader had the same power; if you had a flash of insight, or came across something amusing, you could take someone else’s old conversation in a new direction.
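
If you wanted to model the mechanics in code, it might look something like this minimal sketch - my own guess at the structure, not Branch’s actual design: threads with a limited set of writers, readable by anyone, and branchable from any post.

```python
# A toy model of Branch-style threads: a coherent topic, a limited set of
# writers, visible to all readers, and forkable from any post.

class Thread:
    def __init__(self, topic, participants, parent_post=None):
        self.topic = topic
        self.participants = set(participants)  # only these users may write
        self.parent_post = parent_post         # the post this thread forked from
        self.posts = []                        # visible to any reader

    def post(self, author, text):
        if author not in self.participants:
            raise PermissionError(f"{author} hasn't been invited to this thread")
        entry = {"author": author, "text": text}
        self.posts.append(entry)
        return entry

    def branch(self, from_post, new_topic, new_participants):
        """Any reader can fork an existing post into a thread of their own."""
        return Thread(new_topic, new_participants, parent_post=from_post)

# An "online dinner party", observed by everyone, then forked by a reader.
dinner = Thread("The future of RSS", ["host", "guest"])
seed = dinner.post("host", "Is RSS dead, or just resting?")
fork = dinner.branch(seed, "RSS for podcasts, specifically", ["reader", "a_friend"])
fork.post("reader", "Picking this up in a new direction...")
```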

At its best, this sort of seed-and-branch cycle felt resonant with the original ethos of the internet; a distributed and organic approach to building knowledge, that could endure.

Alas, Branch is no longer with us; the team was acquired by Facebook, and the service was shut down in 2014.  It’s peculiar that nothing similar has appeared since.  Nothing that is conversation-centric in the same way, or that allows for conversational branching.  Interaction paradigms across the social media landscape feel increasingly static; Facebook and Twitter remain largely as they were a decade ago, and the various chat apps have added bots and gifs, I suppose.  I hope we see more experiments like Branch, either as standalone services or within increasingly-vital platforms like Twitter.

We’re in dire need of better conversation - perhaps that’s the one consensus that still holds. 

Teradata's Lawsuit Against SAP

Teradata is a large enterprise software company; SAP is a larger enterprise software company.  At one point, the two were partners, working extensively on ways to harmonize their respective products.  Then things fell apart - dramatically; and in June, Teradata sued SAP.

What went so horribly sideways?  In a nutshell, Teradata alleges that SAP used the partnership to learn about Teradata’s core data warehouse offering, so it could then engineer its own competing solution.  Moreover, Teradata claims that SAP is increasingly focused on making its own competing data warehouse the only viable choice for interacting with its other, more established products.  To understand the fear and anger that now seem to be driving Teradata, it’s worth unpacking a bit of context.

sap_logo_2.jpg

SAP has historically dominated a category of the software market known as “enterprise resource planning”, or ERP.  Despite the painfully generic name, ERP systems provide a vital function: they manage the raw information pertaining to core business functions - including inventory, supply chain, human resources, and finance.  Whether you’re trying to understand your budget for the quarter, or calculating whether you have the appropriate inventory for fulfilling a client order, you’re probably interacting with some sort of ERP system.  (And there’s a high chance it’s provided by SAP.)  A standard SAP ERP installation is operating across multiple business lines within a single organization, processing millions of data transactions per day, and acting as the source of truth for information related to suppliers, finances, customers, employees, and more.

While SAP’s ERP systems provide critical capability, they don’t address every data-driven need.  Namely, ERP systems have not historically been optimized to serve the needs of analysts who need to aggregate findings from the raw data.  Asking complex questions of large volumes of ERP data turns out to be a technically challenging problem; users want the ability to ask lots of questions simultaneously, receive responses quickly, and work with both the analytical questions and answers in their preferred software tools.  Having a system that’s tuned to fulfilling these “business intelligence” requirements turned out to be so valuable that it gave rise to a different class of companies.

Enter Teradata, founded in 1979: the company’s flagship “enterprise data warehouse” is intended to fill exactly the type of analytical gap that is left unaddressed by ERP systems.  The typical data warehouse is essentially a specialized database that sits atop a customer’s transactional systems (e.g., SAP’s ERP systems) - pulling in subsets of data, and storing them in a manner that is optimized for efficient retrieval.  Analysts can use familiar applications (e.g., Excel, Tableau) to quickly pull slices of data from the data warehouse, in order to answer questions and produce critical business reports.  Teradata claims to have pioneered a “massively parallel processing” (MPP) architectural design that allows its data warehouse to scale linearly across thousands of end users, without diminishing performance for individual queries.

teradata-big-systems.png

The complementary relationship between data warehouses and the underlying ERP systems led Teradata and SAP to partner, and announce a “Bridge Project” in 2008.  The crux of the project was dubbed “Teradata Foundation”, a jointly engineered solution that promised seamless data warehousing functionality, backed by Teradata, for customers using SAP’s ERP systems.  Throughout the development process, Teradata engineers were embedded with SAP counterparts - and, according to Teradata, conducted in-depth reviews of the technical features (e.g., MPP architecture) that underpinned Teradata’s fast performance.  SAP engineers were also provided full access to Teradata’s products - though not to any underlying source code, it would seem.  Teradata Foundation was successfully piloted at one major customer facility, and Teradata claims that the prospective business opportunity was in the hundreds of millions of dollars annually.

As SAP worked with Teradata on the Bridge Project, it began developing its own database solution - SAP HANA.  In the summer of 2009, SAP announced its intention to revitalize its core offerings by providing a next-generation, in-memory database.  At the time, the investment in HANA was primarily viewed as an attempt to sever SAP’s reliance on Oracle.  Oracle had long supplied the underlying database that backed SAP’s ERP offerings, and seemed to relish eating into SAP’s core business - while simultaneously tightening its frenemy’s dependency on the Oracle database.  However, as HANA began to mature, it became clear that SAP’s aspiration wasn’t simply to escape from Oracle’s grip.  In May 2011, SAP announced that HANA’s architecture would enable it to serve analytics workflows in a first-class manner - eliminating the need for a separate data warehouse like Teradata’s.

Two months after flexing HANA’s features, SAP unilaterally terminated its partnership with Teradata.  In the following days, SAP unveiled a new version of its Business Warehouse product - powered by HANA, that was supposedly capable of servicing the complex analytical workflows that Teradata’s product had historically targeted.  Teradata was understandably alarmed; a valuable, reliable slice of their revenue was about to vanish - if their former partner had its way.  As HANA’s development ramped up over the next several years, the relationship between SAP and Teradata grew increasingly strained.

In 2015, German publication Der Spiegel dropped a bombshell report; it alleged that SAP’s internal auditors found SAP engineers misusing intellectual property from other companies, including Teradata.  Moreover, the audit specifically claimed that HANA’s development had improperly drawn on external IP.  Shortly after presenting their findings to SAP leadership, the auditors were fired.  It’s worth noting that one auditor tried to personally sue SAP, claiming that the company tried to suppress his findings - and asking for $25M in relief.  SAP denied any wrongdoing, countersued the auditor, and had the personal lawsuit dismissed.  (Another victory for the not-so-little guy.)

Der Spiegel’s report gave Teradata its call to arms.  The company assembled a lawsuit that asks for two-fold relief: an injunction on the sale of SAP’s HANA database, and a broader antitrust investigation into SAP’s move into the data warehouse market.  The first claim is pretty straightforward, if a little wobbly: the report from Der Spiegel indicates that SAP misappropriated Teradata’s IP.  Moreover, Teradata attests that it has evidence that SAP reverse-engineered its data warehouse, while the two were working as partners.  The wrinkle is that neither component of the claim provides a smoking gun; the Der Spiegel report doesn’t specify what was misappropriated, and Teradata hasn’t yet supplied evidence that the reverse-engineering occurred.

The second claim feels existentially motivated.  Teradata is understandably concerned that SAP will continue to make it more difficult for other data warehouses to work with its ERP systems, as it continues to develop and promote HANA.  A significant portion of the lawsuit is dedicated to describing SAP’s strength in the ERP market, and arguing that they are now using their power to anti-competitive ends.  60% of SAP’s customers plan to adopt HANA in the coming years, largely due to how it’s being bundled with ERP system upgrades.  Paired with SAP’s indication that it will only support HANA-powered ERP installations by 2025, Teradata sees its market opportunity dwindling. 

SAP has punched back, since the lawsuit was filed in June.  On the first claim, regarding misappropriated IP, they point to the disgruntled nature of the auditor who leaked the information - and who was denied $25M.  SAP attests that its own internal investigations surfaced no wrongdoing, and Teradata isn’t bringing forth specific evidence that suggests otherwise.  On the second claim, SAP has sidestepped any suggestions that it is locking down its ERP systems to work only with its HANA data warehouse.  Instead, they are painting a picture of natural competition, contending that Teradata is resentful that it hasn’t been able to compete in the evolving data warehouse market - and is looking to guarantee its historical marketshare.

Given the multi-year arc that’s typical of large technology lawsuits, we probably won’t see a verdict soon.  Teradata is shooting the moon, asking for both an injunction on the HANA product, and controls that will keep SAP from closing off interoperability with its traditional ERP systems.  Even partial success here could make a large difference to Teradata’s future.  SAP seems adamant to secure HANA’s future, and as past tussles with Oracle show, is clearly willing to engage in prolonged legal warfare.  Either way the case turns out, the verdict could prove meaningful for similar disputes that will inevitably appear down the line.

If you’re interested in reading through the lawsuit, you can check it out here

VisiCalc's Enduring Vision

Steve Jobs once quipped, "if VisiCalc hadn't debuted on the Apple II, you'd probably be talking to someone else".  Released in 1979, VisiCalc was the original electronic spreadsheet - and it's widely recognized as the killer application that landed the personal computer on desks across Corporate America.

032c8bb0.jpg

These days, spreadsheet software seems about as interesting as plywood; it's the default way of "doing work" on a computer - and stereotypically mundane work, at that.  I personally can't remember ever using a PC that didn't have Excel or some other spreadsheet software installed on it.  I do remember learning Excel's basic functions on a mid-90s Mac, and saving my work to a floppy disk (which was ritualistically dismantled at the end of the school year).

VisiCalc was the original Excel, and its founding vision remains clear and compelling.  Dan Bricklin worked as an engineer at DEC in the early 70s, before heading to business school at Harvard.  Along the way, he grew frustrated with how rigid and cumbersome it was to perform calculations on a computer - especially if you needed to execute a lengthy series of steps.  At the time, programs would allow you to step through your work, one operation at a time; if you had to redo an earlier operation, tough luck: you needed to redo all of the earlier steps.  Even for complex engineering and financial workflows, it was the equivalent of working with a jumbo scientific calculator.

Bricklin imagined a virtual whiteboard, which would provide the user with tremendous power and flexibility.  Instead of being at the mercy of a simple sequential interface, you would have "a very sophisticated calculator, combined with a spatial navigation system akin to what you'd find in the cockpit of a fighter jet".  With operations split into individual cells within the virtual space, redoing work would simply involve gliding over to the appropriate cell, and making your change.  Critically, any changes would cascade across the entire file, automatically updating cells that depended on the modified value.
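
A toy version of that cascade is easy to sketch - this is purely my illustration, not Bricklin's implementation: each cell holds either a value or a formula over other cells, and a change to one value is reflected in every dependent formula the next time it's read.

```python
# A toy spreadsheet illustrating the idea: cells hold raw values or formulas
# over other cells, and changing one value flows through to every dependent.

cells = {
    "A1": 100,                                # unit price
    "A2": 12,                                 # units sold
    "A3": lambda get: get("A1") * get("A2"),  # revenue
    "A4": lambda get: get("A3") * 0.9,        # revenue after a 10% discount
}

def get(name):
    """Evaluate a cell, recursively resolving any formulas it depends on."""
    cell = cells[name]
    return cell(get) if callable(cell) else cell

print(get("A4"))  # 1080.0

# Change a single input; every downstream formula picks it up automatically.
cells["A2"] = 20
print(get("A4"))  # 1800.0
```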

Bricklin decided to pursue the idea, using the business school's timesharing system to implement the original code.  After bootstrapping a first version himself, Bricklin recruited his college friend, Bob Frankston.  Frankston built out a production version of the "Visible Calculator", targeting the MOS 6502 microprocessor in the Apple II.  When the software debuted at the National Computer Conference in 1979, it quickly garnered attention from the well-established PC hobbyist community - and even more attention from the enterprise market.

At the time, PCs weren't a common sight within large corporations.  A handful of domain-specific applications existed for certain industries, but by and large, any computing work was done using large mainframes.  Ben Rosen, a prominent analyst at the time, saw VisiCalc's launch as a seminal moment; it was the first piece of personal computing software that could be utilized for broad categories of business problems, required no technical understanding outside of the program itself, and was priced affordably ($100).  Rosen's convictions were quickly validated by the market; by 1981, VisiCalc was arguably the primary reason that Corporate America was purchasing personal computers en masse.

02084c00.gif

VisiCalc transformed business workflows that had historically relied on mainframes or pen-and-paper.  The most obvious advantage was speed; performing calculations by hand - for accounting, inventory planning, or myriad other business functions - was often tedious and error-prone.  VisiCalc introduced the formula system that's still a cornerstone of spreadsheet software today, allowing calculations to be specified intuitively.  Paired with automatic recomputation, the difference was night and day.  Changing one variable in a complex forecast no longer required hours (or days) of manual rework; you could simply update a single cell in the spreadsheet and watch the chain of formulas refresh automatically.
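
To make the recalculation model concrete, here's a minimal sketch of the idea in modern Python - purely illustrative, and not how VisiCalc itself was built.  Cells hold either raw values or formulas; a formula cell is recomputed from its inputs whenever it's read, so changing one input ripples through everything that depends on it.  (VisiCalc recalculated eagerly across the sheet, whereas this toy version recomputes lazily on read - but from the user's perspective the effect is the same.)  The Sheet class and cell names below are hypothetical.

    # Toy spreadsheet: cell names map to raw values or formula functions.
    # Purely illustrative - not how VisiCalc was actually implemented.
    class Sheet:
        def __init__(self):
            self.cells = {}  # e.g. "A1" -> 120, or "A3" -> lambda s: ...

        def set(self, name, value_or_formula):
            self.cells[name] = value_or_formula

        def get(self, name):
            cell = self.cells[name]
            # A formula is any callable; evaluate it on demand so that
            # changes to its inputs are always reflected.
            return cell(self) if callable(cell) else cell

    sheet = Sheet()
    sheet.set("A1", 120)                                  # units sold
    sheet.set("A2", 9.5)                                  # unit price
    sheet.set("A3", lambda s: s.get("A1") * s.get("A2"))  # revenue formula

    print(sheet.get("A3"))  # 1140.0
    sheet.set("A1", 200)    # change a single input...
    print(sheet.get("A3"))  # 1900.0 - the dependent cell refreshes automatically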

Beyond its commercial success, Bricklin's team deserves credit for introducing several high-minded computing concepts to the everyday user.  VisiCalc's formula system popularized an approach that would become known as "programming-by-example": a new user could learn the software's core commands simply by tracing through the calculations in an existing spreadsheet.  This transparency opened the door to meaningful collaboration; instead of trading papers, analysts could work off of the same VisiCalc file, or synthesize results from different files.  For many, it was the first time they had done any sort of computer-based work collaboratively.

As the PC market grew in the early 80s, credible competitors began to come after VisiCalc's throne.  Bricklin and his team defended their dominant position for several years, before eventually losing ground to Lotus 1-2-3 (created by Mitch Kapor, who had previously worked for VisiCalc's publisher).  Lotus would enjoy success through much of the 80s, until Microsoft's Excel ascended to the top - a position that was massively fortified in the early 90s by the rise of Microsoft Windows.

Forty years later, VisiCalc's legacy lives on in every spreadsheet created with Excel or Google Sheets.  Every tier of the modern corporation runs on electronic spreadsheets - perhaps to an unsettling degree.  It's hard not to grimace at the large, macro-riddled Excel files that (barely) get sent via email - and wonder if we're overdue for another leap forward.  While there are now countless applications for aggregating and analyzing data, none have managed to erode the ubiquity of the spreadsheet.  Those pursuing the next killer application would do well to learn from VisiCalc's clarity of vision, if they hope to build something that endures.

----

Images courtesy of bricklin.com

Recurrent Intelligence

Artificial Intelligence is in vogue these days.  Whether it's Elon Musk tweeting ominously about the looming dangers of AI, or the daily gaggle of CNBC analysts discussing how AI will upend capitalism as we know it - you can hardly turn on a screen without seeing some reference to the impending machine-god future.  It turns out that we've been here before, in some sense.

In What The Dormouse Said, John Markoff traces the early history of personal computing, and its curious intertwining with 60s counterculture.  Among the book's central characters is Douglas Engelbart, a computer scientist obsessed with what would come to be known as "Intelligence Augmentation".  The transistor was barely on the scene when Engelbart began writing about a humanity augmented by computing.  He believed that both the complexity and urgency of the problems facing the average worker were increasing at an exponential rate - and it was therefore critical to develop fundamentally new tools for the worker to wield.

At the time, this conviction wasn't widely shared by Engelbart's peers.  Developing computing tools that could amplify human abilities was seen as interesting engineering fodder; as a serious research focus, it was considered shortsighted.  Buoyed by the incredible advances made during the prior decade, artificial intelligence was the preeminent focus in computing research.  Computers had demonstrated the ability to solve algebraic word problems and prove geometric theorems, among myriad other tasks; wasn't it simply a matter of time before they could emulate complex aspects of human cognition?  

Engelbart had drawn significant inspiration from the writings of Vannevar Bush, the head of the U.S. Office of Scientific Research and Development during World War II.  Beyond his operational leadership during the war, Bush became famous for his instrumental role in establishing the National Science Foundation, and his musings on the future of science.  Having overseen the Manhattan Project and other wartime efforts, he grew increasingly wary of a future where science was pursued primarily for destructive purposes, rather than discovery.  Bush believed that avoiding such a future was contingent on humanity having a strong collective memory, and seamless access to the knowledge accumulated by prior generations.

In his most famous piece, As We May Think, Bush conceived of the "Memex", a personal device that could hold vast quantities of auditory and visual information.  He saw the pervasive usage of Memex-like devices as a necessary component of a functional collective memory.  Engelbart, like Bush, believed that the salient idea wasn't the ability to simply store and retrieve raw information; it was the ability to leverage relational and contextual data, which captured the hypotheses and logical pathways explored by others.  Engelbart extended this vision, imagining computing tools that would allow for both asynchronous and real-time communication with colleagues, atop the shared pool of information.

The computing community's focus on artificial intelligence throughout the late 50s and early 60s meant that Engelbart, with his fixation on intelligence augmentation, struggled to realize his vision.  Most of the relevant research dollars were flowing to rapidly growing AI labs across the country, within institutions like MIT and Stanford.  Engelbart worked for many years at the Stanford Research Institute (the non-AI lab), spending his days working on magnetic devices and electronic miniaturization, and his nights distilling his dreams into proposals.  In 1963, his persistence paid off; ARPA (later DARPA) decided to fund Engelbart's elaborate vision, leading to the creation of the Augmentation Research Center (ARC).

The following years saw an explosion in creativity from the researchers at ARC, who produced early versions of the bitmapped screen, the mouse, hypertext, and more.  All of these prototypes were integrated pieces of the oN-Line System (NLS), a landmark attempt at a cohesive vision of intelligence augmentation.  In 1968, Engelbart's team showcased NLS in a session that's now known as the "Mother of All Demos".  The presentation is charmingly understated; Engelbart quietly drives through demonstrations of the mouse, collaborative document editing, video conferencing, and other capabilities that would become ubiquitous in the digital age.

From there, the future we know unfolded.  Xerox PARC built upon Engelbart's concepts, producing the Alto workstation - a PC prototype that sported a robust graphical user interface.  Steve Jobs would cite the Alto as one of Apple's seminal influences, prior to the creation of the Macintosh.  The Macintosh would become the first commercially successful PC with a graphical user interface, motivating Microsoft (and others) to follow suit.  As the industry took shape, channeling the ethos of augmentation, Engelbart would see his convictions vindicated.  Alas, he would do so from the sidelines, growing increasingly obscure within academia while others generated unprecedented wealth and influence.

Despite significant progress in fields like machine vision and natural language processing, the enthusiasm around AI would wane by the mid-70s.  The post-war promise of machine intelligence had been nothing short of revolutionary, and the reality had failed to deliver on the hype.  The American and British governments curtailed large swaths of funding, publicly chiding what they felt had been misguided investments.  In the estimation of one AI researcher, Hans Moravec, the "increasing web of exaggerations" had reached its logical conclusion.  The field would enter its first "AI winter", just as the personal computing industry was igniting.

It's difficult to analogize the rise of personal computing; there is hardly an inch of our social, economic, or political fabric that hasn't been affected (if not upended) by the democratization of computational power.  While we don't necessarily view our smartphones, productivity suites, or social apps as encapsulations of augmentation - they are replete with the concepts put forth by Engelbart, Bush, and other pioneers.  Even so, some argue that the original vision of intelligence augmentation remains unfulfilled; we have seamless access to vast quantities of information, but has our ability to solve exigent problems improved commensurately?

Since the original winter, AI has continued to develop in cycles.  Suffice it to say, we're in the midst of a boom; compounding advancements in commodity hardware, software for processing massive volumes of data, and algorithmic approaches have produced what's now estimated to be an $8B market for AI applications.  Media and marketing mania aside, there is a basis for today's hype: organizations that sit atop immense troves of data, such as Facebook and Google, are utilizing methods like deep learning to identify faces in photos, quickly transcribe speech to text, and perform increasingly complex tasks with unprecedented precision.

However, even the most sophisticated of these applications is an example of narrow AI; while impressive, it is categorically different from general AI - the sort of machine cognition that was heralded during Engelbart's time, which has yet to appear outside of science fiction.  Many of today's prominent AI researchers still consider general AI to be the ultimate prize.  DeepMind, a research group acquired by Google, has stated that it will gladly work on narrow systems if they bring the group closer to its founding goal: building general intelligence.

Will AI become the dominant paradigm of the next 30 years, in the way that augmentation has been for the past 30?  Perhaps the question itself is needlessly dichotomous.  Computing has grown to occupy a central role in today's world; surely we possess the means to pursue fundamental breakthroughs in both augmentation and AI.  It's telling that Elon Musk, concerned about the unfettered development of AI, has also created Neuralink, a company aiming to push augmentation into the realm of brain-computer interfaces. 

The frontiers of both paradigms are expanding rapidly, with ever deepening investment from companies, governments, universities, and a prolific open source community.  As exciting as each is individually, it stretches the imagination to think about how the trajectories of Artificial Intelligence and Intelligence Augmentation might intertwine in the years ahead.

Buckle up.