Human-computer interfaces, where we're at in 2020, and why it matters

#archive

This post is out of date

This post has been migrated from my old blog. It may have broken links, or missing content.

He could correct the computer’s data, instruct the machine via flow diagrams, and in general interact with it very much as he would with another engineer, except that the “other engineer” would be a precise draftsman, a lightning calculator, a mnemonic wizard, and many other valuable partners all in one.

J.C.R. Licklider, Man-Computer Symbiosis, 1960

🎓 What is it?

**Human-computer interfaces** are the tools that enable us to do creative work and interact with our computers, like:

  • The mouse and keyboard.
  • Touch screens.
  • The Wii remote.
  • Your voice.
  • Virtual reality.
  • Your brain?

⌛ The past

Human-computer interface research is closely intertwined with the evolution of computers themselves.

“Computing’s Johnny Appleseed”, **J.C.R. Licklider**, wrote in 1960 that the thing holding technologists back was interactivity:

The department of data processing that seems least advanced, in so far as the requirements of man-computer symbiosis are concerned, is the one that deals with input and output equipment or, as it is seen from the human operator’s point of view, displays and controls.

At that time, probably the most advanced method of interactivity was the light gun (think Duck Hunt), pictured below in use at SAGE—the Semi-Automatic Ground Environment computer.

Side note: SAGE was the brainchild of **Jay Forrester**, who would later found the field of **system dynamics**, which explores how to use simulations to model and understand the behavior of complex systems. More on him in a future newsletter 😊

**Douglas Engelbart** presented “The Mother of All Demos” in December 1968. Computer graphics, word processing (including real-time collaboration), and the first computer mouse were all on display during the demo, and in a way that was genuinely transformative. As Bret Victor puts it in his tribute to Douglas Engelbart, things like multiple cursors and screen sharing were radically different in inspiration from what we have today:

Engelbart’s vision, from the beginning, was collaborative. His vision was people working together in a **shared intellectual space**. His entire system was designed around that intent.
If you attempt to make sense of Engelbart’s design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart’s intent. Engelbart **hated** our present-day systems.

https://www.mercurynews.com/wp-content/uploads/2018/12/MOAD-Doug-GUI-1280x540.jpg?w=1280

Engelbart’s Augmentation Research Center, or **ARC**, was a laboratory exploring the potential for information processing through computing, funded by J.C.R. Licklider through ARPA, the Advanced Research Projects Agency, in the early 1960s.

**Alan Kay** (also the creator of what we call object-oriented programming today) introduced the concept of “desktop computing” at **Xerox PARC** in 1970. Users interacted with the computer through windows, icons, menus, and a pointer. This was one of the first public examples of a GUI, or graphical user interface.

After the popularization of the mouse in the early 1980s via the **Apple Lisa**, the mouse (and the desktop) became the predominant mode of interaction for computers throughout the rest of the 20th century.

📌 Right now

It’s important to contextualize **now** in the timeline of human-computer interfaces.

We’re currently living through what is probably the second great transformative moment in the history of computer interactivity: **touch screens**.

**Tap.** **Tap and hold.** **Peek.** **Pop.** **Pinch to zoom.** These patterns didn’t exist in popular consciousness before 2007 and the release of the first iPhone. In the years since, touch has become, for many people, the primary metaphor for how we interact with computers (in particular, **portable** ones).

📈 What’s next?

There are many new, interesting ways to interact with computers that have popped up in the last decade.

**Virtual reality** has become commonplace thanks to **Oculus**.

**Brain sensors**, like the ones produced by **NeuroSky**, are exploring how we can interact with programs without using any sort of gesturing or motor movement at all.

**Voice technology** has become ubiquitous—it’s simple to use, but often can’t be harnessed for more than simple tasks. The systems for **parsing** speech are mostly set in place, but **comprehension** remains an unsolved problem.

There’s still a great landscape of work to be done in **collaborative** computing, too.

**Dynamicland** is **Bret Victor’s** research group in the spirit of Douglas Engelbart and Xerox PARC, exploring a physical and collaborative model of research and information processing.

🤔 Why it matters

**The work that we do as technologists is constrained by the tools that we have for building and working with computers.**

As I continue to explore topics in this newsletter around software, whether that’s modern tooling like JavaScript frameworks and build systems, or older technology like file systems and databases, I’ll be framing them with an eye toward the work of Doug Engelbart and other early human-computer interface pioneers.

How does this technology help augment human intellect? Does this technology make the world better? Does it make the world more equitable? This is probably a bit more ambitious than the average newsletter, but if we’re going to look ahead at the next sixty years of technology, we should approach it with that perspective.

🙋 Who to know

People

Companies