The Evolution of Interaction
From Machine Language to Natural Language: A Personal Journey Through the Evolution of Human–Computer Interaction
As a technologist by training, I’ve had a front-row seat to the extraordinary transformation in how humans communicate with machines. It has been a remarkable journey, from the painstaking days of programming in machine language to the fluid, intuitive interactions we experience today.
After earning my electrical engineering degree in the 1990s, my first job was as an engineer developing and testing C and C++ compilers on mainframes, one of the most intricate fields there is: mapping the complexity of human intention into rigid machine instructions. During that time, we had to program in low-level machine language, sometimes even writing binary code, just to communicate with computers and get them to perform tasks. I could picture how read operations moved through memory and how write operations changed it byte by byte. I could even debug complex triple pointers by mentally mapping the ones and zeros, like the streams of code in The Matrix.
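For readers who never had the pleasure, a triple pointer is simply a pointer to a pointer to a pointer; debugging one meant holding three levels of memory addresses in your head at once. A purely illustrative C sketch (the variable names are mine):

```c
#include <stdio.h>

int main(void) {
    int value = 42;
    int *p = &value;    /* p holds the address of value */
    int **pp = &p;      /* pp holds the address of p    */
    int ***ppp = &pp;   /* ppp holds the address of pp  */

    /* Each '*' follows one address; three hops reach the data. */
    printf("%d\n", ***ppp);   /* prints 42 */

    /* Writing through all three levels changes the original. */
    ***ppp = 7;
    printf("%d\n", value);    /* prints 7 */

    return 0;
}
```

Tracing a bug through those three hops, with nothing but hexadecimal addresses to go on, was everyday work back then.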
In many ways, these early years of computing were like learning a second language. Every command required precise syntax, correct spelling, and a clear understanding of how machines read instructions. There was no room for ambiguity. We could only communicate in ways the machine understood, so most of the burden fell on the programmers. This was the earliest form of “prompt engineering”, carefully writing each line of code so the machine would interpret it correctly.
Graphical user interfaces (GUIs) like Windows 3.0 and the Apple Macintosh were a revolutionary step forward, letting users bypass the command line and work with their computers using a mouse. Suddenly, we didn't need to type cryptic commands into a terminal just to get things done. GUIs let us interact with machines visually and intuitively.
In 2007, the iPhone ushered in a new era in which interaction became even more natural. The need to memorize commands or navigate complex menus diminished; a user no longer needed technical expertise to engage with their device. Still, we had to understand which apps to use and how to configure settings as we adapted our personal technology ecosystems to fit our lives.
With the rise of artificial intelligence, the paradigm has shifted yet again. Today, with AI-powered systems like ChatGPT, we can express our desires in plain language, English or any other, and the system interprets, learns, and responds in ways that would have been impossible just a decade ago. AI has become the ultimate abstraction layer between humans and machines, letting us skip over the technical intricacies of underlying technologies, apps, or operating systems. The human–machine relationship has flipped: for the first time, humans no longer need to adapt to machines. The machine adapts to us. AI understands the nuances of human language, our intentions, and even our occasional failures to express ourselves clearly. The need to translate our thoughts into “machine speak” is rapidly disappearing, and this shift marks a fundamental evolution in how we engage with technology.
The Implications of a New AI-Native User Interface
We are on the cusp of another fundamental evolution in the user interface: the advent of a fully AI-native interface, one where human language itself becomes the programming interface. An AI-native interface could eliminate the need for traditional apps, or even operating systems as we currently understand them. We would no longer need to tell the system which app to open or how to execute commands. Instead, the interface could operate through language, with real-time feedback and response shaping our interaction as it goes.
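To make the idea concrete, consider what "language as the programming interface" might look like in code. The C sketch below is deliberately naive: the keyword matching stands in for what a real system would hand to a language model, and the actions are hypothetical stand-ins. But it shows the shape of the shift: the user's sentence, not an app or a menu, is the entry point.

```c
#include <stdio.h>
#include <string.h>

/* Toy stand-ins for real system actions. */
static void dim_lights(void) { puts("Dimming the lights..."); }
static void play_music(void) { puts("Starting the playlist..."); }
static void unknown(void)    { puts("Sorry, I didn't catch that."); }

/* In a real AI-native interface, this step would be a language
 * model interpreting intent; keyword matching stands in here. */
static void interpret(const char *utterance) {
    if (strstr(utterance, "lights"))     dim_lights();
    else if (strstr(utterance, "music")) play_music();
    else                                 unknown();
}

int main(void) {
    /* The sentence itself is the interface: no app, no menu. */
    interpret("it's too bright in here, fix the lights");
    interpret("put some music on");
    return 0;
}
```

The point is not the matching logic; it is that the program's surface area has collapsed to a single function that accepts human language.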
The future is not just about talking to machines. It is about interacting with them in more natural ways. With AI and machine vision, devices can see what we see and read our gestures, facial expressions, and surroundings. You might wave your hand to clear a notification, use your eyes to scroll, or move your fingers to control a virtual object in augmented reality. This is often called spatial computing. It blends the physical and digital worlds so interacting with technology feels more like moving through real space than tapping on a screen. Instead of typing and clicking, you simply look, gesture, and move. From running your smart home to navigating 3D worlds, you may never need to touch a screen again.
This shift creates interesting investment opportunities: a new kind of interface could emerge, one driven less by apps and more by AI, language, and even body movement. As interfaces become AI-native, the line between how humans express themselves and how machines execute tasks starts to blur. Our words and movements become the gateway to the digital world. It is a major leap in computing and a powerful opportunity to back the next wave of technology.
A Future of AI-Native Interfaces
The future of user interfaces is simple. Technology should adapt to us, not the other way around. As AI gets better at understanding our words, gestures, and intent, the gap between human thinking and machine logic starts to close.
From a founder’s perspective, this changes what we build. From a technologist’s perspective, it changes how systems are designed. And from a VC’s perspective, it marks a new platform shift. When the interface changes, entire ecosystems are rebuilt.
We are still early. The winners will not just add AI to old products. They will rethink the experience from the ground up. If we get this right, technology will feel less like a tool and more like a natural extension of how we think and create.