Mixed reality and AI bots: the new interaction paradigm

GUI—how it all started

Popularized by systems like the Apple Macintosh 128K, launched in 1984, and Microsoft's Windows 95, launched in 1995 and one of the most popular desktop operating systems of its time, the graphical user interface (or GUI) presents graphical elements such as windows, radio buttons, and check boxes, and harnesses a pointing device, the good old mouse we are now so accustomed to, for navigation across the UI. The GUI was pioneered at Xerox's PARC (Palo Alto Research Center), and many of its ideas made their way into popular mainstream desktop products from Apple, Microsoft, IBM, and others.

The GUI has come a long way, and today it is the de facto way of interacting with computers for creators and consumers alike. Over time, human–computer interaction has evolved to become more intuitive and natural with the advent of touch- and voice-based interactions, yet the majority of professional tools for creators remain fixated on the GUI and more traditional interaction techniques.


The evolution of more natural interactions

When Steve Jobs introduced the first-ever iPhone to the world in 2007, a device with touch as the primary medium of interaction between user and computer, it was one of the most path-breaking and unconventional interaction mediums of its time. Today, touchscreens are as ubiquitous a medium of human–computer interaction as the GUI itself.

Over the past decade, rapid advances in technology have enabled new ways of interacting with devices, and computing devices themselves have shrunk dramatically. Touch-based interactions involve a near-zero learning curve and are among the most intuitive. Today, we can use voice commands alone to perform simple tasks. Yet we still feel discordant and disconnected from reality, and our devices still lack the nature and warmth of a human companion.


Mixed reality—getting lost in a Galaxy far, far away

The prospect of putting on a pair of glasses and immersing yourself in a world far from reality seems mystical and exciting. However, mixed reality, an amalgamation of augmented and virtual worlds with reality, has long been a niche technology reserved for gamers and enthusiasts, and has never seen mainstream use as a primary (or even secondary) medium of human–computer interaction.

Over time, the technology has evolved to the point where mixed reality is capable of much more than offering glimpses into virtual worlds; it can actually control and transform the way we interact with computers. Mixed reality has opened up a plethora of opportunities for HCI.

Companies like Oculus and Microsoft are using mixed reality to completely reimagine the HCI paradigm and revolutionize how we interact with computers and smart devices. While Oculus is using VR to create more immersive and exciting ways of engaging with content, Microsoft is using HoloLens to turn Windows into the OS of the future, complete with holograms and augmented content.


Artificial intelligence and bots—making devices smarter and more human

At its Build 2016 conference, Microsoft announced the Microsoft Bot Framework, which uses deep learning and artificial intelligence to let people and organizations create bots. Chatbots bring smarter, automated ways for businesses to interact with customers, and allow apps to communicate intelligently with one another, enabling new ways of getting things done.
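To make the idea concrete, here is a minimal, hypothetical sketch of the turn-based logic that frameworks like the Bot Framework automate: match a user's message against a set of intents and send back a reply. The intent keywords and replies below are illustrative assumptions; real frameworks layer natural-language understanding, conversation state, and channel connectors on top of a loop like this.

```python
# Illustrative intent table (keywords and replies are made-up examples).
INTENTS = {
    "hours": ("open", "hours", "when"),
    "order": ("order", "buy", "purchase"),
}

REPLIES = {
    "hours": "We're open 9am-6pm, Monday to Friday.",
    "order": "Sure, what would you like to order?",
    None: "Sorry, I didn't understand. Could you rephrase?",
}

def detect_intent(message: str):
    """Return the first intent whose keywords appear in the message."""
    words = message.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return None  # no intent matched; fall back to a default reply

def reply(message: str) -> str:
    """One 'turn' of the bot: classify the message, look up a response."""
    return REPLIES[detect_intent(message)]
```

For example, `reply("When are you open?")` matches the `hours` intent, while an unrecognized message falls through to the default apology, which is the basic shape of a customer-service chatbot turn.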

Fusing the deep learning technology of AI bots into apps and giving personality to assistants like Siri and Cortana not only makes interacting with machines more natural, it gives our devices a human touch and ushers in a new paradigm of smarter, more humanlike devices.

Looking forward, it will be interesting to watch how forward-thinking companies and startups fuse artificial intelligence, mixed reality, and the Internet of Things to create new and unprecedented forms of human–computer interaction, and enable unforeseen possibilities with machines. For example, IoT devices like a Nest thermostat could someday communicate with autonomous, intelligent cars like a Tesla to heat or cool the house to just the right temperature, and a smarter Keurig could start brewing your coffee, so that when you arrive, everything's just right.
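The car-to-house scenario above can be sketched as a simple publish–subscribe exchange. Everything here is a hypothetical illustration, not a real Nest, Tesla, or Keurig API: the event name, payload, and thresholds are assumptions chosen to show the coordination pattern.

```python
class HomeHub:
    """A toy event bus: devices subscribe, and events fan out to all of them."""

    def __init__(self):
        self.devices = []

    def subscribe(self, device):
        self.devices.append(device)

    def publish(self, event, payload):
        for device in self.devices:
            device.on_event(event, payload)

class Thermostat:
    """Pre-heats the house when the car reports it is getting close."""

    def __init__(self):
        self.target_celsius = None

    def on_event(self, event, payload):
        if event == "arrival_eta" and payload["minutes"] <= 30:
            self.target_celsius = 21  # assumed comfortable temperature

class CoffeeMaker:
    """Starts brewing only when arrival is imminent."""

    def __init__(self):
        self.brewing = False

    def on_event(self, event, payload):
        if event == "arrival_eta" and payload["minutes"] <= 10:
            self.brewing = True

# The car (the publisher in this sketch) announces its estimated arrival.
hub = HomeHub()
thermostat, coffee_maker = Thermostat(), CoffeeMaker()
hub.subscribe(thermostat)
hub.subscribe(coffee_maker)
hub.publish("arrival_eta", {"minutes": 8})
```

After the publish, both devices have reacted to the same event according to their own thresholds, which is the essence of the loosely coupled device-to-device coordination the paragraph imagines.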