Have you ever wondered who is behind Google Glass? The story of Glass, an eyepiece designed to put Android in your eyeballs, has been told. So has the story of what’s in Glass. It’s an expensive, high tech gadget and no one (including staff at Google Ventures) really knows what it’s for. Maybe some clues lie with the people working on Project Glass.
Mark Spitzer, Director of Operations at Google X
To develop and commercialize Glass, Google acquired several patents from The MicroOptical Corporation. Spitzer founded MicroOptical in 1995 and served as its CEO before the company was rebranded MyVu Corporation in 2007. Spitzer stayed on as CTO, but MyVu didn’t survive the 2008 recession.
Google Glass bears a striking resemblance to MicroOptical’s systems for the military. According to Spitzer, MicroOptical designed and delivered unique head-mounted display systems to the US military; established a multi-year relationship with an ophthalmics company; designed and delivered commercial display systems for industrial applications; and made consumer video display products sold in US retail stores.
Tom Chi, User Experience Team Lead at Google X
In several public talks, Chi claims to have prototyped Google Glass in very short order. Maybe if his team had spent more time considering the user experience, consumers would have a better idea of what Glass is supposed to be for.
Prior to heading up the Glass user experience team, the only interfaces Chi worked on were optimized for desktop systems, such as Microsoft Outlook and Yahoo Search. But putting an interface in the eye is a fundamentally different experience for the user. Chi’s approach was to treat the eye as though he were designing for a desktop or handheld device.
Adrian Wong, Professional Daydreamer at Google X
A self-described daydreamer, Wong worked on classified projects at Sandia National Laboratories for more than five years before joining Google X in 2011. He worked as Glass technical lead for the main PCB and display electronics subsystems.
Wong has also contributed to several issued patents that hint where Glass may be headed: Unlocking a screen using eye tracking information, which suggests the user will launch certain screens or functions by reading text on future Glass products; Displaying sound indications on a wearable computing system; and Ad hoc sensor arrays.
Anurag Gupta, Google Glass Optics Lead at Google X
Gupta is responsible for all current and future optical light engine architectures, metrology tools, and individual optical components for Glass. He manages the optics team and has filed more than 15 patents (although several of these name his prior employers, Hewlett-Packard and Optical Research Associates (Synopsys), as assignees rather than Google).
Mat Balez, Glass Senior Product Manager at Google X
Balez worked on the core device interaction model, the voice command experience, camera features, communication features (SMS, phone calls, email), as well as the overall system UI. Previously, he was project manager for Google Maps for mobile, Google Latitude and Google+ Local. His experience at Google gives a good indication of the priorities for Glass.
Steve Lee, Project Glass Director at Google X
Prior to leading product management for Google X, Lee worked on software project management for social media development. Given his directorship role, it is notable that he has no prior experience in hardware or interface development for products that project images in a heads-up display such as Glass.
Stephen Lau, Senior Software Engineer & Technical Lead at Google X
An expert in Android, Lau’s role in Project Glass is a clear indication that Google intends the software to be based on its existing platform.
Looking at the people who developed Glass, and their prior experience, it becomes clear that the product was rather hastily conceived and launched as a way to bring Spitzer’s military products to consumers using Android as a platform. This may have been a huge missed opportunity for Google to think through the fundamental differences between operating a handheld or desktop system and operating a system that interfaces primarily with the eye.
Google is clearly focusing on Android, which is all about hands and voice. But a system that interfaces with the eye should not require the use of hands and voice. Can Android be retooled to empower the user in a hands-free voice-free way? Unfortunately, the present iteration of Glass available to developers doesn’t bode well for revolutionizing mobile devices.