[This piece, written by Andrew Hsu, technology strategist at Synaptics, is the second in a series of posts about cutting-edge areas of innovation. The series is sponsored by Microsoft. Microsoft authors will participate, as will other outside experts.]
The mobile phone has evolved far beyond its humble origins in voice communication and now commands serious attention as the computing platform for the masses. Most people, however, still want a phone that fits comfortably in a pocket, and that places severe mechanical design constraints on handsets.
Because the phone serves as a portal into the cloud (or, in older client-server parlance, a client), the computing power it needs on board is reduced. At the same time, Moore's Law has been instrumental in increasing the computation available in handsets: nearly all data-capable cell phones now employ a dedicated applications processor just to run this new software.
What may be surprising, however, is the recent revolution in the mobile phone's user interface that has made cloud computing usable. Early attempts at cloud computing on a mobile phone (think early WAP-enabled phones) relied on traditional keypads and small, text-focused displays to "bring the power of the Internet" into a phone. Unfortunately, the user experience was underwhelming. While such text-based interaction was acceptable in places like Japan (as popularized by NTT DoCoMo), users elsewhere were unwilling to accept so arcane an interface or such limited applications.
So, in addition to more (and cheaper) computing power, the real revolution was the introduction of a more intuitive user interface. This includes larger, higher-resolution displays (such as WVGA AMOLED panels) and better user controls. Once these large displays were introduced, consuming the entire surface of a mobile phone, a natural question arose: "Where do you put the buttons without growing the phone?"
Slide-out mechanisms certainly let phones hide the keypad (and, in some cases, a keyboard), but the introduction of the touchscreen (as popularized by the iPhone and the HTC G1) lets users interact directly with the display and effectively eliminates all other hardware input controls. The adoption of touchscreens that did not support a stylus on the iPhone, G1, and others, while initially perceived as a limitation, quite possibly triggered the revolution in using a mobile phone to access the cloud.
Touch-only capacitive touchscreens forced designers to think very hard about how best to build an intuitive user interface, and the result was breathtakingly innovative software. This not only drove the adoption of capacitive touchscreen technology in handsets but, more importantly, enabled the mobile phone to ascend to its role as a portal into the cloud.
But does this mean there's no need for better user interface hardware? Absolutely not. Newer touchscreen features such as proximity and force sensing, along with other sensing modalities like grip sensing, will continue to give a handset an even better sense of its surroundings and of user intent. Certainly there will be a huge software exercise to ensure that all the new input controls work harmoniously and intuitively. But the relevance of the user interface hardware, which has already enabled the migration from multi-tap to multi-touch, has never been greater.
Be sure to see the previous article in this series:
Put your finger on it: The future of interactive technology