Wednesday, February 27, 2013

Hopefully a semi-unique approach to user interfaces

If advertisements along Boston's Red Line are any indication, smart devices are moving toward ever larger screens for users to interact with their media.  One distinct challenge of these larger screens is letting consumers still use their device comfortably on a regular basis, ideally without needing two hands all the time.  As consumers we are hyper-caffeinated and overstimulated; it would be a real shame if we had to put our coffee down to answer a phone call.  One solution, as put forth by Apple, is to simply restrict the width of their phones to the thumb reach of roughly a 90th-percentile human female (OK, I'm assuming here; I remember reading that their phone's width was partially dictated by ease of single-thumb use, I just don't know the specifics), and that is the only real solution to the problem I know of offhand.

What I would propose, since we are already embedding more and more sensing technology into our smartphones, is to take advantage of this expanded awareness and let the phone detect how it is being held.  No matter how you held your phone, it would know how you wanted to use it and adjust its default interface accordingly.
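If I were sketching this in code, the heart of it is just a mapping from detected grip to default layout.  Everything below (the Grip states, the sensor vocabulary) is invented for illustration; no real device exposes anything like this today:

```python
# A minimal sketch of the core decision, assuming some hypothetical
# sensing layer that can report which edges of the device are held.
from enum import Enum, auto

class Grip(Enum):
    RIGHT_HAND = auto()            # held in the right hand, thumb on screen
    LEFT_HAND = auto()             # mirror image of the above
    TWO_HANDED_LANDSCAPE = auto()  # both hands gripping the back
    ON_SURFACE = auto()            # lying flat on a table or dock

def default_layout(grip: Grip) -> str:
    """Map the detected grip to the interface the phone should present."""
    return {
        Grip.RIGHT_HAND: "keypad swept into the right thumb's reach",
        Grip.LEFT_HAND: "keypad swept into the left thumb's reach",
        Grip.TWO_HANDED_LANDSCAPE: "split input regions, one per thumb",
        Grip.ON_SURFACE: "entire surface active as input",
    }[grip]
```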
Figure 1. Phone Held in Right Hand
For example, if I am holding my phone in my right hand to make a call, the number pad would reposition itself so that at no point would I need to adjust my grip to reach the number 1 (as shown in Figure 1).  Ideally, whatever sensing technology I was using would be able to detect the dimensions of my individual fingers, allowing for truly dynamic sizing and positioning of the interface space.  Alternatively (I don't have a drawing; I try to limit my crappy art to four per day), if I were to grip the phone at a higher position, the interface space would move up accordingly.
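To give a rough sense of the geometry, here is how that keypad placement might be computed, assuming the sensor can report the thumb's pivot point and length.  All the numeric constants are my own guesses, not measurements:

```python
import math

def thumb_keypad(pivot_x: float, pivot_y: float, thumb_px: float):
    """Lay out a 4x3 keypad (1-9, *, 0, #) along arcs swept by the thumb.

    pivot is the thumb's base at the edge of the grip, in screen pixels
    (y grows downward); thumb_px is the measured thumb length.  Every key
    lands within thumb reach, so the grip never has to shift.
    """
    keys = "123456789*0#"
    positions = {}
    start, end = math.radians(105), math.radians(165)  # sweep range, right hand
    for row in range(4):                       # rows at increasing radius
        radius = thumb_px * (0.45 + 0.15 * row)
        for col in range(3):
            theta = start + (end - start) * col / 2
            x = pivot_x + radius * math.cos(theta)  # cos < 0: left of pivot
            y = pivot_y - radius * math.sin(theta)  # above the pivot
            positions[keys[row * 3 + col]] = (x, y)
    return positions
```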
Figure 2. Horizontal Positioning and Full Tablet
Placing the phone in a horizontal position with both hands gripping the back would create split input regions designed to accommodate the thumb size of whoever is holding the device.  Junior and Senior can both use the same tablet and feel confident that their fingers will find the buttons they are looking for.  A potential benefit I could see, if technologies like this were to become ubiquitous, would be more consistent user experiences across platform sizes: while the display might change size, the sensation of input in all but extreme sizes would feel roughly the same, allowing people to interact with technology much more efficiently.  The final interface mode is putting the device on a table or a docking station, at which point the entire surface becomes a viable input.  I should comment that I would be extremely surprised if people wanted a user interface as cut and dried as described here; the necessary fine tuning (and wow factor) of an interface like this would come from determining how much resistance the user has to apply before other parts of the screen become active as they grip the device.
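Here is a rough sketch of both pieces at once: the per-thumb split regions and the grip-pressure knob I just mentioned.  The normalized pressure units and the 0.6 default are stand-ins, not anything measured:

```python
def active_regions(screen_w, screen_h, left_reach_px, right_reach_px,
                   grip_pressure, activation_threshold=0.6):
    """Return active input rectangles (x, y, w, h) for a two-handed
    landscape grip, each sized to that thumb's measured reach.

    activation_threshold is the tuning knob from the text: until the
    user squeezes past it, only the two thumb regions accept input, so
    a shifting grip does not trigger the rest of the screen.
    """
    if grip_pressure >= activation_threshold:
        # Deliberate squeeze: open up the whole surface as input.
        return [(0, 0, screen_w, screen_h)]
    left_w = min(left_reach_px, screen_w / 2)
    right_w = min(right_reach_px, screen_w / 2)
    left = (0, 0, left_w, screen_h)
    right = (screen_w - right_w, 0, right_w, screen_h)
    return [left, right]
```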


As far as I know, the precise approach I am putting forth is relatively unique (I only did about 30 minutes of skimming patents and research documents, but nothing really popped out).  Microsoft did a research paper on interfaces designed to flexibly interact with either a single thumb or four fingers.  Here is one very broad patent that basically says you can have a touchscreen where parameters change according to other parameters (oh yeah, that seems legit; OK, I haven't really read it in depth enough to comment, I just have issues with software-based patents).  This other patent deals with adjusting the displayed interface based on user proximity (I'm 90% sure a semi-decent patent attorney could show the difference between the approaches, and hell, I'm not going to try making money off of this idea).
Aha: after spending too much time looking into it, I found that DoCoMo of Japan is doing some grip-interface research in smartphones.  Another dude wanted to just use a straight-up different input methodology, regardless of external sensors.  Yet another is an alternative UI, with no adaptive properties right now.

So, long story short, I am probably not making a truly unique proposal, but it was fun to get the idea out there.  At least I can honestly say that I only found the individual elements of what I am suggesting; the combination never showed up in the hour of research I did.  Please don't get mad if I am wrong.

Follow Up, March 20, 2016:  I have been meaning to post this follow-up for a while.  Itty-bitty radar chips could allow the smart devices of the future to implement this proposed adaptive user interface.
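To gesture at what a radar-driven version might look like, here is a toy nearest-centroid grip classifier.  The four-bin range profiles below are entirely invented; a real system (a Soli-class chip, say) would learn them from data:

```python
# Purely illustrative: classify grip from a tiny radar's normalized
# range profile by finding the nearest stored profile.
import math

GRIP_PROFILES = {
    "right_hand": [0.9, 0.3, 0.1, 0.0],
    "left_hand":  [0.1, 0.3, 0.9, 0.0],
    "two_handed": [0.8, 0.2, 0.8, 0.0],
    "on_surface": [0.0, 0.0, 0.0, 0.8],
}

def classify_grip(profile):
    """Return the grip whose stored profile is nearest (Euclidean)."""
    return min(GRIP_PROFILES,
               key=lambda g: math.dist(profile, GRIP_PROFILES[g]))
```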
