Wednesday, December 14, 2011

Blog #23: User-Defined Motion Gestures for Mobile Interaction

Paper Title: User-Defined Motion Gestures for Mobile Interaction

Authors: Jaime Ruiz, Yang Li and Edward Lank

Author Bios:
Jaime Ruiz: a fifth-year doctoral student in the HCI Lab at the Cheriton School of Computer Science, University of Waterloo. His advisor is Dr. Edward Lank.

Yang Li: a Senior Research Scientist at Google. Before joining Google's research team, Yang was a Research Associate in Computer Science & Engineering at the University of Washington, where he helped found DUB, a cross-campus HCI community.

Presentation Venue: CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, published by ACM, New York.

Summary:
Hypothesis: Although modern smartphones contain sensors that detect three-dimensional motion, best practices for motion-gesture design are not yet well understood.
How the hypothesis was tested: The authors conducted a "guessability study" in which participants performed tasks and proposed the gesture or motion they felt best mapped to each task. Users were told to treat the smartphone as a "magic black brick": the authors removed all recognition technology from it so the phone's existing gesture vocabulary could not influence the participants, who created gestures for each task from scratch. Participants were recorded on audio and video, and logging software on the phone collected data to help answer "what was the user trying to do?" All participants had prior, relevant smartphone experience.
Results: Users most commonly proposed gestures unrelated to existing smartphone gestures; for example, a popular gesture for returning to the home screen was shaking the phone. Overall, the results showed broad agreement among participants, both in the gestures themselves and in the reasoning behind them. The think-aloud video data also let the authors understand each user's thought process while creating a gesture. Tasks considered "opposites" of each other were assigned similar gesture motions performed in opposite directions.
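To make "agreement among gestures" concrete, here is a minimal sketch of how an agreement score for one task can be computed, in the spirit of the guessability-study metric from the gesture-elicitation literature. This is my own illustration, not code or exact notation from the paper; the gesture labels are hypothetical.

```python
# Hypothetical sketch: agreement score for one task in a guessability study.
# For each group of participants who proposed the same gesture, square the
# group's share of all proposals, then sum. 1.0 means everyone agreed.
from collections import Counter

def agreement_score(gestures):
    """gestures: list of gesture labels proposed for a single task."""
    counts = Counter(gestures)
    total = len(gestures)
    return sum((n / total) ** 2 for n in counts.values())

# Example: 3 of 4 participants shake the phone to reach the home screen.
print(agreement_score(["shake", "shake", "shake", "flip"]))  # 0.625
```

A higher score means the proposed gestures clustered on fewer distinct motions, which is the kind of consensus the authors report observing.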

Discussion:
Effectiveness: The authors wanted a set of simple, natural motion gestures for interacting with a smartphone, and what better way to obtain one than to let a group of smartphone users create it themselves? This was an excellent approach, and the paper is a good example of progress in the mobile interaction arena. The authors achieved their goal, but it would have been interesting to see follow-up studies verify their results.





