Paper Title: Gesture Search: A Tool for Fast Mobile Data Access
Author: Yang Li
Author Bio: Yang Li is a Senior Research Scientist at Google. Earlier, he was a Research Associate in Computer Science and Engineering at the University of Washington, where he helped found DUB (Design: Use: Build), the cross-campus HCI community.
Presentation Venue: This paper was presented at UIST '10, the 23rd annual ACM Symposium on User Interface Software and Technology, held in New York.
Summary: Hypothesis: In this paper, the author presents Gesture Search, a tool that lets users quickly access mobile phone data, such as applications and contacts, by drawing gestures. Gesture Search demonstrates a novel way of coupling gestures with standard GUI interaction, for example by drawing gestures directly on the phone's touch screen. Its intention is to provide a quick and less stressful way for users to access mobile data.
How the hypothesis was tested: He begins the demonstration of Gesture Search with an example in the paper: a user wants to call a friend whose name starts with 'A'. To search for her number, the user draws the letter A on the screen, which automatically starts a search as soon as the gesture is finished. To test the software, the team made Gesture Search downloadable through the company's internal website and asked Android phone users in the company to try it out. The users were not given any instructions for using Gesture Search; instead, they completed surveys at the end of the study. A rough sketch of the search-as-you-draw idea appears below.
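To make the search-as-you-draw idea more concrete, here is a minimal sketch (not the author's implementation) of how gesture-driven incremental search could behave: each drawn gesture is assumed to come back from a handwriting recognizer as one or more candidate characters, and the growing, possibly ambiguous query is matched against contact and application names. The item list, function name, and recognizer output format are all hypothetical.

```python
# Minimal sketch of gesture-driven incremental search (hypothetical; not the
# paper's actual implementation). Each drawn gesture is assumed to yield one
# or more candidate characters from a handwriting recognizer.

from itertools import product

# Hypothetical mobile data: contact and application names.
ITEMS = ["Anne", "Andrew", "Alarm Clock", "Browser", "Camera", "Calendar"]

def search(candidate_chars_per_gesture):
    """Return items matching any spelling of the ambiguous gesture query.

    candidate_chars_per_gesture: a list like [['a'], ['n', 'm']], where each
    inner list holds the recognizer's candidate letters for one gesture.
    """
    results = set()
    # Expand every combination of candidate characters into a query string.
    for combo in product(*candidate_chars_per_gesture):
        query = "".join(combo)
        for item in ITEMS:
            # Match the query as a prefix of any word in the item's name.
            if any(word.lower().startswith(query) for word in item.split()):
                results.add(item)
    return sorted(results)

# After the user draws one gesture recognized as 'a':
print(search([["a"]]))              # ['Alarm Clock', 'Andrew', 'Anne']
# After a second gesture ambiguously recognized as 'n' or 'm':
print(search([["a"], ["n", "m"]]))  # ['Andrew', 'Anne']
```

The point of the sketch is that results can update after every gesture, so a user often needs only one or two gestures before the intended item appears, which is consistent with the usage data reported below.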
Result of the Hypothesis: From the 125 randomly selected users who met the two criteria set by the team, they collected 5,497 search sessions in which a user drew a gesture query and selected a search result. They found that 61% of the collected sessions used only a single gesture, and 82% used two gestures or fewer.
Discussion:
Effectiveness: I believe Gesture Search is an excellent tool for quickly and efficiently accessing, and even managing, the data on a device. If a person gets used to applying the tool to the tasks where it works most efficiently, it can save the user a great deal of time and effort.
Reasons for being Interesting: I found this technology very similar to the voice recognition features found on phones and computers. Instead of typing the first letter of a friend's name, one can simply scribble that letter and the phone automatically starts searching the phone book. I like how it saves the time it would otherwise take to open the phonebook's search menu, locate the right letter on the keyboard, and press it.
Faults: This feature, if used extensively, can take a user more time than the usual method would. For example, if a user decides to draw a "C" on the device every time he/she wants to open the camera, it takes more effort and time than pressing a single button dedicated to launching the camera. Besides, in order to draw something on the screen, one generally has to use both hands (holding the device with one and drawing the gesture with the other), which can be impractical in some situations.