Paper Title: Sensing Foot Gestures from the Pocket
Authors: Jeremy Scott, David Dearman, Koji Yatani, and Khai N. Truong
Author Bios:
Jeremy Scott received his B.Sc., M.Sc., and Ph.D. in Pharmacology and Toxicology from the University of Western Ontario. He is currently an Assistant Professor and a Research Scientist at the University of Toronto.
David Dearman is a Ph.D. student at the University of Toronto. His research attempts to bridge the fields of Human-Computer Interaction, Ubiquitous Computing and Mobile Computing.
Koji Yatani is a Ph.D. student at the University of Toronto. His research interests lie in Human-Computer Interaction and ubiquitous computing, with an emphasis on hardware and sensing technologies.
Khai N. Truong is an Associate Professor in the Department of Computer Science at the University of Toronto. His research interests lie in Human-Computer Interaction and Ubiquitous Computing.
Presentation Venue: This paper was presented at UIST '10, the 23rd Annual ACM Symposium on User Interface Software and Technology, held in New York.
Summary:
Hypothesis: In this paper, the authors study the human capability for performing foot-based interactions that involve lifting and rotating the foot while pivoting on the toe or heel. The study is motivated by the problem of controlling mobile devices that are often stowed in users' pockets. The authors develop a system that learns and recognizes foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on the hip. The system uses acceleration data recorded by the device's built-in accelerometer and a machine learning approach to recognize gestures.
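The paper itself does not include code, so the following is only a sketch of a typical pipeline, not the authors' actual implementation: a recognizer segments the live accelerometer stream into windows and reduces each window to a fixed-length feature vector for a classifier. The feature set here (per-axis mean, standard deviation, and energy) is my assumption, chosen because it is common in accelerometer-based gesture recognition.

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one gesture window of raw accelerometer samples.

    `window` is an (N, 3) array of x/y/z acceleration readings captured
    while the phone sits in the user's pocket. Mean, standard deviation,
    and energy per axis are common accelerometer features; the paper's
    exact feature set may differ.
    """
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    energy = (window ** 2).mean(axis=0)
    return np.concatenate([mean, std, energy])  # 9-dimensional feature vector
```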
The authors designed four gesture movements for the study of the foot-based interaction space (see the sketch after this list):
Axis of Rotation:
- Dorsiflexion: rotation of the ankle such that the angle between the shin and the foot decreases (the toes lift toward the shin)
- Plantar flexion: rotation of the ankle such that this angle increases (the heel rises)
- Heel rotation: internal and external rotation of the foot and leg while pivoting on the heel
- Toe rotation: internal and external rotation of the foot and leg while pivoting on the toe
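For reference, these four gesture classes can be written down as a simple enumeration (a sketch; the class names are mine, not identifiers from the paper):

```python
from enum import Enum

class FootGesture(Enum):
    DORSIFLEXION = "dorsiflexion"        # toes lift toward the shin
    PLANTAR_FLEXION = "plantar flexion"  # toes press down, heel rises
    HEEL_ROTATION = "heel rotation"      # foot swivels, pivoting on the heel
    TOE_ROTATION = "toe rotation"        # foot swivels, pivoting on the toe
```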
Selection Task:
The participants were asked to perform a target selection task with their dominant foot while standing.
How the hypothesis was tested: Six right-footed participants were recruited from the University of Toronto. They were asked to select targets presented on a laptop placed on a table in front of them. Participants were instructed to move their foot back to the origin before each trial and were then prompted to hold down the left-click button on a wireless mouse to begin the trial. A red line then appeared at an angular target, and the participant moved his or her foot to the target angle along the axis instructed by the system. No visual feedback was given while making a selection, in order to simulate eyes-free interaction.
Result: Targets closer to the origin were selected more quickly than targets at the extremity of the selection range. The selection time for the 10° dorsiflexion target was significantly faster than for the other targets. The median selection error across all targets was 11.77° for dorsiflexion, 6.31° for plantar flexion, 8.55° for toe rotation, and 8.52° for heel rotation.
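The paper's exact error computation is not reproduced here, but assuming selection error is the unsigned difference between the selected angle and the target angle, the reported medians would be computed along these lines (the trial data below is invented purely for illustration):

```python
import numpy as np

# Hypothetical per-trial data: the target angle and the angle the
# participant actually selected, both in degrees.
targets = np.array([10, 20, 30, 40, 10, 20, 30, 40])
selected = np.array([12.5, 17.0, 36.2, 45.1, 8.9, 24.3, 27.5, 52.0])

selection_error = np.abs(selected - targets)  # unsigned error per trial
print(f"median selection error: {np.median(selection_error):.2f}°")
```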
Discussion:
Effectiveness: Naive Bayes classification achieved 82-92% accuracy on the gesture space proposed in the capabilities evaluation, with the mobile device attached to the side of the user's leg. This showed that foot gestures can be recognized from the device's integrated accelerometer alone. However, some factors limit recognition accuracy.
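The authors' classifier code is not published; the sketch below shows the same general idea, a naive Bayes classifier over per-window feature vectors, using scikit-learn's GaussianNB. The random feature matrix stands in for real features (such as those produced by the extract_features sketch above) only so the example runs end to end:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 40 labelled 9-dimensional feature vectors, 10 per
# gesture class. Real features would come from recorded accelerometer
# windows; these random ones exist only to make the sketch runnable.
X = rng.normal(size=(40, 9))
y = np.repeat(["dorsiflexion", "plantar flexion",
               "heel rotation", "toe rotation"], 10)

clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print(f"mean cross-validated accuracy: {scores.mean():.1%}")
```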
Reasons for being interesting: The chief advantage of this technique is that it does not demand visual or cognitive attention from the user during operation. If the solutions the authors suggest for the real-life confounds that can interfere with this technology are implemented, this interface could greatly enhance the user experience and lower accessibility barriers for the visually impaired. For example, a blind user listening to music could switch to a different song with a dorsiflexion gesture, without ever taking the phone out of their pocket.
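As an illustration of how such an interface might be wired up, the dispatch table below maps recognized gesture labels to media-player commands. These particular bindings are hypothetical and do not come from the paper:

```python
# Hypothetical gesture-to-command bindings for eyes-free music control.
def next_track():
    print("skipping to next track")

def previous_track():
    print("returning to previous track")

GESTURE_ACTIONS = {
    "dorsiflexion": next_track,
    "plantar flexion": previous_track,
}

def on_gesture(gesture: str) -> None:
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action()  # unmapped gestures are simply ignored

on_gesture("dorsiflexion")  # -> skipping to next track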
Faults: In the paper's second study, the placement and orientation of the mobile device in the participant's pocket were fixed, which would not always be the case in a real-world setting. It is also very difficult to differentiate foot gestures from other activities such as walking and running. However, for each of these faults, the authors propose a plausible, implementable solution.
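One simple (and admittedly crude) way to address the walking/running confound, not necessarily the authors' solution, is to gate classification on the overall motion level of a window:

```python
import numpy as np

def looks_like_gesture(window: np.ndarray, max_std: float = 3.0) -> bool:
    """Crude gate that rejects windows dominated by gross body motion.

    Walking and running produce sustained, high-variance acceleration on
    all axes, while a deliberate foot gesture is a short, bounded motion.
    Thresholding the per-axis standard deviation is one simple heuristic;
    the threshold of 3.0 m/s^2 is an illustrative guess, not a value
    from the paper.
    """
    return bool(window.std(axis=0).max() < max_std)
```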