Monday, October 17, 2011

Blog #20: The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only ANOVA Procedures

Paper Title: The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only ANOVA Procedures


Authors: Jacob O. Wobbrock, Leah Findlater, Darren Gergle and James J. Higgins


Authors Bio:
Jacob O. Wobbrock is an Associate Professor in the Information School and an Adjunct Associate Professor in the Department of Computer Science & Engineering at the University of Washington. He works in the field of HCI, combining computer science, interaction design and psychology to investigate novel user interface technologies, input and interaction techniques, human performance with computing systems, and accessible, mobile & surface computing interfaces.



Leah Findlater is a postdoctoral researcher in The Information School, working with Dr. Jacob Wobbrock. Her research interests include personalization, accessibility, and information and communication technologies for development (ICTD). She is also a member of the AIM Research Group.


Darren Gergle is an Associate Professor in the departments of Communication Studies and Electrical Engineering & Computer Science at Northwestern University. He also directs the CollabLab: The Laboratory for Collaborative Technology. His research is in the fields of Human-Computer Interaction (HCI), Computer-Supported Cooperative Word (CSCW) and Computer-Mediated Communication (CMC) with an interest in developing a theoretical understanding of the role that visual information plays in supporting communication and small group interactions.


James J. Higgins is a Professor at the Kansas State University in the Department of Statistics. 




Presentation Venue: CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York)


Summary:
Hypothesis: The authors of this paper present the Aligned Rank Transform (ART) for nonparametric analysis of factorial experiments using the familiar F-test. The ART offers advantages over more complex methods in its simplicity and usability. The authors offer the first generalized mathematics for an N-way ART, along with programs called ARTool and ARTweb to make alignment and ranking easy. By re-examining three examples of published data using the ART, they exhibit its benefits.
The authors describe Conover and Iman's Rank Transform (RT), which applies ranks, averaged in the case of ties, over a data set and then uses the parametric F-test on the ranks, yielding a nonparametric factorial procedure. They note that this process was later found to produce inaccurate results for interaction effects, and they present the ART as a solution.
How the hypothesis was tested: The authors describe the ART procedure in five steps:
Step 1: This step shows how the residual is computed for each raw response Y


residual = Y - cell mean


Step 2: This step shows how the authors compute estimated effects for all main and interaction effects. They give the estimate for the main effect of a factor A with response Yi, and then extend it to two-way, three-way, and four-way interactions, and ultimately to general N-way effects.


Step 3: This step shows how the authors compute the aligned response Y' by adding the estimated effect of interest back to the residual


Step 4: This step assigns averaged ranks to a column of aligned observations Y' to create Y''


Step 5: This step shows how to perform a full-factorial ANOVA on Y''.
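

To make the procedure concrete, here is a minimal sketch of these five steps for a two-factor (A x B) design, written in Python with pandas. The column names and the helper function are hypothetical illustrations of the procedure described above, not the authors' ARTool code.

import pandas as pd

def art_align_rank(df, effect):
    # df is a long-format DataFrame with factor columns "A", "B" and response "Y"
    grand = df["Y"].mean()
    cell = df.groupby(["A", "B"])["Y"].transform("mean")
    a_mean = df.groupby("A")["Y"].transform("mean")
    b_mean = df.groupby("B")["Y"].transform("mean")

    # Step 1: residual = raw response minus its cell mean
    residual = df["Y"] - cell

    # Step 2: estimated effect for the effect of interest
    if effect == "A":                     # main effect of A
        estimated = a_mean - grand
    elif effect == "B":                   # main effect of B
        estimated = b_mean - grand
    else:                                 # A x B interaction
        estimated = cell - a_mean - b_mean + grand

    # Step 3: aligned response Y' = residual + estimated effect
    y_aligned = residual + estimated

    # Step 4: assign averaged ranks to Y' to obtain Y''
    y_ranked = y_aligned.rank(method="average")

    # Step 5 (not shown): run a full-factorial ANOVA on y_ranked,
    # interpreting only the effect the data were aligned for
    return y_aligned, y_ranked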


The authors also built ARTool and ARTweb to make alignment and ranking easy; both tools expect the data to be in long format.


Discussion:
Effectiveness: This is a great method for performing nonparametric analysis of factorial experiments using the familiar F-test. It is a practical tool that makes alignment and ranking easy and accurate and can be applied to a wide range of experimental data sets.
Faults: The ART has some limitations. For data with a very high proportion of ties, the ART simply replaces those ties with tied ranks. If the data exhibit extreme skew, the ART will reduce that skew, which may be undesirable if the shape of the distribution is meaningful.

Blog #19: Reflexivity in Digital Anthropology

Paper Title: Reflexivity in Digital Anthropology


Authors: Jennifer A. Rode


Authors Bio: 
Jennifer A. Rode is an Assistant Professor at Drexel's School of Information, as well as a fellow in Digital Anthropology, at University College London. In her dissertation research, she used ethnographic approaches to create grounded theory that examined gender and domestic end-user programming for computer security. Her work discusses how the relationship between technology and identity is negotiated, especially gendered identity and the presentation of an individual's technical ability.


Presentation Venue: CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York)


Summary:
Hypothesis: In this paper, the author gives an overview of the key aspects of ethnography and its use in HCI, as well as of the anthropological approach. She relates participant observation practices to participatory design and the socio-technical gap, and discusses the ways ethnography can address them.
The author establishes what anthropologists can contribute to HCI by writing reflexive ethnographies. The people doing this in the digital space are called digital anthropologists, and they write digital ethnographies. 
The author proposes a twofold purpose for this research:
1) The HCI community needs to differentiate the forms and variations of ethnography that may be relevant for the field of anthropology.

2) She proposes how this can lead to CHI benefiting from anthropological studies
How the hypothesis was tested: The paper consists of the author proposing different methods to structure participant observation studies, with examples of such studies performed in the past. No experiments were performed to prove the hypothesis.
Methods: The author discusses the concept of reflexivity based on its methodological differences from positivist approaches and their differing orientations to the production of scientific knowledge. She illustrates how anthropological ethnography's reflexivity contributes to design and theory in HCI. She describes three forms of anthropological writing, explains key elements of its technique, and finally discusses where ethnography is used in the design process in CHI in order to highlight how digital ethnography can contribute.


The author provides a detailed classification of the Styles of Ethnographic Writing in her paper:
Realist: The realist ethnography is the dominant form of ethnographic text within HCI. She discusses Van Maanen's theory and tradition of ethnography.
Confessional: She discusses the more reflexive confessional ethnography approach, which broadly provides a written form for the ethnographer to engage with the nagging doubts surrounding the study and discuss them textually.
Impressionistic: Another approach that shows promise for HCI is Van Maanen's impressionistic ethnography, which also has reflexive roots. The author discusses impressionistic ethnography further and gives examples of where it can be used.




Discussion:
Effectiveness: The paper gives great insight into the real-world appropriation of technology, how it is situated within social conventions, and how the realities of daily life are a vital part of design. The author's intention is to propose a solution to the debates raised by experimenting with the form of ethnographic text - the experimental, interpretive, dialogical, and polyphonic. The paper includes a cluster of examples from past studies, and it suggests a valid solution to existing dilemmas in ethnography and anthropology.

Saturday, October 15, 2011

Blog #18: Biofeedback game design: using direct and indirect physiological control to enhance game interaction

Paper Title: Biofeedback game design: using direct and indirect physiological control to enhance game interaction


Authors: Lennart E. Nacke, Michael Kalyn, Calvin Lough, Regan Mandryk


Authors Bios:
Lennart E. Nacke, who holds a PhD in digital game development, was a postdoctoral research associate in the Interaction Lab from 2010-2011. Since August 2011, he has been an assistant professor in the game development and entrepreneurship program at UOIT. His research focuses on the creation and analysis of digital gaming environments and mechanics. He is interested in physiological player-game interaction and in developing methodologies and tools for evaluating player emotion and attention.



Michael Kalyn is a summer student working for Dr. Mandryk. He is a graduate in Computer Engineering and in his 4th year of Computer Science. His tasks this summer will be related to interfacing sensors and affective feedback.

Calvin Lough is a student at the University of Saskatchewan.



Regan Mandryk is an Assistant Professor in the Interaction Lab in the Department of Computer Science at the University of Saskatchewan. Her main research areas are affective computing, ubiquitous and mobile gaming, and interaction techniques.


Presentation Venue: CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York)


Summary:
Hypothesis: The goal of the authors is to provide gamers with natural and realistic interaction and experiences. The authors propose a system of direct and indirect physiological sensor input to augment game control. They discuss the concept of affective gaming, in which the player's current emotional state is used to manipulate gameplay. Hence, the system senses a player's emotion and arousal and loops this information back into the game.
How the hypothesis was tested: To investigate direct and indirect physiological control, the authors developed a single-player 2D side-scrolling shooter game that used standard controller mappings in Xbox360 shooter games. To evaluate the relative appeal of direct and indirect physiological control, participants played three versions of a game, two augmented with physiological input and one control condition. The sensor mappings and their respective thresholds for each game mechanic were developed using iterative prototype testing for five months, gathering feedback from more than 50 individuals before this study.
The study used a three-condition (2 physiological variations, 1 control with gamepad only) within-subjects design. The game was played on a Dell computer running Windows XP. 10 participants completed the study.
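
As a rough illustration of the kind of sensor-to-mechanic mapping described above (the function, thresholds, and calibration values here are hypothetical, not the authors' implementation), a physiological reading can be normalized against calibrated bounds and scaled onto a gameplay parameter:

def map_sensor_to_mechanic(raw, calib_min, calib_max, out_min=0.5, out_max=2.0):
    # Clamp the raw reading to the calibrated range so outliers do not break the mapping
    clamped = max(calib_min, min(calib_max, raw))
    # Normalize to 0..1 and scale to the game-mechanic range (e.g. an avatar speed multiplier)
    normalized = (clamped - calib_min) / (calib_max - calib_min)
    return out_min + normalized * (out_max - out_min)

# Example: a heart-rate-style reading of 72 within an assumed calibrated range of 60-100
speed_multiplier = map_sensor_to_mechanic(72, calib_min=60, calib_max=100)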

Results: Ratings data were analyzed with non-parametric techniques, while open-ended survey responses were clustered into overarching themes. When asked whether or not they preferred to play with sensors, 9 of the 10 players preferred physiological control. The players who enjoyed physiological control commented that using more than one input device would be a nice addition in an immersive game.


Discussion:
Effectiveness: The authors had the following main results:
1) The physiological augmentation of game controls provided a more fun experience than using only a traditional control scheme for game interaction.

2) Physiological control was a fun game mechanic in itself because it provided enjoyment by adding an additional challenging dimension to gameplay
3) Participants preferred physiological sensors that were directly controlled because of the visible responsiveness
4) Physiological controls worked most effectively and were most enjoyable when they were appropriately mapped to game mechanics
5) Indirect control was perceived as slow and inaccurate, and was not preferred; however, users recognized its potential to show passive reactions of the game world or as a dramatic device.
Reasons for being Interesting: This technology would lead to a completely different user experience in game playing and would thereby have a great impact on gaming. I really like how one would be able to control their progress in a game based on their ability to control their physiological reactions.
Faults: The authors have provided practical and logical solutions for the limitations of their prototype. There are no faults in their system.

Blog #17: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment

Paper Title: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment


Authors: Andrew Raij, Animikh Ghosh, Santosh Kumar and Mani Srivastava


Authors Bios: 
Andrew Raij is an Assistant Professor in the Department of Electrical Engineering at the University of South Florida. Previously, he held post-doctoral appointments at the University of Memphis and the University of Florida. He received a PhD in Computer Engineering from the University of Florida and an M.S. and B.S. in Computer Science from the University of North Carolina at Chapel Hill and Northwestern University, respectively.


Animikh Ghosh is a Junior Research Associate at SETLabs for Infosys Technologies Ltd. He is interested in wireless sensor networking, privacy risks involved in participatory sensing and database designing. 


Santosh Kumar is an Associate Professor in the Department of Computer Science at the University of Memphis. He leads the Wireless Sensors and Mobile Ad Hoc Networks Lab. He is engaged in both theoretical and systems research; on the theoretical side, he is recognized for his work on coverage and connectivity.


Mani Srivastava is on the faculty at UCLA as a Professor in the Electrical Engineering Department, with a joint appointment as Professor in the Computer Science Department. Before joining UCLA in 1997, he worked for about four and a half years in the Networked Computing Research Department at Bell Labs in Murray Hill, NJ.


Presentation Venue: CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York)


Summary:
Hypothesis: In this paper, the authors discuss how wearable sensors are revolutionizing healthcare and science by enabling the capture of physiological, psychological, and behavioral measurements in natural environments. They describe a study they conducted to assess how concerned people are about the disclosure of a variety of behaviors and contexts that are embedded in wearable sensor data. The authors analyze the data from three perspectives:
1) They assess how disclosure of different behaviors and contexts affects participants' concern levels as their stake in the data increases.

2) They evaluate the impact of applying various restrictions and abstractions on concern level.
3) They assess the impact of re-identification on the concern level as the role of the data consumer is varied from the research team to the general public
How the hypothesis was tested: 66 participants were recruited from the student population at a 20,000+ student university in the United States. The participants volunteered to join one of two groups: one with no personal stake in the data (NS) and one with a personal stake in the data (S).

For three days, Group S collected physiological, behavioral, and psychological data using the AutoSense sensor system as they went about their normal everyday life. At the end of the 3 day period, group S participants completed a privacy questionnaire assessing their concern regarding disclosure of selected behaviors and contexts with various restrictions and abstractions applied.

Group NS participants had no exposure to continuous physiological, behavioral and psychological data collection. They did not wear AutoSense and did not review any data collected by it. They only completed the same privacy questionnaire as Group S. This allowed a between-subjects comparison of concern levels between participants with no personal stake in the data, Group NS, and participants with a personal stake in the data, Group S.

Results: The authors analyzed participant data with respect to the three goals of the study discussed in the preceding section. Two-tailed t-tests were used to test for significant differences (p<0.05). In comparisons between Groups NS and S-Pre (different populations), unequal variances were assumed. In comparisons between Groups S-Pre and S-Post (same population), a paired t-test was used.
The results revealed that participants were most concerned about sharing physical and temporal context together. There was a trend of increasing concern for place and timestamp, with the lowest concern for just sharing the behavior or context, the next level of concern for reporting the place or time of the behavior or context, and the highest level of concern for reporting both the place and time of the behavior or context.
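
As a minimal sketch of the two kinds of comparisons described above (the concern-score arrays below are made up for illustration; the study's actual data are not reproduced here), the tests can be run with SciPy as follows:

from scipy import stats

ns_scores = [3, 4, 2, 5, 4]       # Group NS (no stake) - illustrative values only
s_pre_scores = [4, 5, 3, 5, 4]    # Group S before wearing AutoSense
s_post_scores = [5, 5, 4, 5, 5]   # the same Group S participants afterwards

# Different populations (NS vs. S-Pre): two-tailed t-test with unequal variances (Welch)
t_between, p_between = stats.ttest_ind(ns_scores, s_pre_scores, equal_var=False)

# Same population measured twice (S-Pre vs. S-Post): paired t-test
t_paired, p_paired = stats.ttest_rel(s_pre_scores, s_post_scores)

print(p_between < 0.05, p_paired < 0.05)   # significance at the paper's alpha of 0.05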




Discussion:
Effectiveness: The results of this experiment indicate that people may not understand the potential threats in the data unless they have a personal stake in it. The authors' experiment makes a strong case that the community should examine the disclosure of physiological context more closely. The paper suggests that the community should also examine how data consumers perceive privacy issues and what aspects of the data make it useful. Such a study would provide a better understanding of how to trade off privacy and utility for physiological, psychological, and behavioral data collected by personal sensors.
Reasons for being Interesting: The study performed by the authors in this paper is imperative considering the amount of data being shared in the technologically advanced generation today.
Faults: The paper acknowledges several limitations, but I did not really find any faults with it.

Monday, October 10, 2011

Blog #15: Madgets

Paper Title: Madgets: Actuating Widgets on Interactive Tabletops


Authors: Malte Weiss, Florian Schwarz, Simon Jakubowski and Jan Borchers


Authors Bios:
Malte Weiss is a 4th-year PhD student at the Media Computing Group of RWTH Aachen University. His research focuses on interactive surfaces and tangible user interfaces.


Florian Schwarz is a Diploma thesis student at the Media Computing Group of RWTH Aachen University. His thesis is about actuated translucent tangibles on interactive tabletops. His supervisor is Malte Weiss.


Simon Jakubowski is a Student Assistant at the Media Computing Group of RWTH Aachen University. He is currently working on:
- Madgets
- Aachener Frieden
- SLAP


Jan Borchers is a professor of Computer Science and is also the head of Media Computing Group at RWTH Aachen University. With his research group, he explores the field of human-computer interaction, with a particular interest in new post-desktop user interfaces for smart environments, ubiquitous computing, interactive exhibits, and time-based media such as audio and video.


Presentation Venue: This paper was presented at UIST '10, the 23rd Annual ACM Symposium on User Interface Software and Technology, in New York.


Summary:
Hypothesis: The authors of this paper present a system for the actuation of tangible magnetic widgets (Madgets) on interactive tabletops. Their system combines electromagnetic actuation with fiber optic tracking to move and operate physical controls. The presented mechanism supports actuating complex tangibles that consist of multiple parts. A grid of optical fibers transmits marker positions past the actuation hardware to cameras below the table. The authors introduce a visual tracking algorithm that is able to detect objects and touches from the strongly sub-sampled video input of that grid. Six sample Madgets illustrate the capabilities of their approach, ranging from tangential movement and height actuation to inductive power transfer. Madgets combine the benefits of passive, untethered, and translucent tangibles with the ability to actuate them with multiple degrees of freedom.
How the hypothesis was tested: The authors did not test their system with participants. Their goal when designing Madgets was to create devices that were flexible, lightweight, and easy to build and prototype.
The authors provide a very detailed description of the hardware setup:
Display
Actuation
Sensing
Widget Design


They also give a detailed account of their tracking process and of their exploration of the design space.


Discussion:
Effectiveness: The authors have provided various applications of Madgets for tangibles on tabletops:
General-Purpose Widgets: The authors' actuated table supports moving and configuring general-purpose Madgets, such as buttons, sliders, and knobs. A user can place a slider on the table to navigate through a video. After starting the video, the slider's knob follows the relative time position in the video, providing haptic feedback of its progress (see the sketch after this list). The LCD panel displays the corresponding timeline slider visuals beneath the physical slider.
Going 3D: Height: Actuated tangibles have been mostly limited to 2D movement. Using the authors' system, one can keep a Madget in place and lift parts of it up from the table. For example, the buttons and clutches described in the paper show how such parts can be used to produce 3D movement on the table.
Force Feedback: Besides the inherent haptic feedback of tangible controls, actuation can provide active force feedback as an additional output channel. There are different types of feedback: resistance, vibration feedback, and dynamic notches.
Water wheel Madgets: The benefits of passive controls often come at the price of a restricted design space when creating them. In this section, the authors present two concepts to transfer energy from the actuation table to a Madget to enrich its functionality.
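
Here is a hypothetical sketch of the video-slider behaviour mentioned in the first item above; it illustrates the idea only, not the authors' implementation, and the track length is an assumed value:

def slider_knob_offset(current_time_s, video_length_s, track_length_mm):
    # Map relative playback progress onto the slider's physical track
    progress = min(max(current_time_s / video_length_s, 0.0), 1.0)
    return progress * track_length_mm   # knob offset from the start of the track

# e.g. 30 s into a 120 s clip on an assumed 60 mm track: the knob is actuated to 15 mm
print(slider_knob_offset(30, 120, 60))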


Reasons for being Interesting: The prototype suggested by the authors in this paper has very cool and interesting applications. Although the prototype may cost a little more, it is very efficient. The authors have provided multiple applications for Madgets that are definitely worth the investment.
Faults: I did not find any faults with the prototype suggested in this paper. There are technical deficiencies but I believe this is a very efficient system.

Blog #14: TeslaTouch - Electrovibration for Touch Surfaces

Paper Title: TeslaTouch: Electrovibration for Touch Surfaces


Authors: Olivier Bau, Ivan Poupyrev, Ali Israr and Chris Harrison


Authors Bios: 
Olivier Bau is currently a PostDoctoral Research Scientist at Disney Research in Pittsburgh in the Interaction Design group. He received his PhD in Computer Science (HCI) at INRIA Saclay.


Ivan Poupyrev is currently a PostDoctoral Research Scientist at Disney Research in Pittsburgh in the Interaction Design group with Olivier Bau. He is a career researcher in interactive technologies and interface design. His job is to come up with new ideas, concepts, and research directions.


Ali Israr's primary research is in haptics. He focuses on understanding the science of touch, incorporating it in applications and interfaces, and then working with business units to commercialize it in emerging technologies.


Chris Harrison is a PhD student in the HCI Institute at Carnegie Mellon University. Currently, Harrison is investigating how to make small devices "big" through novel sensing technologies and interaction techniques. Designers have yet to figure out a good way to miniaturize devices without simultaneously shrinking their interactive surfaces.


Presentation Venue: This paper was presented at UIST '10, the 23rd Annual ACM Symposium on User Interface Software and Technology, in New York.


Summary:
Hypothesis: In this paper, the authors present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface.
The authors present an alternative approach for creating tactile interfaces for touch surfaces that does not use any form of mechanical actuation. Instead, the proposed technique exploits the principle of electrovibration, which allows a broad range of tactile sensations to be created by controlling the electrostatic friction between an instrumented touch surface and the user's fingers. When combined with an input-capable interactive display, it enables a wide variety of interactions augmented with tactile feedback.
The authors state that the contributions of this paper are four-fold:
1) The principles and implementation of electrovibration-based tactile feedback for touch surfaces
2) They report the results of three controlled psychophysical experiments and a subjective user evaluation, which describe and characterize user's perception of this technology
3) The authors analyze and compare their design to traditional mechanical vibrotactile displays and highlight their relative advantages and disadvantages
4) They explore the interaction design space
How the hypothesis was tested:
To investigate the tactile properties of their approach, the authors combined it with a specific input-tracking technique: a diffuse-illumination-based multitouch setup.


The authors performed multiple experiments to test their hypothesis:


Subjective Evaluation of TeslaTouch: 10 participants felt four TeslaTouch textures produced by four frequency-amplitude combinations: 80 Hz and 400 Hz, each at 80 and 115 Vpp. These frequencies were perceptually distinct as they represent the two ends of the tested frequency range.
For each texture, participants filled out a three-section questionnaire. The first section asked participants to describe each sensation in their own words. The second section introduced 11 nouns and asked participants to select the nouns that described the tactile sensations as closely as possible.
Results: Low frequency stimuli were perceived as rougher compared to high frequencies. They were often likened to "wood" and "bumpy leather" versus "paper" and a "painted wall" for higher frequency stimuli.


Psychophysics of TeslaTouch: In this experiment, the participants stood in front of the interactive touch table instrumented with TeslaTouch tactile feedback. They were asked to wear an electrostatic ground strap on their dominant forearm and slide the pad of their index finger on the interactive surface. All participants completed the detection threshold experiments before the discrimination threshold experiments. In the absolute detection threshold experiments, participants were presented with two equally sized areas, marked with the letters A and B and separated by a piece of cardboard. Participants had eight seconds to compare areas A and B and respond by clicking a mouse button.
Ten right-handed participants took part in the detection threshold experiments. They conducted between 50 and 100 trials for each of the five reference frequencies.
Results: The detection and discrimination thresholds were analyzed across frequencies using repeated measures ANOVA with Greenhouse-Geisser correction for univariate analysis.
Absolute Detection Thresholds: The absolute detection thresholds for five reference frequencies are shown in Figure 7. There was a statistically significant effect of frequency on the threshold levels indicating that the threshold levels depend on the stimulus frequency.
Frequency Discrimination Thresholds: The effect of frequency on the JND was statistically significant. A post-hoc comparison divided the frequency range into two groups.
Amplitude Discrimination Thresholds: The amplitude JNDs are presented as a function of reference frequency and are defined in dB units relative to the reference voltage. The ANOVA analysis failed to show a significant effect of frequency on the amplitude JND, indicating that the JND of 1.16 dB remains constant across all tested frequencies, thus obeying Weber's law.
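
As a quick worked check of that Weber's-law claim (assuming the usual convention that an amplitude level in dB is 20 times the base-10 logarithm of the voltage ratio), a 1.16 dB JND corresponds to roughly a 14% change relative to the reference voltage:

jnd_db = 1.16
voltage_ratio = 10 ** (jnd_db / 20)       # convert dB back to an amplitude ratio
percent_change = (voltage_ratio - 1) * 100
print(round(percent_change, 1))           # about 14.3 % above the reference voltage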


Discussion:
Effectiveness: This paper introduced TeslaTouch: a new technology for tactile display based on electrovibration. This technology can be adapted to a wide range of input tracking strategies, and can be used in many applications. Four experiments were conducted to characterize users' perception of TeslaTouch, providing a foundation for designing effective tactile sensations. A comparison between mechanical actuation and electrovibration led to an overview of the TeslaTouch applications design space.
Reasons for being Interesting: I really found this paper cool because of the unique quality of TeslaTouch: only fingers in motion are stimulated. Therefore, it allows for multitouch tactile feedback so long as at each moment only one finger is moving on the surface.

Blog #13: Combining Multiple Depth Cameras and Projectors for Interactions On, Above and Between Surfaces

Paper Title: Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces


Authors: Andrew D. Wilson and Hrvoje Benko


Author Bios: 
Andrew D. Wilson is a senior researcher at Microsoft. Applications of sensing techniques to enable new styles of HCI is one of his research interests.


Hrvoje Benko is a researcher at the Adaptive Systems and Interaction Group at Microsoft Research. His research interests are related to novel surface computing technologies and their impact on HCI.


Presentation Venue: This paper was presented at UIST '10, the 23rd Annual ACM Symposium on User Interface Software and Technology, in New York.


Summary:
Hypothesis: In this paper, the authors introduce LightSpace, an office-sized room instrumented with projectors and recently available depth cameras. LightSpace draws on aspects of interactive displays, augmented reality, and smart rooms. For example, the user may touch to manipulate a virtual object projected on an un-instrumented table, "pick up" the object from the table by moving it with one hand off the table and into the other hand, see the object sitting in their hand as they walk over to an interactive wall display, and place the object on the wall by touching it.
The authors explore the unique capabilities of depth cameras in combination with projectors to make progress towards a vision in which even the smallest corner of our environment is sensed and functions as a display. With LightSpace, they emphasize the following themes:
Surfaces everywhere
The room is the computer
Body as display
How the hypothesis was tested: The LightSpace prototype was showcased at a three day demo event to an audience of more than 800 people. This event tested the responsiveness and robustness of the system, and this was how the authors received valuable feedback.
Results: They noticed that more users in the space resulted in more mesh processing. The refresh rate fell below that of the camera (30 Hz) when two or more users were present in the space.
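
A back-of-the-envelope sketch of that slowdown (the per-frame costs below are assumed values for illustration, not measurements from the paper): if mesh processing adds a roughly fixed cost per tracked user, the achievable refresh rate drops below the camera's 30 Hz once a second user enters the space.

CAMERA_HZ = 30.0
BASE_FRAME_MS = 20.0      # assumed fixed per-frame cost (illustrative)
PER_USER_MESH_MS = 10.0   # assumed mesh-processing cost per tracked user

def refresh_rate_hz(num_users):
    # Total frame time grows with each tracked user's mesh processing
    frame_ms = BASE_FRAME_MS + PER_USER_MESH_MS * num_users
    return 1000.0 / frame_ms

for users in (1, 2, 3):
    # The system can never exceed the camera's own 30 Hz capture rate
    print(users, round(min(refresh_rate_hz(users), CAMERA_HZ), 1), "Hz")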


Discussion:
Effectiveness: The users had no trouble using the interactive surfaces or performing through-body transitions to transfer objects; however, picking up and holding objects in the hand required some practice. Occasionally an interaction would fail to be detected because another user or the user's own head occluded the camera's view of the hands or body. There were also some slowdowns when multiple users were present in the space. This showed that the prototype was not very efficient.
Reasons for being Interesting: This prototype could be used by affluent individuals or groups who want an interactive system that allows users to interact on, above, and between interactive surfaces in a room-sized environment. I did not really find this prototype very interesting, but it definitely has its uses.
Faults: The generally available physics engines do not support animated meshes except in limited cases such as the simulation of cloth. Hence, this limits the prototype's effectiveness.