Wednesday, December 14, 2011

Blog #32: Taking advice from intelligent systems: the double-edged sword of explanations

Paper Title: Taking advice from intelligent systems: the double-edged sword of explanations

Authors: Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross and Daniel Gruen

Author Bios:
Kate Ehrlich: is a Senior Technical Staff Member in the Collaborative User Experience group at IBM Research, where she uses Social Network Analysis as a research and consulting tool to gain insights into patterns of collaboration in distributed teams.

Susanna Kirk: earned her master's degree in Human Factors in Information Design. Her coursework was in user-centered design, prototyping, user research, and advanced statistics.

John Patterson: is a Distinguished Engineer (DE) in the Collaborative User Experience Research Group.

Jamie Rasmussen: joined the Collaborative User Experience group in 2007 and is working with John Patterson as a part of a team that is exploring the notion of Collaborative Reasoning.

Steven Ross: is presently working in the area of Collaborative Reasoning, using semantic technology to help individuals within an organization think together more effectively, and to enable them to discover and benefit from existing knowledge within the organization in order to avoid duplication of effort.

Daniel Gruen: is currently working on the Unified Activity Management project.



Presentation Venue: IUI '11, Proceedings of the 16th International Conference on Intelligent User Interfaces. ACM, New York, NY, USA.

Summary:
Hypothesis: By investigating intelligent systems and the justifications they give for their recommendations, the authors hope to understand how explanations affect users' accuracy, so that users will not be "led astray" by incorrect recommendations as often as they currently are.
How the hypothesis was tested: The authors conducted a study of how users respond to recommendations made by an intelligent system, and how that response depends on whether the recommendation is correct. The study was conducted with analysts engaged in network monitoring, and the authors used a software tool called NIMBLE to collect data.
Results: Users performed slightly better with a correct recommendation than without one. The results indicated that justifications benefit users when a correct response is available. When no correct response is available, neither suggestions nor justifications made a difference in performance; most of the analysts seemed to discard the recommendations anyway.
In a separate analysis of users' reactions, it was found that users typically follow the recommendations given, and that recommendations strongly influence users' actions.

Discussion:
Effectiveness: Although this paper has useful applications, I am a little skeptical of the system's accuracy. This technology could be applied to either extremely specific fields or extremely general ones: in a specific field the recommendations would be extremely accurate because less training data is required, while in a general field the recommendations would be so broad that it would be acceptable for users to ignore them sometimes.
