- 2018
The Risks of Discretization: What Is Lost in (Even Good) Levels

Keywords: automation, cognitive models, level of automation, human–automation interaction

Abstract: In this reaction to David Kaber's article in this volume, the author points to an inherent problem in applying any "levels" scheme to the continuous, multidimensional space of human–automation relationships and behaviors. Discretization inherently carves a continuous, analog space into discrete blocks that, the claim goes, can be treated homogeneously. Using a common automated e-mail filtering system as a counterexample, the author shows how applying a single "level-of-automation" category to the whole system (or even to the information-processing stages of components within it) misrepresents and suppresses details about what the system is actually doing and how it interacts with human users. Discretization can be highly productive when it pares away confusing detail that distracts from underlying explanatory relationships, but, the author argues, not enough is yet known about human–automation interaction in all its variability to suppress detail effectively. Thus the better models Kaber calls for are needed before an effective levels-of-automation scheme can be created, not vice versa.