Thursday, June 11, 2009

Change Blindness and Its Implications for Complex Monitoring and Control Systems Design and Operator Training

Durlach, P. 2004. Change blindness and its implications for complex monitoring and control systems design and operator training. Hum.-Comput. Interact. 19, 4 (Dec. 2004), 423-451. DOI= http://dx.doi.org/10.1207/s15327051hci1904_10

Summary:

Durlach, of the Army Research Institute, discusses various aspects of change blindness's effects on important monitoring systems, such as air traffic control.

One factor mentioned in the study is the length of the interruption between screen updates (e.g., a blank or a distraction): the longer it is, the more likely change blindness occurs. When there is essentially no blank interval, changes are detected within 1-2 alternations; when the blank screens last ~80 ms, detection takes about 17 alternations.
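
To make the flicker paradigm behind these numbers concrete, here is a minimal sketch of the alternation loop, assuming hypothetical show() and report_change() callbacks and timing values of my own choosing (this is an illustration, not code from the paper):

    import time

    def flicker_trial(show, report_change, original, modified, blank,
                      isi_ms=80, display_ms=250, max_alternations=60):
        # Alternate two nearly identical images, separated by a blank
        # inter-stimulus interval (ISI), until the viewer reports the change.
        # Returns the number of alternations needed, or None if never detected.
        for alternation in range(1, max_alternations + 1):
            for image in (original, modified):
                show(image)
                time.sleep(display_ms / 1000.0)
                if isi_ms > 0:
                    show(blank)                  # the blank masks the motion transient
                    time.sleep(isi_ms / 1000.0)
            if report_change():
                return alternation
        return None

With isi_ms near zero the change effectively pops out within 1-2 alternations; with isi_ms around 80 it can take roughly 17.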

Other factors that affect change blindness include: distractions, discriminability (red vs. burgundy, red vs. white), categorization (tank vs. truck), biased serial search (rescanning the same areas), amount of information, external attention capture, prior learning from a task (repeated or predictable change), meaningfulness of the change, and the user's expertise in the change area.

To help combat change blindness, Durlach proposes reducing screen clutter, making items easily discriminable, and training users on the system.


Discussion:

There's no silver bullet to combat change blindness and inattentional blindness, and Durlach recognizes this. Her suggestions make sense, and she has a great list of pros and cons to accompany them. As tasks become more complex, there is always a trade-off in making software that can handle the complexity while minimizing potential user errors.

Beyond Modularity

Karmiloff-Smith, A. Beyond Modularity: A Developmental Perspective on Cognitive Science. MIT Press. November, 1992.

Summary:

Piaget's theory of child development describes how children's abilities develop as their minds mature with age. He observed transitions and major milestones as this occurs, such as when children learn object permanence.

Karmiloff-Smith presents research challenging the assumption that development happens in such discrete steps. Instead, the author shows that many human functions (language, math, physics, drawing) are innate in very young children, even before they are verbal. For instance, infants look longer at images that adults would also consider novel (such as objects that do not obey a perceived grouping, p. 68).

Discussion:

This book was really well written, and I enjoyed the break from regular computer science reading for an almost purely psychological book. The idea that drawing is innate in humans reaffirms our lab's claim that sketching is "natural and intuitive".

Drawing and the Non-verbal Mind

Lange-Küttner, C. and Vinter, A. (Eds.). "Drawing and the Non-Verbal Mind: A Life-Span Perspective." Cambridge University Press, September 15, 2008.

Summary:

The editors discussed hundreds of experiments dealing with drawings, most focused on children.

Some interesting points of note are:
  1. Young children (3-4 yrs) often cannot recognize their own drawings after some time has passed since they drew them. (p. 55)
  2. Children often have a "constant depiction strategy", such as drawing everything as a sunburst or as a scribbled dot. The depiction looks closer to the actual object with age. (p. 64)
  3. A drawing can be affected by how the question is phrased and by how the child interprets the objects, e.g., as individual objects or as a group. Grouping of objects happens more often if the objects are similar: "two circles" vs. "circle and triangle". (pp. 165-173)
  4. People suffering from diseases such as semantic dementia often forget the distinguishing characteristics of an object they are asked to draw after a short period of time (e.g., a rhino becomes a generic animal, p. 286).

Discussion:

The findings presented are too numerous to list, so I simply mentioned the ones I found most interesting. Actual child or mental development would be difficult to measure using sketch recognition techniques (the drawings are simply too abstract). If I ever work with children, items 2 and 3 will probably be helpful, either for distinguishing between children or simply for phrasing the questions posed to them.

Brain Mechanisms of Vision

Hubel DH, Wiesel TN. Brain Mechanisms of Vision. Scientific American. 1979 Sep; 241(3):150-62

Summary:

The brain's primary visual cortex processes images in a modular, distorted way. The rods and cones in the eyes send signals from the retina to the geniculate cells in the brain, which then relay the message to the visual cortex. The geniculate input arrives mostly in a layer called layer IV, whose cells are relatively unsophisticated and receive the bulk of the visual input.

Cells outside of layer IV have "orientation specificity": a bar of light falling at a certain orientation will activate some cells and have no effect on others. Each cell's tuning appears to span roughly 10-20 degrees; beyond that, the response is lessened or abolished.

At the time (1979), there was no evidence that the orientation-specific cells had anything to do with visual perception.

As electrical signals move into more complex layers of the visual cortex, some patterns emerge. Cells close to one another often have the same optimal stimulus orientation, and changes in orientation happen in small increments: a shift of about 25-50 micrometers between cell groups maps to a change of roughly 10 degrees in preferred orientation, with occasional reversals in direction.
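
To make these numbers concrete, here is a toy model of my own (not Hubel and Wiesel's); the Gaussian tuning curve and the 37.5-micrometer step size are assumptions chosen to roughly match the figures above:

    import math

    def orientation_response(bar_deg, preferred_deg, tuning_width_deg=15.0):
        # Toy Gaussian tuning curve: response falls off as the bar's orientation
        # departs from the cell's preferred orientation (width roughly matches
        # the 10-20 degree range above). Orientations wrap around at 180 degrees.
        diff = abs(bar_deg - preferred_deg) % 180.0
        diff = min(diff, 180.0 - diff)
        return math.exp(-(diff ** 2) / (2.0 * tuning_width_deg ** 2))

    def preferred_orientation(position_um, deg_per_step=10.0, step_um=37.5):
        # Toy orientation map: preferred orientation shifts ~10 degrees for every
        # 25-50 micrometers across the cortex (37.5 um used here), wrapping at 180.
        return (position_um / step_um) * deg_per_step % 180.0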


Discussion:

Really interesting information on the structure and hierarchy of the primary visual cortex. Although the orientation information did not prove that the brain recognizes shapes using features such as line orientations, other papers citing this one might. I'll have to find some...

Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human–Computer Interface Design

Varakin, D. A., Levin, D. T., and Fidler, R. 2004. Unseen and unaware: implications of recent research on failures of visual awareness for human-computer interface design. Hum.-Comput. Interact. 19, 4 (Dec. 2004), 389-422. DOI= http://dx.doi.org/10.1207/s15327051hci1904_9

Summary:

The authors review research on inattentional blindness and change blindness and provide anecdotal examples of how each can affect computer interfaces:
  • Inattentional Blindness - the user is unaware of a change occurring within the same, continuous view
  • Change Blindness - the user is unaware of a change occurring across multiple views

Change blindness: past, present, and future

Daniel J. Simons, Ronald A. Rensink, Change blindness: past, present, and future, Trends in Cognitive Sciences, Volume 9, Issue 1, January 2005, Pages 16-20, ISSN 1364-6613, DOI: 10.1016/j.tics.2004.11.006. (http://www.sciencedirect.com/science/article/B6VH9-4DXTHVD-2/2/d3451247e53c70b0b390450a275a475a)

Summary:

The authors provide an overview of the current understanding of change blindness, such as research showing that it often occurs during eye movements or when a viewer's attention wanes.

The main contribution of the paper is the argument that change blindness research does not confirm the idea that visual representations of a scene are 'sparse'. The authors propose four requirements that change blindness evidence must meet before it can reaffirm the idea of sparse representations:
  1. Evidence must eliminate the possibility that detailed visual representations exist but fade from memory before they can be compared with others to perceive changes
  2. Evidence must eliminate the possibility that detailed visual representations exist, but reside in a different visual processing area (of the brain?) that cannot be compared with the currently viewed representation for change detection
  3. Evidence must eliminate the possibility that any stored detailed representation is in a format that cannot be compared with another representation
  4. Evidence must eliminate the possibility that both the stored detailed representation and the viewed representation can be compared, but are not for some reason

Discussion:

The paper's final thoughts on how representations are stored do not concern me. Instead, this paper has a broad bibliography of change blindness research that should help me look for related work.

Sunday, February 22, 2009

CogSketch: Open-domain sketch understanding for cognitive science research and for education

Summary

The paper presents CogSketch, a sketch recognition system wrapped in psychological terminology. Users draw single-stroke glyphs that can be containment glyphs (symbols) or connection glyphs (relationships). Glyphs are recognized against a focused knowledge base that can be specified by the user, and inter-glyph relationships are computed using RCC-8.
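
For reference, RCC-8 distinguishes eight qualitative relations between regions (disconnected, externally connected, partial overlap, equal, tangential and non-tangential proper parts, and their inverses). The sketch below is my own simplification, using axis-aligned bounding boxes instead of CogSketch's actual ink regions, just to illustrate what computing such a relation between two glyphs might look like:

    from dataclasses import dataclass

    @dataclass
    class Box:
        # Axis-aligned bounding box of a glyph (a simplification; CogSketch
        # reasons over the glyphs' actual ink).
        x1: float
        y1: float
        x2: float
        y2: float

    def rcc8(a: Box, b: Box) -> str:
        # Classify the RCC-8 relation between two boxes.
        if a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1:
            return "DC"      # disconnected
        if a.x2 == b.x1 or b.x2 == a.x1 or a.y2 == b.y1 or b.y2 == a.y1:
            return "EC"      # externally connected (boundaries touch)
        if (a.x1, a.y1, a.x2, a.y2) == (b.x1, b.y1, b.x2, b.y2):
            return "EQ"      # identical regions
        a_in_b = b.x1 <= a.x1 and a.x2 <= b.x2 and b.y1 <= a.y1 and a.y2 <= b.y2
        b_in_a = a.x1 <= b.x1 and b.x2 <= a.x2 and a.y1 <= b.y1 and b.y2 <= a.y2
        touching = a.x1 == b.x1 or a.x2 == b.x2 or a.y1 == b.y1 or a.y2 == b.y2
        if a_in_b:
            return "TPP" if touching else "NTPP"    # a is a (tangential) part of b
        if b_in_a:
            return "TPPi" if touching else "NTPPi"  # b is a (tangential) part of a
        return "PO"          # partial overlap

    # e.g., rcc8(Box(0, 0, 10, 10), Box(2, 2, 5, 5)) returns "NTPPi"
    # (the second glyph lies well inside the first)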

On the interface side, the system contains layers that have modes.

Lastly, CogSketch includes simulations that can be run. The two simulations are analogies (A is to B as C is to ?) and spatial language learning (inside, above, below, etc.).