Multi-Event Scene Perception at an Ecologically Representative Time Scale

Research on scene perception is still in its infancy and has generally focused on convergent processes, in which scene information is integrated to arrive at a single label denoting a category name or an animal-presence decision. Also important in scene perception, however, is the divergent perceptual ability of perceiving multiple events. Little is known about this ability in the context of continuous scene perception. Further, theories make different predictions: attentional set theory predicts that switching out of a single task set is costly, whereas approaches emphasizing efficient bottom-up processing are consistent with efficient time sharing between multiple events. We developed a continuous event paradigm involving a 60 sec event stream with an average of 12 simultaneously active events. The events were asynchronous and extended in time (4 sec on average), like events in a typical real-world scene. Observers could time share between events, as in real-world perception. Experiment 1 examined the cost of switching between multiple event types, relative to single-event conditions. The hit rate was 78.4% for single tasking and fell to 64.3% when switching between multiple event types. This 14.1% switching cost was reliable (and consistent with attentional set theory) but fairly modest in size; indeed, multiple event perception (MEP) was fairly efficient. Is there a basis for reasonably efficient MEP? A promising hypothesis comes from a principle that pervades designed spaces: similar functions are grouped together. Is MEP more efficient when event types are organized by location? Experiments 2 and 3 provided strong positive evidence, showing that the cost of MEP (relative to single tasking) is much higher (34.1% and 32.7%, respectively) when event types are distributed throughout space rather than organized by location. MEP is a significant and theoretically interesting aspect of scene perception.

Citation / Publisher Attribution

Journal of Vision, v. 10, issue 7, art. 1257