CHAPTER 3: PERCEPTION

CHAPTER SUMMARY
Visual perception goes beyond the visual input itself: the input is interpreted by a network of detectors influenced by Gestalt principles, context, and priming, as well as by the guidance of individual features and the overall configuration. Evidence from neuroscience demonstrates that the detection of features is separate from the processes needed to assemble those features into a larger, complex whole, and that feature detection is what initiates recognition. Tachistoscopic studies show that high-frequency words have lower recognition thresholds than low-frequency words, and that repetition priming lowers thresholds further. These studies also document the word-superiority effect: letters are more readily perceived within words than in isolation. In addition, well-formed nonwords are more readily perceived than letter strings that do not conform to the rules of normal spelling. Another reliable pattern is that recognition errors, when they occur, are quite systematic, with the input typically perceived as being more regular than it actually is. These findings indicate that recognition is influenced by the regularities that exist in our environment. Top-down influences on recognition tell us that object recognition is not a self-contained process; knowledge external to object recognition is imported into, and clearly shapes, the process.
These findings can be understood in terms of a network of detectors. Each detector collects input and fires when the input reaches a threshold level. A network of these detectors can accomplish a great deal: it can interpret ambiguous inputs, recover from its own errors, and make inferences about barely viewed stimuli. The feature net seems to "know" the rules of spelling and "expects" the input to conform to those rules. However, this knowledge is distributed across the entire network and emerges only through the network's parallel processing. This setup leads to enormous efficiency in our commerce with the world because it allows us to recognize patterns and objects with relatively little input and under highly diverse circumstances. But these gains come at the cost of occasional errors. This trade-off may be necessary, though, if we are to cope with the informational complexity of our world. A feature net can be implemented in different ways, with or without inhibitory connections, for example. With some adjustments, the net can also recognize three-dimensional objects. However, some stimuli (faces, for example) probably are not recognized through a feature net but instead require a different sort of recognition system, one that is sensitive to the relationships and configurations within the stimulus input.
Word Recognition: Features
- We have a good idea of what features make up words and letters (unlike objects): letters make up words, and a limited set of line segments (verticals, horizontals, simple curves, etc.) makes up letters.
How to study reading?
- People are usually too good at reading to see any effects.
- Tachistoscope: a device used to present stimuli for very brief, precisely timed exposures in order to study recognition.
How to measure reading? (a toy threshold sketch appears at the end of this section)
- Recognition threshold (ms): the presentation duration required in order to read a word.
- Percentage recognized.
What makes word recognition easy or hard?
- Frequency: high-frequency words have a lower recognition threshold. People are faster to recognize "happy" than "harpy"; it is not that people are unfamiliar with "harpy," but that it occurs far less often (the same contrast holds for "happy" versus lower-frequency words like "hippy").
- Repetition priming.
- Context: recognition involves both bottom-up processing (putting features together) and top-down processing (context).
- Word superiority.
- Well-formedness.
- Overregularization.
The Word-Superiority Effect
- Letters are easier to perceive in the context of a word than in isolation: you recognize a letter better when it is embedded among other letters than when it appears alone, whether or not the string is a real word.
- However, the closer the string is to being a "real" (well-formed) word, the lower the recognition threshold.
- When people make reading errors, they tend to misread less common sequences as more common ones: "tpum" will be misread as "trum" or sometimes "drum" (overregularization errors).
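A minimal sketch (in Python) of the threshold-measurement idea above, assuming an ascending series of presentation durations; the durations, the example word, and the report() stand-in are illustrative assumptions, not data from the studies in these notes:

DURATIONS_MS = [10, 15, 20, 25, 30, 40, 50, 60]   # ascending tachistoscopic exposures

def report(word, duration_ms):
    # Stand-in for a participant's report after a brief exposure; here we simply
    # pretend the word is read correctly only at exposures of 30 ms or longer.
    return word if duration_ms >= 30 else "?"

def recognition_threshold(word):
    # Shortest presentation duration at which the word is reported correctly.
    for duration in DURATIONS_MS:
        if report(word, duration) == word:
            return duration
    return None   # never recognized within the tested range

print(recognition_threshold("happy"))   # -> 30 under the toy report() above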
Word Recognition: Feature Network Models
Feature net: a network of detectors (a toy detector sketch follows this list).
- Each detector has an activation level at every moment: how active it is right now. Input raises the activation level; one strong input, or a series of weaker ones, can produce the same increase.
- Each detector has a response threshold: the activation level at which it fires and activates the other detectors it is connected to. A detector with a high threshold needs more input before it will fire.
- Each detector has its own baseline (resting) activation level: its activation prior to any input. Some detectors have higher baselines than others, so some detectors are easier to push up to threshold (and fire) than others.
- Detectors for higher-frequency words may have higher baseline levels (making it easier to reach threshold even with brief presentations), or equivalently lower response thresholds; either way, frequent words are recognized more easily.
- Repetition priming temporarily raises a word detector's activation: after responding to a word, the detector is still above its baseline when the same word is presented again, so it takes less additional input, and less time, for the word detector to reach threshold.
- Bigram detectors (detectors for letter pairs) account for the phenomenon of well-formedness.
- Overregularization errors: irregular strings are often misperceived as more regular ones.
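A minimal sketch of the detector mechanics just described, assuming simple numeric activation levels; the baseline values, threshold, input sizes, and priming boost are illustrative assumptions, not values from the text:

class Detector:
    def __init__(self, name, baseline=0.0, threshold=1.0):
        self.name = name
        self.baseline = baseline      # resting activation level
        self.threshold = threshold    # activation needed to fire
        self.activation = baseline    # current activation level

    def receive(self, amount):
        # Accumulate input; one strong input or several weaker ones both work.
        self.activation += amount
        return self.fired()

    def fired(self):
        return self.activation >= self.threshold

    def reset(self, priming_boost=0.0):
        # Return toward baseline; repetition priming leaves a temporary boost.
        self.activation = self.baseline + priming_boost

# High-frequency word detector: higher baseline, so a brief input is enough.
happy = Detector("HAPPY", baseline=0.7)
harpy = Detector("HARPY", baseline=0.2)

brief_input = 0.4                   # a brief tachistoscopic presentation
print(happy.receive(brief_input))   # True  -> recognized
print(harpy.receive(brief_input))   # False -> needs more input to reach threshold

# Repetition priming: after firing, the detector stays above baseline for a while,
# so the same brief input now pushes it over threshold.
harpy.reset(priming_boost=0.5)
print(harpy.receive(brief_input))   # True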
Chapter 3: Attention
Inattentional blindness: the failure to notice the existence of an unexpected item.
Change blindness: the failure to notice an obvious change.
Change blindness demonstration (Simons & Levin, 1998)
- An experimenter asked a person on the Cornell University campus for directions; a group of confederates carrying a door walked between them mid-conversation, and a different person took the experimenter's place. Many people failed to notice the switch.
Processing Capacity and Selective Attention
- Bottleneck metaphor: information processing narrows somewhere along the way. We can take in a lot of information, but at what point does processing slow down, and which information gets ignored or attended? (A toy contrast of the three filter accounts below follows this section.)
Broadbent, Early Selection (1958)
- The bottleneck occurs at or prior to the stage of perceptual analysis.
- Early sensory information is filtered on the basis of physical characteristics; the filtering is unconscious.
- The theory is incomplete, because salient information in the unattended channel can still get through even though that channel receives little perceptual analysis.
Treisman (1964), Attenuation Theory
- Filtering is still based on physical characteristics, but the unattended message is only weakened (attenuated), not blocked entirely.
- Because well-primed words can break through the attenuated channel, listeners shadowing one channel will sometimes follow a sentence that jumps to the unattended channel.
Deutsch & Deutsch (1963), Late Filter
- All sensory/perceptual information goes through to analysis, but a late filter selects which input controls the response; the rest is quickly forgotten.
MacKay (1973)
- Recall of the meaning of shadowed sentences is biased by unattended words, e.g., the attended channel carries an ambiguous sentence containing the word "bank" while "river" or "money" plays in the unattended ear; how listeners later interpret "bank" is biased toward the unattended word.
Span of apprehension (sensory store, iconic memory)
- Sperling (1960): with whole report, people could recall only about 4.5 letters from a briefly flashed grid; partial report showed that iconic memory briefly holds much more.
Dichotic Listening Task (Cherry, 1953)
- Listeners can tell whether the voice in the unattended channel changes (they register the physical properties of that channel, but not its semantics).
- When shadowing the attended channel, they could not report the content of the unattended channel.
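A minimal sketch contrasting the three filter accounts above (early selection, attenuation, late selection) as different amounts of "gain" applied to the unattended channel before meaning is analyzed; the words, salience values, gain settings, and threshold are illustrative assumptions:

THRESHOLD = 0.5   # activation a word needs in order to reach semantic analysis

def reaches_meaning(salience, gain):
    # A word from the unattended channel is semantically analyzed only if its
    # (possibly attenuated) strength crosses threshold.
    return salience * gain >= THRESHOLD

# Salience stands in for how strongly primed a word is; "your name" is highly primed.
unattended = {"river": 0.6, "your name": 0.9}

# Broadbent (early selection): the unattended channel is blocked before analysis.
print([w for w, s in unattended.items() if reaches_meaning(s, gain=0.0)])   # []

# Treisman (attenuation): the unattended channel is weakened, not blocked;
# only highly primed words (e.g., your own name) break through.
print([w for w, s in unattended.items() if reaches_meaning(s, gain=0.6)])   # ['your name']

# Deutsch & Deutsch (late selection): everything is analyzed; selection happens
# later, at the response stage.
print([w for w, s in unattended.items() if reaches_meaning(s, gain=1.0)])   # ['river', 'your name']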
Selective Reading Experiment (Neisser, 1969)
Cocktail Party Effect (Moray, 1959)
- Salient information can pass through the unattended channel (e.g., your own name: it is well primed and has a high baseline activation, so it is easy for your "name" detectors to fire).
Different Types of Priming (Posner & Snyder, 1975)
- Mental chronometry (reaction-time measurement) is used to compare expectation-based priming (top-down) and repetition priming. (A worked cost/benefit example follows this section.)
- Low-validity condition: the letter cue matches the test stimulus on only 20% of trials, so there is no basis for expectation; there is little cost when the cue is misleading.
- High-validity condition: the letter cue matches the test stimulus on 80% of trials, so the expectation is usually correct. Priming produces a larger reaction-time benefit than in the low-validity condition, but there is a huge cost in responding when the prime is misleading.
- Repetition priming warms up one detector but has no effect on any other detector, and it carries no cost.
- Expectation-based priming produces a benefit when you get what you expect, but because mental tasks have a "cost" (they draw on limited resources), it also produces a cost when the expectation is wrong.
Unilateral neglect syndrome
- Caused by damage to the right parietal lobe, usually from a stroke; patients neglect the left side of space.
- The deficit is largely space-based, but there is an interesting interaction with object-based attention.
Filtering is tuned to specific distractors
- Practice at ignoring is specific to the context and to the specific distractors; it involves actively inhibiting or blocking the distracting information.
Negative priming (so called)
- If you ignore an item on one trial, you are subsequently slower to respond to that same item on the next trial, e.g., trial 1: a green J and a red E (identify the green letter, ignore the red E); trial 2: a green E and a red P; identifying the E, which was just ignored, is slower.
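A minimal worked example of the Posner & Snyder cost/benefit logic above, assuming hypothetical reaction times (benefit = neutral RT minus primed RT; cost = misled RT minus neutral RT); the millisecond values are illustrative assumptions, not data from the study:

def benefit(rt_neutral, rt_primed):
    # Speed-up when the prime matches the test stimulus.
    return rt_neutral - rt_primed

def cost(rt_neutral, rt_misled):
    # Slow-down when the prime points the wrong way.
    return rt_misled - rt_neutral

rt_neutral = 500   # uninformative cue

# Low-validity condition (cue correct only 20% of the time): no basis for expectation,
# so the modest benefit reflects repetition priming alone, and misleading cues cost little.
print(benefit(rt_neutral, rt_primed=470), cost(rt_neutral, rt_misled=505))   # 30 5

# High-validity condition (cue correct 80% of the time): expectation adds to the benefit,
# but a misleading cue now produces a large cost.
print(benefit(rt_neutral, rt_primed=430), cost(rt_neutral, rt_misled=570))   # 70 70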
Priming Metaphor
- We promote/enhance/select the inputs that are important; we do NOT do this for the distractors.
- Think of this in terms of priming the detectors: the selected input will be primed; distractors will not be primed, unless they already ARE primed (e.g., salient items).
Resources are limited
Task specificity: there are task-general and task-specific resources.
Evidence for task-SPECIFIC resources
- Shadow words presented to one ear while simultaneously memorizing: pictures presented visually (easiest), words presented visually (harder), or words presented in the other ear (hardest).
- Tasks interfere with each other only if they use the same resources. (How many words are in this sentence? How many of you started to use your fingers?)
Evidence for task-GENERAL resources
- Increasing the demands of one task past some critical point will reduce performance on the other even when the tasks are different (e.g., verbal and spatial).
- Doing two tasks at once is usually where the limit shows up; sufficiently demanding tasks interfere even when they are dissimilar.
Brooks (1968)
- Perform a verbal task requiring a verbal or a spatial response; perform a spatial task requiring a verbal or a spatial response. Interference is greatest when the task and the response draw on the same (verbal or spatial) resources.
The Psychological Refractory Period (PRP)
- Dual-task paradigm: typical data show that responses to the second stimulus slow down as the two stimuli are presented closer together in time. The effect illustrates the unitary-tool (single central bottleneck) hypothesis. (A toy bottleneck model appears at the end of this section.)
Practice and Automaticity
- Practice turns effortful, serial processing into parallel, automatic processing.
- Consistent mapping practice: targets and distractors are drawn from different sets of letters, so search can become automatic with practice.
- Varied mapping practice (similar in spirit to negative priming): targets and distractors are drawn from the same set of letters, so any letter can be a distractor on one trial and a target on the next, and search stays effortful.
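A minimal sketch of the central-bottleneck (unitary tool) account of the PRP effect mentioned above, assuming each task has a perceptual, a central, and a response stage and that only one central stage can run at a time; the stage durations and the delays between the two stimuli (SOAs) are illustrative assumptions:

PERCEPTUAL, CENTRAL, RESPONSE = 100, 200, 100   # per-task stage durations (ms), assumed

def rt2(soa):
    # Reaction time to the second stimulus, presented `soa` ms after the first.
    central1_ends = PERCEPTUAL + CENTRAL                     # task 1 releases the bottleneck here
    central2_starts = max(soa + PERCEPTUAL, central1_ends)   # task 2's central stage may have to wait
    response2_ends = central2_starts + CENTRAL + RESPONSE
    return response2_ends - soa                              # measured from stimulus 2 onset

for soa in (50, 150, 300, 600):
    print(soa, rt2(soa))
# Short SOAs force task 2's central stage to wait for task 1, so RT2 grows as the SOA
# shrinks; at long SOAs RT2 settles at the single-task value (400 ms here).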