Chapter 12: Depth and Stereopsis

Binocular vision
- 2 eyes at 2 horizontal locations
- produces cyclopean perception (a unified view)

A. Extra-Personal Space
- object direction and distance

1. Spatial Frames of Reference
- a set of axes describes position and placement
- allocentric frame = frame of reference independent of the viewer
- egocentric frame = frame of reference dependent on the viewer -> egocentric direction
- reference point = visual egocentre, the "cyclopean eye" (midpoint between the 2 eyes)

2. Head + Body Movements in Space

Human Dynamics

Proprioception
- detection of muscle contraction and movement around a joint
- efference copy: perceptual centers of the brain receive a copy of motor commands

Vestibular System
- the "balance organ" in the inner ear - detects head and body movement
  1) semicircular canals (rotation)
  2) otoliths (linear movement: forward, backward, vertical)
- based on fluid movement that stimulates hair cells (neurons) in those 2 regions
- the brain knows that we moved, not the world, so we perceive the world as stable

3. Absolute Depth

Optical cue (retinal image size)
- object distance changes retinal image size

Accommodation
- reflex that changes the optical power of our eyes
- near object = increase in optical power, to keep the image sharp
- achieved by increasing the thickness of the crystalline lens -> controlled by the ciliary muscle
- only useful for objects 3 m or closer

Vergence (eye movements)
- the eyes rotate horizontally in their sockets so that the object falls on each fovea
- convergence: both eyes roll inward (closer object)
- divergence: both eyes roll outward (farther object)

Afference + Efference - both are involved:
- efference copies may be relayed to frontal cortex
- afferent signal: signal sent from the periphery (the eye muscles) to the brain
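The vergence geometry above can be sketched numerically. This is a minimal sketch, not from the notes: the 63 mm interocular distance and the function name are illustrative assumptions.

```python
# Minimal sketch of vergence geometry (assumed 63 mm interocular distance):
# nearer fixation targets demand a larger convergence angle.
import math

def vergence_angle_deg(distance_m, interocular_m=0.063):
    """Total convergence angle (degrees) for a target straight ahead."""
    return math.degrees(2 * math.atan((interocular_m / 2) / distance_m))

near = vergence_angle_deg(0.25)  # reading distance -> ~14 deg
far = vergence_angle_deg(10.0)   # distant object -> well under 1 deg
```

This also shows why vergence only signals absolute depth for near objects: beyond a few metres the angle barely changes with distance.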

B. Monocular Depth Perception & Cues
- relative depth: where objects are located with respect to each other along the 3rd dimension

1. Pictorial Cues

- stationary information in 2D scenes/pictures

Occlusion
- if an object covers another, the covered object is farther away
- not innate (learned at 5-7 months)

Relative Size
- the smaller an object appears, the farther away it is perceived to be

Texture Gradient
- foreground texture is coarse; texture becomes finer with distance

Linear Perspective
- objects become smaller as they recede into the distance

Aerial Perspective
- hazy quality of distant environmental objects, due to light scattering in the atmosphere

Shading and Shadows
- light reflection off curved surfaces
- "shape from shading" (shading gives a shape cue)

Image Blur
- out-of-focus objects
- as blur increases, relative depth in relation to the focused object increases

2. Dynamic Cues

Kinetic Depth
- watching a moving object while you are stationary
- closer objects appear to move faster

Motion Parallax
- moving viewer, stationary objects
- optic flow (relative movement of passing objects)
- the change in an object's apparent direction of movement caused by self-motion is "motion parallax"

- for a moving observer, objects behind the fixation point appear to move in the same direction as the observer, while objects in front appear to move opposite to it
- the magnitude of apparent movement depends on how far the object is from the fixation point

Optic Flow (James Gibson)
- motion gradients caused by looming and expansion give depth information
- also involved in balance and in judging heading during movement

Accretion and Deletion
- accretion: gradual appearance of objects from behind an occluding edge
- deletion: gradual disappearance of objects behind an occluding edge
- objects that disappear behind a surface (e.g. a curtain) are farther away than it
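The parallax magnitudes described above can be sketched with the standard small-angle approximation; the walking speed and object distances below are invented examples.

```python
# Sketch: retinal angular speed of a stationary point directly abeam of a
# sideways-moving observer (small-angle approximation: omega ~ v / d).
import math

def angular_speed_deg_per_s(observer_speed_m_s, object_distance_m):
    return math.degrees(observer_speed_m_s / object_distance_m)

near_tree = angular_speed_deg_per_s(1.5, 2.0)    # walking past a tree 2 m away
far_hill = angular_speed_deg_per_s(1.5, 500.0)   # a hill 500 m away
```

The 250x difference in angular speed between the two objects is itself the depth signal: image speed falls off in direct proportion to distance.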

3. Size Perception and Constancy

- how object distance affects size perception

Size-Distance Relationship
- retinal image size depends on how far away an object is
- closer = larger image size

Size Constancy
- perceived size and object distance aren't directly related
- the mental impression of size conforms to the actual object size regardless of object distance
- mechanisms:
  1) we learn about true physical size through visual experience
  2) size relative to all the other objects around

Emmert's Law
- afterimages (fleeting sight after prolonged staring) are a purely retinal phenomenon
- the perceived size of an afterimage depends on the distance at which it is projected
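The size-distance relationship can be made concrete with the visual angle formula; the object size and viewing distances below are invented examples.

```python
# Sketch: visual angle subtended by an object at a given distance.
import math

def visual_angle_deg(object_size_m, distance_m):
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# The same 1.8 m tall person viewed at 2 m vs 20 m:
close = visual_angle_deg(1.8, 2.0)   # ~48 deg
far = visual_angle_deg(1.8, 20.0)    # ~5 deg
```

Size constancy is the observation that the person does not look roughly 9x shorter at 20 m, even though the retinal image really is that much smaller.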

4. Illusions of Size and Depth

Absolute Depth Illusion - Moon Illusion
- the moon looks largest at the horizon and smallest overhead (the zenith moon)
- apparent distance theory: humans believe the horizon is farther away than the zenith sky
- size-distance relationship: retinal images get smaller as distance increases -> so we perceive the horizon moon as larger

Relative Depth - Ponzo Illusion
- railroad tracks: the lower line looks smaller
- the tracks receding into the background make the lower line look closer (a pictorial cue throws us off)
- the size-distance relationship is satisfied if we believe that something is farther away, and thus larger, even though the retinal images are identical

Relative Size - Ames Room Illusion
- we can't tell how far away each person is
- the retinal image of the person on one side is much smaller than the other's because they are farther away, but we don't know that
- the lines of the room make the room look normal
- a breakdown of size constancy

C. Binocular Depth Perception

- 2-eye vision is up to 10x better at relative depth perception than 1-eye vision

1. Advantages of Binocular Vision

Binocular Interaction and Summation
- stimulation of 1 eye affects the accommodation and convergence reflexes of both eyes
- the same holds for pupil size
- visual stimulation through both eyes triggers greater activity in V1 than stimulation of 1 eye does
- binocular summation = detection thresholds are lower in binocular viewing than in monocular viewing

Visual Fields
- largest with binocular vision: ~180 degrees instead of 150 degrees

Binocular Fusion and Stereopsis
- frontally placed eyes capture the same scene from 2 viewpoints
- binocular fusion: 1 image from 2 retinal images (cyclopean vision)
- 3D information extracted from the differences between the 2 eyes' images = stereopsis
- trade-off: we can't see behind us (frontal eye placement is typical of predators)

2. Stereoscopic Cues and Binocular Disparity

- how stereoscopic depth perception arises through stereoscopic cues, and how the brain then deciphers those cues

Horopters
- Vieth-Müller circle (hypothesized by the physiologists Vieth and Müller)

- the image of an object forms at analogous sites on both retinae according to this hypothetical circle
- the set of points in the environment that produce analogous retinal sites = the horopter
- the human (empirical) horopter differs from the V-M circle

Corresponding Retinal Points
- optically analogous points on the 2 retinae = corresponding retinal points

Retinal Disparity
- distances from the fovea are equal for corresponding retinal point pairs: d_t = d_n, where t = temporal and n = nasal
- retinal disparity = the difference in location of the binocular images: D = d_t - d_n = 0 on the horopter
- this does not hold for objects behind or in front of the horopter

Crossed Binocular Disparity
- objects off the horopter, nearer than fixation
- d_t > d_n: greater temporal distance from the fovea than nasal distance
- D > 0

Uncrossed Binocular Disparity
- objects off the horopter, farther than fixation
- the temporal image is closer to the fovea than the nasal image
- d_n > d_t, so D < 0
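The sign convention above (D > 0 crossed, D < 0 uncrossed, D = 0 on the horopter) can be sketched with the standard small-angle approximation; the 63 mm interocular distance and the example distances are assumed values, not from the notes.

```python
# Sketch: signed binocular disparity via the small-angle approximation
# D ~ I * (1/d_object - 1/d_fixation), with I = interocular distance.
import math

def disparity_deg(object_m, fixation_m, interocular_m=0.063):
    """Positive = crossed (nearer than fixation), negative = uncrossed."""
    return math.degrees(interocular_m * (1 / object_m - 1 / fixation_m))

on_horopter = disparity_deg(1.0, 1.0)  # 0 on the horopter
crossed = disparity_deg(0.5, 1.0)      # > 0: object in front of fixation
uncrossed = disparity_deg(2.0, 1.0)    # < 0: object behind fixation
```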

3. Neural Processing of Binocular Disparity

- disparate retinal images are created by optical principles and geometry
- objects in the left visual field trigger neurons in the right visual cortex

Binocular Neurons
- first observed in V1
- ganglion cells of the 2 retinae, when stimulated from the left visual field (area A), trigger neural activity in a particular binocular neuron in V1

Disparity Selectivity Emerges in Visual Cortex
- with D = 0, the 2 ganglion cells must be located at corresponding retinal points for a binocular neuron in V1 to be stimulated by object A

- a circuit that feeds the outputs of 2 ganglion cells into a single binocular cortical neuron gives rise to relative depth perception
- binocular neurons that encode crossed/uncrossed disparity theoretically exist

Stereoscopic Depth Perception
- the result of neural convergence of disparate stimulation of the 2 retinae upon individual binocular neurons in the visual cortex

Panum's Fusional Area
- 3 types of disparity-selective neurons:
  1) zero-tuned
  2) near-tuned (crossed)
  3) far-tuned (uncrossed)
- most are tuned to zero disparity, so the usable range is finite
- diplopia (double vision): beyond that range the 2 images are processed independently, because there are no binocular neurons tuned to such large disparities
- Panum's fusional area: the limited depth range in front of and behind the horopter within which fusion occurs

Midline Stereopsis Problem
- when an object lies off the horopter and between the lines of sight of the 2 eyes, the binocular neurons that must be activated sit in opposite hemispheres
- integration between the hemispheres is done by the corpus callosum
- "midline stereopsis"

Stereo-Blindness
- stereopsis is not possible with monocular vision (blind in 1 eye)
- strabismus (misaligned eyes) leads to stereo-blindness
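The three tuning classes above can be caricatured as Gaussian tuning curves; the preferred disparities and tuning width below are invented for illustration and are not physiological measurements.

```python
# Sketch: idealized disparity-tuned binocular neurons (zero-, near-, far-tuned),
# each modeled as a Gaussian tuning curve over disparity (in degrees).
import math

def tuning(disparity_deg, preferred_deg, width_deg=0.5):
    """Normalized response (0-1) of a disparity-tuned binocular neuron."""
    return math.exp(-((disparity_deg - preferred_deg) ** 2) / (2 * width_deg ** 2))

# An object with 0.8 deg of crossed disparity (slightly in front of fixation)
# drives the near-tuned cell hardest:
responses = {
    "zero": tuning(0.8, preferred_deg=0.0),
    "near": tuning(0.8, preferred_deg=1.0),   # crossed-tuned
    "far":  tuning(0.8, preferred_deg=-1.0),  # uncrossed-tuned
}
```

Comparing such responses across the population is one simple way a readout could recover relative depth from disparity.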

4. Binocular Correspondence

- the stereoscope led to 3D movies

Stereogram
- projects a separate image to each eye, giving depth
- free fusion: crossing your eyes to see an overlapped image (a geometry trick)

Correspondence Problem
- demonstrated with random-dot stereograms:

- the visual system makes a point-by-point match between the 2 retinal images

Binocular Rivalry/Suppression
- when the 2 eyes are given different images, only one (e.g. one orientation) is perceived briefly, then perception switches back and forth

Singleness of Vision = binocular integration + binocular suppression
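The random-dot stereograms mentioned above can be sketched in a few lines (the grid size, shift, and construction details are assumptions for illustration): both eyes receive the same random dot field, except that a central patch is shifted horizontally in one eye, so disparity is the only depth information present.

```python
# Sketch: build a minimal random-dot stereogram as two 0/1 dot grids.
# The right eye's copy has a central square patch shifted 2 columns left,
# giving crossed disparity: the patch should appear to float in front.
import random

def make_stereogram(size=20, shift=2, seed=0):
    random.seed(seed)
    left = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]
    right = [row[:] for row in left]         # start as an exact copy
    lo, hi = size // 4, 3 * size // 4        # bounds of the central patch
    for r in range(lo, hi):
        for c in range(lo, hi):
            right[r][c - shift] = left[r][c] # shift the patch leftward
    return left, right

left, right = make_stereogram()
```

Neither image alone contains any shape; the square exists only in the point-by-point match between the two images, which is exactly the computation the correspondence problem asks of the visual system.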