Let’s consider what can be handled in a book about vision (intended to function as my design thesis) without causing radiation burns (see previous entry). Consider what you give up to enjoy the faint pleasure of stereoscopic perception.
Animals less concerned about stereoscopic vision enjoy a panoramic view of the world all the time: they keep their eyes on opposite sides of their heads and thus have access to a much wider field of vision than do we humans. I really envy them this, especially because we humans don’t really get much out of having our eyes close together on the fronts of our heads.
“Nuh-uh!” I hear you complain, “I get stereo vision because my eyes triangulate the positions of things in front of me!” While this is technically true, it’s not very useful, despite what the marketers of fancy 3-d goggles and in-yer-face blockbusting movies would have you believe. I’ve heard it said (usually by agog fanboys fresh out of a screening of Avatar or films like it) that “3d cinema is still in its infancy” and will one day “get it right.” If so, it’s been an extremely prolonged gestation.
Why 3d is a crock:
It’s been about seventy years since western cinema began its torrid love affair with three-dimensional representation. Not once in all that time have movie makers actually succeeded in making the sort of sea change that other new formats have enjoyed — such as Technicolor, wide-screen format, panoramic sound, etc. The problem, in a nutshell, is that human beings keep their eyes too close together.
Unless you’re some kind of mutant, your irises are only about three inches apart. That means that objects more than nine feet away are more likely to be “depth ranked” according to perceptual criteria like context, size, overlap and shading differentiation rather than stereoscopic discrepancy. That’s because the triangle describing the relationship between the subject’s eyes and the object’s position is just too long and skinny (too close to a single line, that is) to provide useful data to the part of the occipital lobe that processes depth in vision.
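The skinny-triangle claim is easy to check with a little trigonometry. Here’s a rough sketch (the three-inch eye spacing is the figure from above; the formula is just standard vergence-angle geometry, nothing specific to this argument):

```python
import math

EYE_SPACING_FT = 0.25  # three inches, per the non-mutant assumption above

def vergence_angle_deg(distance_ft):
    """Angle subtended at an object by the two eyes: how 'fat' the
    eye-eye-object triangle is. Smaller angle = skinnier triangle
    = less stereoscopic information for the occipital lobe."""
    return math.degrees(2 * math.atan((EYE_SPACING_FT / 2) / distance_ft))

for d in (1, 3, 9, 30, 90):
    print(f"{d:>3} ft: {vergence_angle_deg(d):6.3f} degrees")
```

At one foot the triangle subtends a healthy fourteen-odd degrees; at nine feet it is already down around a degree and a half, and it only gets skinnier from there.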
So the dimensions of the object (factored against its proximity to the observer) play a big part in determining the efficacy of three-dimensional representation: beyond a certain distance, things with only a minor distance between their front-most visible surface and their back-most look pretty much the same in two dimensions as they do in three. With an isolated, one-eye-per-monitor system, stereoscopic differentiation can make objects depicted as less than nine feet away appear depth ranked on the strength of proprioception (the tension across the lenses of the eyes) and the parallactic geometry of their position alone. Beyond that distance, though, the effect is not a simulation of perception as it occurs in the real world, but an artifact: an interpretation of that perception overlaid on another subject altogether.
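To put a number on “minor distances between front-most and back-most surface”: the stereo cue for an object’s own depth is the difference in vergence angle between its near face and its far face, and that difference falls off roughly with the square of the viewing distance. A quick sketch (same assumed three-inch eye spacing; the one-foot-deep object is an arbitrary example, not anything from the text above):

```python
import math

EYE_SPACING_FT = 0.25  # assumed three-inch eye spacing, as before

def vergence_deg(distance_ft):
    # Angle subtended at a point by the two eyes.
    return math.degrees(2 * math.atan((EYE_SPACING_FT / 2) / distance_ft))

def depth_cue_deg(distance_ft, object_depth_ft=1.0):
    """Difference in vergence angle between an object's near and far
    faces: the raw stereoscopic signal for its own depth."""
    return vergence_deg(distance_ft) - vergence_deg(distance_ft + object_depth_ft)

for d in (3, 9, 30, 90):
    print(f"{d:>3} ft: {depth_cue_deg(d):8.5f} degrees of disparity")
```

Moving the same one-foot-deep object from nine feet out to ninety feet shrinks its stereoscopic depth signal by nearly two orders of magnitude, which is why context, size, overlap and shading have to take over the depth-ranking job.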
So you can induce a reaction in the occipital lobe that mimics depth perception beyond the natural range of depth perception, but then, you can induce a lot of sensations artificially. Is cinema evolving into the orgasmatron? Yes. Yes it is.
Next week: how to compact the wide world into a narrow sample.