Properly Controlling Light Is a Human-Factors Engineering Problem

by Jim Larimer

The idea that rays of light travel through every point in space, carrying information about the surfaces of the objects they have interacted with as they ricochet along, is as old as the camera obscura.  Leonardo da Vinci referred to the light passing through a point in space as a “radiant pyramid.”1  Only recently has it become possible to use the insights da Vinci recorded 500 years ago to build imaging products that can capture and replicate some of the information contained in these radiant pyramids.  Today, we stand on the threshold of that possibility.  Light-field imaging, also called computational photography, is now the future for cameras and displays.

Using these ancient insights effectively requires an understanding of how optics, computing, and human vision work together.  Creating stereo-pair imagery that requires no headgear and works at any head orientation relative to the display means delivering, for every head position, two different images to eye locations roughly 6 cm apart, each containing information unique to that eye’s viewpoint.  A light-field display holds the promise that this may soon be feasible.
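
To make the geometry concrete, the short sketch below estimates the angular disparity between the two eyes’ views of a point at a given distance, assuming the roughly 6-cm eye separation mentioned above.  It is an illustrative back-of-the-envelope calculation, not anything specific to the articles in this issue, and the example distances are invented.

import math

IPD_CM = 6.0  # assumed interpupillary distance, as cited above

def angular_disparity_deg(distance_cm: float) -> float:
    """Angle subtended at a point by the two eye positions, i.e., the
    binocular parallax a display must reproduce for that point to appear
    at the given distance (viewer facing the point)."""
    return math.degrees(2.0 * math.atan((IPD_CM / 2.0) / distance_cm))

for d_cm in (50.0, 100.0, 500.0):
    print(f"point at {d_cm:5.0f} cm -> {angular_disparity_deg(d_cm):.2f} deg of disparity")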

It is even possible to imagine a future in which every aspect of looking through an ordinary window, including refocusing your eyes wherever your attention is drawn, is replicated on a display.  Several seemingly solvable hardware problems must be worked out to get there.  Equally important will be determining how many rays must pass through the observer’s pupils from every point on the virtual objects for the rendered scene to seem real to the observer.
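
How dense that ray sampling must be is an open question, but a rough, hypothetical estimate gives a sense of the scale.  The sketch below assumes a 4-mm pupil and a 50-cm viewing distance, both invented example values, and asks how finely spaced a display’s views must be for at least two distinct views to enter a single pupil, the minimum needed for the eye to refocus within the displayed scene.

import math

PUPIL_DIAMETER_MM = 4.0   # assumed pupil size (illustrative)
VIEW_DISTANCE_MM = 500.0  # assumed viewing distance (illustrative)

# Angle the pupil subtends as seen from a point on the display.
pupil_angle_deg = math.degrees(PUPIL_DIAMETER_MM / VIEW_DISTANCE_MM)

# For two or more distinct views to land inside the pupil, the angular
# spacing between views must be no coarser than half that angle.
views_per_degree = 2.0 / pupil_angle_deg

print(f"pupil subtends ~{pupil_angle_deg:.2f} deg; need more than "
      f"{views_per_degree:.1f} views per degree from each display point")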

One of the articles in this issue of Information Display, by Gordon Wetzstein, delves into those insights and relates how this new and exciting technology will work.  Compressive light-field ideas exploit the relatively long temporal integration time of the human eye compared with the switching speed of electronic devices.
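
As a rough illustration of that trade, the sketch below uses assumed, illustrative numbers, roughly a 20-ms integration window for the eye and a 2-ms panel frame time, to show how many fast-switched patterns the visual system fuses into a single perceived frame; compressive displays exploit exactly this fusion when they decompose a target light field into a short sequence of patterns.

# Assumed, illustrative numbers; not measurements from this article.
EYE_INTEGRATION_MS = 20.0   # rough temporal integration window of the eye
PANEL_FRAME_MS = 2.0        # frame time of a fast-switching panel

patterns_per_percept = EYE_INTEGRATION_MS / PANEL_FRAME_MS

# The eye effectively sums patterns presented within its integration window,
# so a compressive light-field display can spread one target light field
# across this many time-multiplexed patterns.
print(f"about {patterns_per_percept:.0f} panel patterns fuse into one perceived frame")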

Da Vinci was aware that if several pinholes are placed in the front surface of a camera obscura, several radiant pyramids will form images on the rear surface of the camera.2  These images are slightly different from each other due to parallax, and they are not in register.  As the number of pinholes is increased, the resulting projections lose the distinctive sharpness of the camera obscura image and tend towards blur.

The entrance aperture of a camera, like the pupil of an eye, collects a connected set of pinholes, i.e., a finite sample of the light field; without a lens, the rays passing through the aperture would form a blurry image on the retina or on the projection surface of a camera.  A larger aperture allows more light to reach the projection surface, making the resulting image brighter but also less sharp.  With a lens placed in the plane of the aperture, objects in the visual scene at the focal distance of the lens are brought into register on the retina or projection surface.  Objects nearer or farther than this focal distance remain blurry in projection.  The range of distances over which edges in the image appear sharp defines the depth of field of the camera and lens.  Lenses were discovered within a couple of hundred years of the camera obscura and were soon incorporated into these cameras to form a brighter image, trading depth of field for brightness.
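
The focusing and depth-of-field behavior described above follows from the thin-lens relation 1/f = 1/d_object + 1/d_image.  The sketch below uses that standard formula, with invented example values for focal length, aperture, and distances, to show how the blur circle on the projection surface grows for objects away from the focal distance; it is a generic optics illustration, not something drawn from this issue’s articles.

def image_distance(f_mm: float, d_object_mm: float) -> float:
    """Distance behind the lens at which an object at d_object comes into
    focus, from the thin-lens relation 1/f = 1/d_object + 1/d_image."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_object_mm)

def blur_circle_mm(f_mm: float, aperture_mm: float,
                   d_focus_mm: float, d_object_mm: float) -> float:
    """Diameter of the blur (circle of confusion) on the projection surface
    for an object at d_object when the lens is focused at d_focus."""
    s_focus = image_distance(f_mm, d_focus_mm)    # where the projection surface sits
    s_object = image_distance(f_mm, d_object_mm)  # where this object comes to focus
    return aperture_mm * abs(s_object - s_focus) / s_object

# Example: a 50-mm lens with a 10-mm aperture, focused at 2 m.
for d_mm in (1000.0, 2000.0, 4000.0):
    c = blur_circle_mm(50.0, 10.0, 2000.0, d_mm)
    print(f"object at {d_mm / 1000:.0f} m -> blur circle {c:.3f} mm")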

At the time of the discovery of the camera obscura and the lens, there was no way to exploit the sharpness and parallax information contained in each pinhole-camera projection; today, that is no longer the case.  The technology to do this is rapidly becoming available.  These innovations would not be possible, however, without powerful high-speed graphical computing and without important insights into how we perceive the world.  These insights began to be clearly understood a little less than 20 years ago.

Adelson and Bergen3 characterized the light passing through a pinhole, or any arbitrary point in space, as a function of the location of that point, the azimuth and elevation angles of each direction through it, and the spectral power distribution of the light coursing through it in each direction.  They called this the plenoptic function, a contraction of the Latin plenus, for full or complete, and optic: the full optic.  Their plenoptic function corresponds to da Vinci’s radiant pyramids.  The Russian scientist Andrey Gershun coined the term “light field” in a classic 1936 paper4 describing the radiometric properties of light.  The idea is even older and was introduced by Faraday in a lecture given in 1846 entitled “Thoughts on Ray Vibrations.”  The light field is a connected set of plenoptic functions – all the points on the surface of a window or within the plane of a camera’s entrance aperture, for example.
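
In notation, the plenoptic function can be written as a radiance P(x, y, z, azimuth, elevation, wavelength) at every point and in every direction (their full definition also includes time).  The sketch below is just one hypothetical way to make that parameterization concrete in code; the “scene” inside it is invented for illustration, and in practice the function would be measured or rendered.

import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Ray:
    x: float              # point in space the ray passes through
    y: float
    z: float
    azimuth: float        # direction through that point, in radians
    elevation: float
    wavelength_nm: float  # spectral sample

def plenoptic(ray: Ray) -> float:
    """Toy plenoptic function returning the radiance carried by `ray`.
    Invented example: a single bright patch straight ahead, strongest
    near 550 nm; time is omitted for brevity."""
    directional = math.exp(-(ray.azimuth**2 + ray.elevation**2) / 0.1)
    spectral = math.exp(-((ray.wavelength_nm - 550.0) / 50.0) ** 2)
    return directional * spectral

print(plenoptic(Ray(0, 0, 0, 0.0, 0.0, 550.0)))  # bright, straight ahead
print(plenoptic(Ray(0, 0, 0, 0.3, 0.0, 550.0)))  # dimmer off-axis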

Adelson and Bergen went on to describe the many ways biological vision has evolved to exploit the information in the light field.  We sample the parallax information contained in the partly blurred image formed by each eye to obtain cues about the distance to objects.5  In a single eye’s image, this information is carried in the blurred and sharp regions of the image.

When the two eyes of an animal have overlapping visual fields, the displacement of objects in one eye’s image relative to the image formed in the other eye is another cue to distance, called disparity.  Many animals, such as sheep, have only a small region in which the visual fields of their two eyes overlap because their eyes look to both sides simultaneously.  In these eyes, and for the non-overlapping portions of the visual field, focus yields important information about distance.  Some of these animals have apparently evolved differently shaped pupils that use blur information along one axis to estimate depth in one plane while maintaining sharpness in the orthogonal plane.6

The psychologist J. J. Gibson7 described the light field passing through a volume in space as the “. . . permanent possibilities of vision . . . ,” meaning all of the information at the location of an observer’s eyes that can be perceived or used by the observer.  Gibson believed that perception is based upon our sensory apparatus being capable of detecting information in the immediate environment that can be acted upon by the perceiver.  According to Gibson, “. . . what we perceive when we look at objects are their affordances, not their qualities,” and later he said, “. . . what an object affords us (actions we can take) is what we normally pay attention to.”  This suggests that evolution has produced sentient mechanisms to extract information from the environment and a cognizance capable of acting on it.

We use our eyes to avoid hazards such as spoiled food and dangerous substances and to judge the health, and therefore the risks, presented by people near us.  These actions need not be conscious and willful.  Without thinking, for example, we avoid stepping on something that appears yucky.  The colors and textures of surfaces are how we perceive and recognize these hazards.  The light imaged on the retina depends upon the reflectivity of the surfaces of objects in the visual field and the spectral content of the light illuminating them.  What we perceive can be dramatically altered by changes in either of these two important variables.

A second article in this issue, by Lorne Whitehead, probes the quality of light emitted by displays and by ordinary lighting fixtures as it affects our ability to see surface colors.  Our color sensibilities evolved in an environment in which the illuminants were skylight, starlight, or firelight.  Today, we illuminate our work and home spaces with artificial light sources that will soon be based mostly on solid-state technologies, i.e., LEDs and OLEDs.  Will the colors of familiar objects look right in this light?  Whitehead’s article describes the engineering trade-offs being made to produce light from displays and from solid-state luminaires.  Some of these trade-offs will confuse our senses.  His article describes how to fix this problem for both ordinary room lighting and displays.

There is mounting evidence that information in the light field, i.e., the intensity and spectral properties of daylight over the course of the day, is used to synchronize our biological clocks, or circadian rhythms.8  A recent report finds that light emitted from backlit e-Readers used in the evening can have a negative effect on sleep and morning alertness.9  When modern electronic-imaging systems leave out important information, distort it, or exaggerate naturally occurring signals in the light field that our bodies sense and respond to, the consequences can be unexpected.  We now have to consider the possibility that signals in the light may cause not only sleep loss but possibly also depression and even, some speculate, the early onset of puberty.  These are emerging concerns for the display and lighting industries.  Not fully capturing all of the critical information, i.e., affordances, in the light field can be pleasing, misleading, or actually bad for us.

Whitehead’s article describes how solid-state light emitters can be engineered to address these problems.  As new understanding of light’s role in human vision and biology emerges, so does new technology that can remedy problems we are just now discovering.  Whether the goal is making objects look “right” or controlling the spectral content of light for a better night’s sleep, technology now allows us to manipulate light in ways consistent with how human vision evolved.

References

1M. Kemp, Leonardo on Painting (Yale University Press, New Haven, 1989).

2J. P. Richter, The Notebooks of Leonardo da Vinci, Vol. 1 (Dover, New York, 1970).

3E. H. Adelson and J. R. Bergen, “The Plenoptic Function and the Elements of Early Vision,” in Computational Models of Visual Processing, edited by M. Landy and J. A. Movshon (MIT Press, Cambridge, MA, 1991), pp. 3–20.

4A. Gershun, “The Light Field,” Moscow, 1936; translated by P. Moon and G. Timoshenko, Journal of Mathematics and Physics XVIII, 51–151 (1939).

5R. T. Held, E. A. Cooper, and M. S. Banks, “Blur and Disparity Are Complementary Cues to Depth,” Current Biology 22, 1–6 (2012).

6W. Sprague and M. S. Banks, lecture with accompanying data given at the School of Optometry, University of California, Berkeley (2013).

7J. J. Gibson, “The Theory of Affordances,” in Perceiving, Acting, and Knowing, edited by R. Shaw and J. Bransford (1977).

8M. E. Guido, E. Garbarino-Pico, M. A. Contin, D. J. Valdez, P. S. Nieto, D. M. Verra, V. A. Acosta-Rodriguez, N. de Zavalía, and R. E. Rosenstein, “Inner retinal circadian clocks and non-visual photoreceptors: Novel players in the circadian system,” Prog. Neurobiol. 92, 484–504 (2010).

9A. Chang, D. Aeschbach, J. Duffy, and C. A. Czeisler, “Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness,” Proc. Natl. Acad. Sci. USA 112, No. 4, 1232–1237 (2015).

Jim Larimer received a degree in experimental psychology from Purdue University and was a postdoctoral fellow at the Human Performance Center at the University of Michigan.  He was a Professor of Psychology at Temple University and conducted basic research on color vision.  At NASA, he was a Senior Scientist and participated in the ARPA-funded High Definition Systems program, which supported many of the early developments in flat-panel displays.  He is now retired and works occasionally as a consultant on digital imaging.  Jim can be reached at jim@imagemetrics.com.