Cameras and Lenses – Bartosz Ciechanowski

The image sensor of a digital camera consists of a grid of photodetectors. A photodetector converts photons into electric current that can be measured – the more photons hit the detector, the higher the signal.

In the demonstration below you can observe how photons fall onto the arrangement of detectors represented by small squares. After some processing, the value read by each detector is converted to the brightness of the resulting image pixels which you can see on the right side. I’m also symbolically showing which photosite was hit with a short highlight. The slider below controls the flow of time:

The longer we let the collection of photons run, the more of them hit the detectors and the brighter the resulting pixels in the image become. When we don’t gather enough photons the image is underexposed, but if we allow the photon collection to run for too long the image will be overexposed.
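Photon arrival is well modeled as a Poisson process, which makes the relationship between collection time and brightness easy to simulate. Here is a minimal Python sketch of that idea; the `full_well` saturation value and the photon rates are made-up illustration numbers, not values from the demo:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def expose(photon_rate, exposure_time, full_well=1000):
    """Simulate photon collection on a grid of photodetectors.

    photon_rate: expected photons per detector per unit time (2D array).
    exposure_time: how long we let photons accumulate.
    full_well: hypothetical count at which a detector saturates.
    """
    # Photon arrival is a Poisson process: longer exposure -> more counts.
    counts = rng.poisson(photon_rate * exposure_time)
    # A detector can only hold so much charge; extra photons are clipped,
    # which is what we perceive as overexposure.
    counts = np.minimum(counts, full_well)
    # Normalize the counts to pixel brightness in [0, 1].
    return counts / full_well

scene = rng.uniform(10, 100, size=(8, 8))  # made-up photon rates
print(expose(scene, 0.1).mean())    # too short: underexposed, nearly black
print(expose(scene, 10).mean())     # in between: a usable exposure
print(expose(scene, 1000).mean())   # too long: every detector clips to 1.0
```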

While the photons have the “color” of their wavelength, the photodetectors don’t see that hue – they only measure the total intensity which results in a black and white image. To record the color information we need to separate the incoming photons into distinct groups. We can put tiny color filters on top of the detectors so that they will only accept, more or less, red, green, or blue light:

This color filter array can be arranged in many different formations. One of the simplest is a Bayer filter which uses one red, one blue, and two green filters arranged in a 2x2 grid:

A Bayer filter uses two green filters because light in the green part of the spectrum heavily correlates with perceived brightness. If we now repeat this pattern across the entire sensor we’re able to collect color information. For the next demo we will also double the resolution to an astonishing 1 kilopixel arranged in a 32x32 grid:

Note that the individual sensors themselves still only see the intensity, and not the color, but knowing the arrangement of the filters we can recreate the colored intensity of each sensor, as shown on the right side of the simulation.
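To make that concrete, here is a short NumPy sketch of how a scene might be sampled through a Bayer filter. It assumes the common RGGB layout; a real sensor’s pattern can start on any of its four phases:

```python
import numpy as np

def bayer_mosaic(image):
    """Sample a full-color image through an RGGB Bayer filter array.

    image: (H, W, 3) array of linear RGB intensities, H and W even.
    Returns an (H, W) array where each photosite holds only the
    intensity of the one channel its color filter let through.
    """
    h, w, _ = image.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = image[0::2, 0::2, 0]  # red filters
    mosaic[0::2, 1::2] = image[0::2, 1::2, 1]  # green filters
    mosaic[1::2, 0::2] = image[1::2, 0::2, 1]  # second set of green filters
    mosaic[1::2, 1::2] = image[1::2, 1::2, 2]  # blue filters
    return mosaic

rng = np.random.default_rng(1)
scene = rng.uniform(size=(32, 32, 3))  # the 1-kilopixel, 32x32 example
raw = bayer_mosaic(scene)              # pure intensities, one channel each
```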

The final step of obtaining a normal image is called demosaicing. During demosaicing we want to reconstruct the full color information by filling in the gaps in the captured RGB values. One of the simplest ways to do it is to linearly interpolate the values between the existing neighbors. I’m not going to focus on the details of the many other available demosaicing algorithms; I’ll just present the resulting image created by the process:
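For reference, here is what that simple linear interpolation could look like in code. This is a rough sketch of bilinear demosaicing for the RGGB layout assumed above, not the exact algorithm used to render the demo image:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    """Reconstruct full RGB from an RGGB mosaic by linear interpolation."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True  # red photosites
    masks[0::2, 1::2, 1] = True  # green photosites
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True  # blue photosites

    # Each missing value becomes a weighted average of its known neighbors.
    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        known = np.where(masks[:, :, c], mosaic, 0.0)
        num = convolve2d(known, kernel, mode="same", boundary="symm")
        den = convolve2d(masks[:, :, c].astype(float), kernel,
                        mode="same", boundary="symm")
        # Keep measured samples as-is; interpolate only the gaps.
        rgb[:, :, c] = np.where(masks[:, :, c], mosaic, num / den)
    return rgb

# Continuing the earlier example: rgb = demosaic_bilinear(raw)
```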

Notice that yet again the overall brightness of the image depends on the length of time for which we let the photons through. That duration is known as shutter speed or exposure time. For most of this presentation I will ignore the time component and we will simply assume that the shutter speed has been set just right so that the image is well exposed.

The examples we’ve discussed so far were very convenient – we were surrounded by complete darkness with the photons neatly hitting the pixels to form a coherent image. Unfortunately, we can’t count on the photon paths to be as favorable in real environments, so let’s see how the sensor performs in more realistic scenarios.

To maintain the overall brightness of the image when stopping down we’d have to increase either the exposure time or the sensitivity of the sensor.
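This bookkeeping is usually done in “stops”, where each stop halves or doubles the light. A quick worked sketch with made-up numbers: stopping down from f/4 to f/5.6 costs about one stop, so the shutter speed has to double to compensate:

```python
import math

def stops_lost(f_old, f_new):
    """Stops of light lost when stopping down from f_old to f_new.

    Light gathered scales with aperture area, i.e. with 1/N^2 for an
    f-number N, so the loss in stops is log2(N_new^2 / N_old^2).
    """
    return math.log2(f_new**2 / f_old**2)

loss = stops_lost(4.0, 5.6)      # ~1 stop
shutter = 1 / 250                # hypothetical original exposure time, in s
compensated = shutter * 2**loss  # ~1/128 s keeps the same brightness
print(f"{loss:.2f} stops lost, new shutter: 1/{1/compensated:.0f} s")
```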

Observe what happens to that beam when it hits a piece of glass. You can make the sides non-parallel by using the slider:

What we perceive as white light is a combination of lights of different wavelengths. In fact, the index of refraction of materials depends on the wavelength of the light. This phenomenon, called dispersion, splits what seems to be a uniform beam of white light into a fan of color bands. The very same mechanism that we see here is also responsible for a rainbow.
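A common way to model dispersion is the two-term Cauchy equation, n(λ) = A + B/λ². The sketch below uses rough BK7-like coefficients (an assumption on my part, not values from the demo) together with Snell’s law to show how the refraction angle fans out across the visible spectrum:

```python
import numpy as np

A, B = 1.5046, 0.00420  # assumed Cauchy coefficients; B is in μm²

def refraction_angle(wavelength_um, incidence_deg=45.0):
    """Angle of the refracted ray inside the glass, via Snell's law."""
    n = A + B / wavelength_um**2
    theta_i = np.radians(incidence_deg)
    return np.degrees(np.arcsin(np.sin(theta_i) / n))

for name, wl in [("red", 0.65), ("green", 0.53), ("blue", 0.45)]:
    n = A + B / wl**2
    print(f"{name:5s} ({wl} μm): n = {n:.4f}, "
          f"refracts to {refraction_angle(wl):.2f}°")
# Shorter wavelengths see a higher index and bend more sharply,
# so the blue end of the fan separates from the red end.
```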

In a lens this causes different wavelengths of light to focus at different offsets – the effect known as chromatic aberration. We can easily visualize the axial chromatic aberration even on a lens with spherical aberration fixed. I’ll only use red, green, and blue dispersed rays to make things less crowded, but remember that other colors of the spectrum are present in between. Using the slider you can control the amount of dispersion the lens material introduces:
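We can estimate the size of that focal shift with the thin-lens lensmaker’s equation, 1/f = (n − 1)(1/R₁ − 1/R₂). The radii and Cauchy coefficients below are illustrative assumptions, but they show the mechanism: a higher index for blue light means a shorter focal length:

```python
A, B = 1.5046, 0.00420   # assumed Cauchy coefficients; B is in μm²
R1, R2 = 100.0, -100.0   # a symmetric biconvex lens, radii in mm

def focal_length_mm(wavelength_um):
    n = A + B / wavelength_um**2
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

for name, wl in [("red", 0.65), ("green", 0.53), ("blue", 0.45)]:
    print(f"{name:5s}: f = {focal_length_mm(wl):.2f} mm")
# red: f = 97.17 mm, blue: f = 95.18 mm – blue light comes to a focus
# about 2 mm closer to the lens, which is the axial chromatic aberration.
```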

Chromatic aberration may be corrected with an achromatic lens, usually in the form of a doublet with two different types of glass fused together.
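The classic doublet design splits the total optical power φ = 1/f between a low-dispersion crown element and a high-dispersion flint element so that their chromatic shifts cancel: φ₁/V₁ + φ₂/V₂ = 0, where V is each glass’s Abbe number. Here is a sketch with typical catalog-like values (assumptions, not figures taken from this article):

```python
# Abbe number V measures dispersion: higher V means less dispersion.
V_crown, V_flint = 64.2, 32.2  # assumed BK7-like crown, SF5-like flint
f_total = 100.0                # desired focal length of the doublet, mm

phi = 1.0 / f_total
# Solving phi1 + phi2 = phi together with phi1/V1 + phi2/V2 = 0:
phi1 = phi * V_crown / (V_crown - V_flint)
phi2 = -phi * V_flint / (V_crown - V_flint)

print(f"crown element: f1 = {1/phi1:6.1f} mm (converging)")
print(f"flint element: f2 = {1/phi2:6.1f} mm (diverging)")
# The strong converging crown paired with a weaker diverging flint
# keeps the total power while the two dispersions cancel each other.
```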