19  Image system modeling

Published

October 15, 2025

Work in Progress

The book is still taking shape, and your feedback is an important part of the process. Suggestions of all kinds are welcome—whether it’s fixing small errors, raising bigger questions, or offering new perspectives. I’ll do my best to respond, but please keep in mind that the text will continue to change significantly over the next two years.

You can share comments through GitHub Issues.

Feel free to open a new issue or join an existing discussion. To make feedback easier to address, please point to the section you have in mind—by section number or a short snippet of text. Adding a label characterizing your issue would also be helpful.


19.1 Image system modeling

We characterize imaging systems using models that describe overall system performance, rather than focusing on every physical detail. For example, we might measure the point spread function of a lens assembly instead of analyzing each individual lens element, or we might quantify the pixel read noise without specifying the silicon feature size.

This approach is known as creating a phenomenological model. Such models are informed by the hardware’s structure, but emphasize how the system behaves as a whole. They help identify which aspects of the system limit performance—for instance, high read noise—but do not specify whether the limitation is due to the choice of materials, circuit design, or other underlying factors.
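A phenomenological pixel model of this kind can be sketched in a few lines. The sketch below, written in NumPy rather than ISETCam, follows the signal from photons to digital values: quantum efficiency, photon shot noise, full-well clipping, read noise, and quantization. All parameter values (quantum efficiency, read noise, gain, well capacity, bit depth) are illustrative, not measurements of any particular device.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_response(photons, qe=0.6, read_noise_e=2.0, gain_e_per_dn=2.0,
                   well_capacity_e=20000, bits=12):
    """Phenomenological pixel model: photons in, digital numbers (DN) out."""
    electrons = rng.poisson(qe * photons)                 # photon shot noise
    electrons = np.minimum(electrons, well_capacity_e)    # full-well clipping
    electrons = electrons + rng.normal(0.0, read_noise_e,
                                       np.shape(electrons))  # read noise
    dn = np.clip(np.round(electrons / gain_e_per_dn),
                 0, 2 ** bits - 1)                        # quantization
    return dn

# 10,000 pixels, each receiving 1000 photons on average
dn = pixel_response(np.full(10000, 1000.0))
print(dn.mean())   # roughly qe * photons / gain = 300 DN
```

Notice that the model says nothing about how the read noise or the gain arise in the circuit; it only describes their effect on the output, which is exactly the phenomenological stance described above.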

ISETCam implements such a model. We begin by describing the basic model of the image sensor, and then illustrate how to make measurements that characterize the sensor, filling in the parameters of the model.

Standard sensor designs are effective for many uses, which is why they are produced in large quantities. However, they have limitations: color filters can reduce sensitivity and restrict spectral information, while circuit elements can introduce noise or limit dynamic range and speed. These challenges have driven innovations in sensor and pixel design, especially for applications beyond traditional photography. Later sections will explore advanced pixel architectures, new materials, and on-chip processing techniques that address these issues. Because this book is online, the sections about these advances can be updated as new technologies emerge, along with links to additional resources.

19.2 Noise experiments

Quantization and noise. Use Abbas's class notes, which are available through Psych 221, together with modeling in ISETCam.
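One classic noise experiment is the photon transfer curve: for a shot-noise-limited sensor, the variance of flat-field digital values grows linearly with their mean. If g is the conversion gain (electrons per DN) and sigma_r the read noise (electrons), then var_DN = mean_DN / g + (sigma_r / g)^2, so a linear fit of variance against mean recovers both parameters. The NumPy sketch below simulates this measurement; the "true" gain and read noise values are assumed for the simulation, not taken from any real device.

```python
import numpy as np

rng = np.random.default_rng(1)
g_true, read_true = 2.0, 4.0              # e-/DN and e- RMS (assumed)

means, variances = [], []
for mu_e in np.linspace(50, 500, 10):     # flat-field exposure series
    # Poisson shot noise plus Gaussian read noise, in electrons
    e = rng.poisson(mu_e, 200_000) + rng.normal(0.0, read_true, 200_000)
    dn = e / g_true                       # quantization omitted for clarity
    means.append(dn.mean())
    variances.append(dn.var())

# Linear fit: slope = 1/g, intercept = (sigma_r / g)^2
slope, intercept = np.polyfit(means, variances, 1)
gain_est = 1.0 / slope                    # estimated conversion gain (e-/DN)
read_est = gain_est * np.sqrt(intercept)  # estimated read noise (e-)
print(gain_est, read_est)
```

The same fit applied to measured flat-field frames, rather than simulated ones, is how these model parameters are filled in for a real sensor.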

19.3 Spatial sensitivity (ISO 12233)

Describe it computationally. Illustrate with ISETCam.
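The core of the ISO 12233 slanted-edge computation is the chain ESF -> LSF -> MTF: build an oversampled edge spread function, differentiate it to obtain the line spread function, and take the Fourier transform magnitude. The 1D NumPy sketch below starts from a synthetic, already-oversampled edge (a Gaussian-blurred step, with an assumed blur width); a real analysis would first construct the oversampled ESF by projecting the pixels of a slanted edge onto the edge normal.

```python
import numpy as np
from math import erf

n, dx = 1024, 0.1                  # 10x oversampling; spacing in pixels
x = (np.arange(n) - n // 2) * dx
sigma = 0.8                        # Gaussian PSF width in pixels (assumed)

# Edge spread function: an ideal step convolved with the Gaussian PSF
esf = np.array([0.5 * (1 + erf(xi / (sigma * np.sqrt(2)))) for xi in x])

lsf = np.gradient(esf, dx)         # differentiate ESF to get the LSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                      # normalize so that MTF(0) = 1
f = np.fft.rfftfreq(n, d=dx)       # spatial frequency, cycles/pixel

# For a Gaussian PSF the analytic MTF is exp(-2 pi^2 sigma^2 f^2)
print(np.interp(0.25, f, mtf))
```

Because the blur here is Gaussian, the numerical MTF can be checked against the closed form, which is a useful sanity test before applying the same pipeline to measured edges in ISETCam.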

19.4 Color calibration

Does this need to be put off until after color is introduced?

19.5 ARVS mobile photography article

I was reading Delbracio et al. (2021), and I was surprised by how it glossed over various claims. For example, a small sensor was deemed worse than a big sensor without reference to pixel size; the claim that one must apply gain, and thus multiply the noise, was made without considering the light level or pixel size; and there was no mention of well capacity.

Some of these claims are probably true and some false, but the article does not explain them the way we should here. It also contains vague passages like this one:

> 3.2.7. Tone mapping. A tone map is a 1D LUT that is applied per color channel to adjust the tonal values of the image. Figure 10 shows an example. Tone mapping serves two purposes. First, combined with color manipulation, it adjusts the image’s aesthetic appeal, often by increasing the contrast. Second, the final output image is usually only 8 to 10 bits per channel (i.e., 256 or 1,024 tonal values), while the raw-RGB sensor represents a pixel’s digital value using 10–14 bits (i.e., 1,024 up to 16,384 tonal values). As a result, it is necessary to compress the tonal values from the wider tonal range to a tighter range via tone mapping. This adjustment is reminiscent of the human eye’s adaptation to scene brightness (Land 1974). Figure 11 shows a typical 1D LUT used for tone mapping.
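The per-channel 1D LUT described in the quoted passage is easy to make concrete. The NumPy sketch below builds a LUT that compresses 12-bit linear values to 8-bit output through a gamma curve; the choice of gamma = 1/2.2 is illustrative, since real pipelines tune the curve for contrast and aesthetics rather than using a fixed power law.

```python
import numpy as np

bits_in, bits_out, gamma = 12, 8, 1 / 2.2
levels_in = 2 ** bits_in                 # 4096 input codes

# The tone map itself: one output code for each possible input code
lut = np.round((np.arange(levels_in) / (levels_in - 1)) ** gamma
               * (2 ** bits_out - 1)).astype(np.uint8)

raw = np.array([0, 256, 2048, 4095])     # example 12-bit pixel values
print(lut[raw])                          # per-channel lookup applies the map
```

Applying the tone map is then a single indexing operation per channel, which is why the 1D LUT is such a common representation in camera pipelines.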