Aliasing
Aliasing refers to the jagged appearance of diagonal lines, the edges of circles, and other slanted or curved shapes, caused by the square nature of pixels, the building blocks of digital images.
The examples below compare the edge of a shape at a normal view (1X) and an enlarged view (4X):

Aliased: Steps or "jaggies" are visible, especially when magnifying the image.
Anti-aliased: Anti-aliasing makes the edges look much smoother at normal magnifications.
Anti-aliasing
Anti-aliasing makes the edges appear much smoother by averaging out the pixels around the edge. In this example some blue is added to the yellow edge pixels and some yellow is added to the blue edge pixels, making the transition between the yellow circle and the blue background more gradual and smooth. Most image editing software packages have "anti-aliasing" options for typing text, drawing lines and shapes, making selections, and so on. Anti-aliasing also occurs naturally in digital camera images and smooths out the "jaggies".
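The averaging idea can be sketched in a few lines of Python using supersampling, one common anti-aliasing approach: render the shape at a higher resolution, then average each block of sub-pixels down to one output pixel. The circle, colors, and sizes here are illustrative.

```python
# Toy anti-aliasing by supersampling: draw a yellow circle on a blue
# background at "scale" times the target resolution, then average each
# scale x scale block of sub-pixels down to one output pixel.
def render(size, scale):
    n = size * scale
    r = n / 2                      # circle centered in the image
    hi_res = [[(255, 255, 0) if (x - r) ** 2 + (y - r) ** 2 <= r * r
               else (0, 0, 255)
               for x in range(n)] for y in range(n)]
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            block = [hi_res[y * scale + j][x * scale + i]
                     for j in range(scale) for i in range(scale)]
            # Average each color channel over the block
            row.append(tuple(sum(c) // len(block) for c in zip(*block)))
        out.append(row)
    return out

img = render(8, 4)
# Edge pixels now hold yellow/blue blends instead of hard steps.
```

Pixels entirely inside or outside the circle stay pure yellow or pure blue; only the edge pixels receive intermediate values, which is exactly the gradual transition described above.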
Artifacts
Artifacts refer to a range of undesirable changes to a digital image caused by the sensor, optics, and internal image processing algorithms of the camera. The table below lists some of the common digital imaging artifacts and links to the corresponding glossary items.
Blooming
Chromatic Aberrations
Jaggies
JPEG Compression
Maze Artifacts
Moiré
Noise
Sharpening Halos
Color Spaces
The Additive RGB Colors
The cone-shaped cells inside our eyes are sensitive to red, green, and blue. We perceive all other colors as combinations of these three colors. Computer monitors emit a mix of red, green, and blue light to generate various colors. For instance, combining the red and green "additive primaries" will generate yellow. The animation below shows that if adjacent red and green lines (or dots) on a monitor are small enough, their combination will be perceived as yellow. Combining all additive primaries will generate white.
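Additive mixing can be modelled by summing the light sources channel by channel. This is a toy sketch; the mix helper is hypothetical:

```python
# Additive color mixing: light sources add per channel, capped at 255.
def mix(*lights):
    return tuple(min(255, sum(c)) for c in zip(*lights))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix(RED, GREEN))         # (255, 255, 0)   -> yellow
print(mix(RED, GREEN, BLUE))   # (255, 255, 255) -> white
```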
The Additive RGB Color Space
The Subtractive CMYk Colors
A print does not emit light itself; it reflects the light that falls upon it. For instance, a page printed in yellow absorbs (subtracts) the blue component of white light and reflects the remaining red and green components, creating a similar effect to a monitor emitting red and green light. Printers mix Cyan, Magenta, and Yellow ink to create all other colors. Combining these subtractive primaries should generate black, but in practice a separate black ink is used, hence the term "CMYk" color space, with the k standing for the last letter of "black".
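In the idealized model, the subtractive primaries are simply the complements of the additive ones: each ink absorbs the channel it is "opposite" to. A minimal sketch, ignoring real-world ink behavior and the separate black channel:

```python
# Idealized RGB <-> CMY conversion: each subtractive primary is the
# complement of an additive primary (yellow ink = white minus blue).
def rgb_to_cmy(r, g, b):
    return (255 - r, 255 - g, 255 - b)

def cmy_to_rgb(c, m, y):
    return (255 - c, 255 - m, 255 - y)

# Yellow (255, 255, 0) needs no cyan, no magenta, full yellow ink:
print(rgb_to_cmy(255, 255, 0))  # (0, 0, 255)
```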
The Subtractive CMYk Color Space
The LAB and Adobe RGB (1998) Color Spaces
Due to technical limitations, monitors and printers are unable to reproduce all the colors we can see with our eyes, also called the "LAB" color space, symbolized by the horseshoe shape in the diagram below. The group of colors an average computer monitor can replicate is called the (additive) sRGB color space. The group of colors a printer can generate is called the (subtractive) CMYk color space. There are many types of CMYk, depending on the device. From the diagram you can see that certain colors are not visible on an average computer monitor but printable by a printer and vice versa. Higher-end digital cameras allow you to shoot in Adobe RGB (1998), which is larger than sRGB and CMYk. This will allow for prints with a wider range of colors. However, most monitors are only able to display colors within sRGB.
Compression
Image files can be compressed in two ways: lossless and lossy.
Lossless Compression
Lossless compression is similar to what WinZip does. For instance, if you compress a document into a ZIP file and later extract and open the document, the content will of course be identical to the original. No information is lost in the process. Only some processing time was required to compress and decompress the document. TIFF is an image format that can be compressed in a lossless way.
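A quick way to see lossless compression in action is Python's built-in zlib module, which implements DEFLATE, the same family of algorithms used by ZIP:

```python
import zlib

data = b"lossless compression keeps every byte " * 100

packed = zlib.compress(data)
assert len(packed) < len(data)            # smaller on disk
assert zlib.decompress(packed) == data    # byte-for-byte identical
```

The round trip recovers the original exactly; the only cost is the CPU time spent compressing and decompressing.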
Lossy Compression
Lossy compression reduces the image size by discarding information and is similar to summarizing a document. For example, you can summarize a 10-page document into a 9-page or 1-page document that represents the original, but you cannot recreate the original from the summary, as information was discarded during summarization. JPEG is an image format that is based on lossy compression.
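A toy illustration of the lossy principle (deliberately not real JPEG, which quantizes in the frequency domain): round sample values to coarser levels. The rounding makes the data more compressible, but it cannot be undone.

```python
# Toy lossy scheme: quantize 8-bit samples to 16 levels. Fewer distinct
# values compress better, but the exact originals are gone for good.
def quantize(samples, step=16):
    return [v // step * step for v in samples]

original = [0, 7, 18, 33, 200, 255]
lossy = quantize(original)
print(lossy)  # [0, 0, 16, 32, 192, 240]
assert lossy != original  # information was discarded
```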
Dynamic Range
The dynamic range of a sensor is defined as the largest possible signal divided by the smallest possible signal it can generate. The largest possible signal is directly proportional to the full well capacity of the pixel. The smallest signal is the noise level when the sensor is not exposed to any light, also called the "noise floor". Practically, cameras with a large dynamic range are able to capture shadow detail and highlight detail at the same time. Dynamic range should not be confused with tonal range.
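The definition translates directly into numbers. The sensor figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Dynamic range = largest signal / noise floor (illustrative numbers,
# not the specs of any real camera).
full_well = 60000.0    # electrons a pixel can hold before clipping
noise_floor = 15.0     # electrons of noise with no light at all

ratio = full_well / noise_floor    # 4000:1
stops = math.log2(ratio)           # photographic stops (EV)
db = 20 * math.log10(ratio)        # engineering convention

print(f"{ratio:.0f}:1 = {stops:.1f} stops = {db:.1f} dB")
```

Each factor of two in the ratio adds one stop of dynamic range, which is why larger full wells or lower noise floors pay off directly in usable shadow and highlight detail.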
Dynamic Range of an Image
When shooting in JPEG, the rather contrasty tonal curves applied by the camera may clip shadow and highlight detail which was present in the RAW data. RAW images preserve the dynamic range of the sensor and allow you to compress the dynamic range and tonal range by applying a proper tonal curve so that the whole dynamic range is represented on a monitor or print in a way that is pleasing to the eye. This is similar to the more extreme example in the tonal range topic which shows how the larger dynamic range and tonal range of a 32 bit floating point image were compressed.
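The idea of a tonal curve can be sketched as a simple gamma mapping from 12-bit linear RAW values to 8-bit output. This is illustrative only; real camera curves are more elaborate than a plain gamma function.

```python
# Sketch of a tonal curve: compress a 12-bit linear range (0..4095)
# into 8 bits (0..255) with a gamma curve that lifts the shadows.
def tone_map(raw, raw_max=4095, gamma=2.2):
    return round(255 * (raw / raw_max) ** (1 / gamma))

for raw in (10, 100, 1000, 4095):
    print(raw, "->", tone_map(raw))
```

A linear mapping would crush the value 10 down to 1; the curve instead spreads the deep shadows over more output levels, at the cost of compressing the highlights.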
Pixel Size and Dynamic Range
We learned earlier that a digital camera sensor has millions of pixels collecting photons during the exposure of the sensor. You could compare this process to millions of tiny buckets collecting rain water. The brighter the captured area, the more photons are collected. After the exposure, the level of each bucket is assigned a discrete value as is explained in the analog to digital conversion topic. Empty and full buckets are assigned values of "0" and "255" respectively, and represent pure black and pure white, as perceived by the sensor. The conceptual sensor below has only 16 pixels. Those pixels which capture the bright parts of the scene get filled up very quickly.
Once they are full, they overflow (this can also cause blooming). What flows over is lost, as indicated in red, and the values of these buckets all become 255, while they actually should have been different. In other words, detail is lost. This causes "clipped highlights" as explained in the histogram section. On the other hand, if you reduce the exposure time to prevent further highlight clipping, as we did in the above example, many of the pixels which correspond to the darker areas of the scene may not have had enough time to capture any photons and may still have the value zero. Hence the term "clipped shadows": all these values are zero, while in reality there might be minor differences between them.
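The bucket model can be simulated directly. The scene brightness values below are made up for illustration:

```python
# Toy exposure model: each "bucket" fills in proportion to scene
# brightness and exposure time, then is clipped to the 0..255 ADC range.
def expose(scene, time, capacity=255):
    # min() models the bucket overflowing at full well capacity;
    # int() truncates to a discrete level, so tiny signals read as 0.
    return [min(capacity, int(b * time)) for b in scene]

scene = [0.6, 1.0, 5.0, 40.0, 300.0]   # relative brightness per pixel

print(expose(scene, time=2))    # [1, 2, 10, 80, 255]
print(expose(scene, time=0.5))  # [0, 0, 2, 20, 150]
```

The long exposure keeps shadow detail but clips the brightest bucket to 255; the short exposure saves the highlight but drives two different shadow buckets to the same value of zero. Either way, detail is lost at one end of the range.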
One of the reasons that digital SLRs have a larger dynamic range is that their sensors have larger pixels. All things equal (in particular fill factor, "bucket" depth, and exposure time), pixels with a larger exposed surface can collect more photons in the shadow areas than small pixels during the exposure time that is needed to prevent the bright pixels from overflowing.
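A quick back-of-the-envelope comparison makes the point. The pixel pitches below are hypothetical, for illustration only:

```python
import math

# All else equal, photon capture scales with the exposed pixel area.
dslr_pitch, compact_pitch = 6.0, 1.5      # pixel pitch in micrometers

area_ratio = (dslr_pitch / compact_pitch) ** 2
extra_stops = math.log2(area_ratio)

print(area_ratio, extra_stops)  # 16.0 4.0
```

With 16 times the collecting area per pixel, the larger sensor gathers roughly 4 stops more signal in the shadows for the same exposure, before the bright pixels overflow.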
Some Dynamic Range Examples
The dynamic range of the camera was able to capture the dynamic range of the scene. The histogram indicates that both shadow and highlight detail is captured.
Here the dynamic range of the camera was smaller than the dynamic range of the scene. The histogram indicates that some shadow and highlight detail is lost.
The limited dynamic range of this camera was used to capture highlight detail at the expense of shadow detail. The short exposure needed to prevent the highlight buckets from overflowing gave some of the shadow buckets insufficient time to capture any photons.
The limited dynamic range of this camera was used to capture shadow detail at the expense of highlight detail. The long exposure needed by the shadow buckets to collect sufficient photons resulted in overflowing of some of the highlight buckets.
Here the dynamic range of the scene is smaller than the dynamic range of the camera, typical when shooting images from an airplane. The histogram can be stretched to cover the whole tonal range with a more contrasty image as a result, but posterization can occur.