Redundancy in digital images refers to unnecessary or repetitive information, such as duplicated patterns, colors, or textures that don’t contribute significantly to the image’s meaning or quality. Detecting and reducing redundancy is vital for optimizing file sizes, transferring data efficiently, and improving the user experience, especially on the web, where fast loading times are essential.
Redundancy can occur in various forms. Let’s go through these types in detail.
Coding redundancy relates to how information is expressed through the codes representing data, such as the gray levels within an image. When these codes use more bits than necessary to represent each gray level, coding redundancy occurs.
In fixed-length encoding, we assign the same number of bits to represent each shade. For instance, let’s use 3 bits for simplicity. If we have four shades of gray, the codes might look like this:
Shade 1: 000
Shade 2: 001
Shade 3: 010
Shade 4: 011
Even though we only need 2 bits to represent these four shades, we use 3 bits for every shade, which creates coding redundancy.
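As a quick sketch (the shade values and the tiny example image below are made up purely for illustration), the following Python snippet encodes a handful of pixels with the fixed 3-bit code and compares that against the minimum number of bits actually required:

```python
import math

# Four hypothetical shades of gray, encoded with a fixed 3-bit code
fixed_codes = {1: "000", 2: "001", 3: "010", 4: "011"}

# A small made-up "image" given as a list of shade values
pixels = [1, 1, 2, 1, 3, 1, 1, 4, 2, 1]

# Bits actually spent with fixed-length encoding
fixed_bits = sum(len(fixed_codes[p]) for p in pixels)

# Minimum bits per pixel needed to distinguish four shades
needed_bits_per_pixel = math.ceil(math.log2(len(fixed_codes)))

print(f"Fixed-length encoding uses {fixed_bits} bits")            # 30 bits
print(f"Only {needed_bits_per_pixel * len(pixels)} bits needed")  # 20 bits
```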
In variable-length encoding, such as Huffman coding, we assign shorter codes to more frequent shades. For example:
Shade 1: 0
Shade 2: 10
Shade 3: 110
Shade 4: 111
Here, the more common shades get shorter codes, reducing the number of bits used and minimizing coding redundancy.
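Using the same hypothetical pixel values as in the sketch above, the snippet below shows how these variable-length codes shrink the total bit count when shade 1 dominates the image:

```python
# Variable-length (prefix) codes: frequent shades get shorter codes
variable_codes = {1: "0", 2: "10", 3: "110", 4: "111"}

# The same made-up pixel list as before; shade 1 is the most frequent
pixels = [1, 1, 2, 1, 3, 1, 1, 4, 2, 1]

variable_bits = sum(len(variable_codes[p]) for p in pixels)
print(f"Variable-length encoding uses {variable_bits} bits")  # 16 bits vs. 30 with the fixed 3-bit code
```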
Interpixel redundancy occurs when a pixel’s value can be predicted by examining its neighboring pixels. In other words, individual pixels carry little information of their own: a large part of a pixel’s contribution to an image is redundant because its value can often be inferred accurately from its neighbors.
We can further divide interpixel redundancy into two subtypes.
Spatial redundancy, also known as intraframe redundancy, involves repeating similar or identical pixel values within a single image. This redundancy occurs when neighboring pixels or regions contain redundant information, such as uniform backgrounds or patterns.
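One common way to exploit spatial redundancy is predictive (differential) coding. The minimal sketch below, using a made-up row of pixel values from a mostly smooth region, predicts each pixel from its left neighbor and stores only the differences, which cluster near zero:

```python
# A made-up row of pixels from a smooth region: neighbors are highly correlated
row = [120, 121, 121, 122, 124, 124, 125, 200, 201, 201]

# Predict each pixel from its left neighbor and keep only the difference
differences = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]
print(differences)  # [120, 1, 0, 1, 2, 0, 1, 75, 1, 0]

# The original row can be reconstructed exactly from the differences
reconstructed = []
for d in differences:
    reconstructed.append(d if not reconstructed else reconstructed[-1] + d)
assert reconstructed == row
```

Because most of the differences are small, they can be stored with far fewer bits than the original 8-bit intensity values.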
Temporal redundancy, or interframe redundancy, is most relevant to video and other dynamic imagery and refers to redundant information across a series of similar images. For example, consecutive frames in a video usually contain very similar content.
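A common way to exploit temporal redundancy is frame differencing. The sketch below uses two tiny made-up 3×3 frames in which only one pixel changes, so the second frame can be stored as a difference that is mostly zeros:

```python
# Two consecutive 3x3 "frames" of a video; only the center pixel changes
frame1 = [[10, 10, 10],
          [10, 50, 10],
          [10, 10, 10]]
frame2 = [[10, 10, 10],
          [10, 55, 10],
          [10, 10, 10]]

# Store frame2 as its difference from frame1: mostly zeros
difference = [[frame2[r][c] - frame1[r][c] for c in range(3)] for r in range(3)]
print(difference)  # [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
```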
Psychovisual redundancy refers to information in an image that does not contribute meaningfully to its intended message or perceived quality. This type of redundancy exists because human perception does not involve a quantitative analysis of every pixel in the image. It encompasses elements like artifacts, noise, or details that distract from the image’s primary content and may be removed or minimized to improve visual clarity and aesthetics.
For example, the human eye is more sensitive to differences between dark intensities than between bright ones, and to differences between shades of green than between shades of red or blue. This varying sensitivity of the human eye to colors and their intensities gives rise to psychovisual redundancy.
The eye does not register the exact intensity value of every pixel in an image. See if you can tell the difference between Example 1.1 and Example 1.2.
Note: The examples may have more than one type of redundancy.
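One common way to exploit psychovisual redundancy is quantization. The sketch below, using arbitrary made-up pixel values, drops the two least significant bits of each 8-bit intensity, a change that is usually hard to perceive:

```python
# 8-bit intensities from an image region (made-up values)
pixels = [201, 202, 203, 198, 197, 200, 199, 204]

# Quantize by discarding the 2 least significant bits (256 -> 64 levels)
quantized = [(p >> 2) << 2 for p in pixels]
print(quantized)  # [200, 200, 200, 196, 196, 200, 196, 204]
```

The visual difference is typically imperceptible, but each pixel now needs only 6 bits instead of 8.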
Data compression is achieved when one or more of these redundancies is reduced or eliminated. We can check how much redundancy was present in the original image compared with the compressed image by using the following ratio:
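With $n_1$ denoting the size of the original image and $n_2$ the size of the compressed image (in bits or bytes; the symbols here are just a convention), the compression ratio is:

$$
C_R = \frac{n_1}{n_2}
$$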
Let's assume the original image is 256 × 256 pixels, where each pixel is represented by 8 bits. After compression, the image has a size of 12,248 bytes.
To find out how much redundancy exists in the original image, let's apply the compression ratio.
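Plugging in the numbers, the original image occupies $256 \times 256 \times 8$ bits, which is 65,536 bytes, so:

$$
C_R = \frac{65{,}536 \text{ bytes}}{12{,}248 \text{ bytes}} \approx 5.35
$$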
This means that the original image uses more than 5 bytes for every 1 byte in the compressed image.
Recognizing redundancy in digital images is essential for data compression techniques, which allow for smaller file sizes and reduced noise. In medical imaging and video codecs, redundancy reduction is especially important for precise diagnostics and smooth video streaming. It also helps computer vision and AI models process images efficiently, and in satellite imagery and remote sensing, eliminating redundancy enhances data accuracy. Overall, the ability to identify and manage redundancy in digital images significantly impacts efficiency, cost-effectiveness, and data quality, providing better visual data for improved decision-making.