Back when I was a boy I was obsessed with telescopes, astronomy, and space. I saved my money and bought a three-inch Newtonian reflector telescope from Edmund Scientific[1] and subscribed to “Sky and Telescope” magazine. In 1979 the Voyager 1 probe reached Jupiter and I was enthralled with the color pictures produced by the Jet Propulsion Laboratory. I understood that they were created from multiple monochrome images taken with different color filters and I wanted to do that myself.

Computers were not common back then. The high school I went to in the late 1970s had an HP time-share BASIC computer that replaced my telescope in my affections.[2] There was no way to do real image processing on it, but I wrote some simulations, one of which I’m about to describe.

Fast-forward to 1993. The raw Voyager imagery was released on those new-fangled optical CD disks so I bought a set and an external CD-ROM drive to attach to my Dell NL25 notebook computer.[3] The JPL kindly sent me the Voyager camera and filter specifications and I finally wrote a program to create my own color composite images. One of them is this article’s featured image.

Yes, image processing is one of my hobbies but I swear I can give it up any time I want.

Recently I was doing some research on data compression when I realized that it could be used to enhance image contrast in a specific “optimal” fashion. This is actually something I’d been thinking about for some time. I implemented it and during testing made a discovery that prompted me to write this article.

The classic image contrast enhancement algorithm is called Histogram Equalization:

https://en.wikipedia.org/wiki/Histogram_equalization

Once you scroll past the math the Wikipedia article has some example images. Here’s how it works in my own words:

OCULAR HEALTH WARNING! If your eyes glaze over at computer algorithm descriptions then just skip down to the pretty pictures.

Start with a monochrome image. Each pixel has a value ranging from zero for black to 255 for white. These are “grayscale levels”. The first step is to create a census, or histogram, of each grayscale level. If the image is a tiny 3×3 like this:

118 120 119
120 122 121
121 120 119

Then the census array isn’t going to use the full zero to 255 grayscale range because the grayscale levels are only from 118 to 122. The census is:

[0] = 0
...
[117] = 0
[118] = 1
[119] = 2
[120] = 3
[121] = 2
[122] = 1
[123] = 0
...
[255] = 0
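In code, the census is just a 256-entry array incremented once per pixel. Here’s a minimal Python sketch of this step, using the 3×3 example image above:

```python
# Build a grayscale census (histogram) for the tiny 3x3 example image.
image = [
    [118, 120, 119],
    [120, 122, 121],
    [121, 120, 119],
]

census = [0] * 256  # one bin per grayscale level, 0..255
for row in image:
    for pixel in row:
        census[pixel] += 1

# Only levels 118..122 are populated, matching the table above.
print(census[118:123])  # -> [1, 2, 3, 2, 1]
```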

The next step is to create a cumulative census array by adding the census values in sequence. Census values [0] through [117] are zero so the cumulative sums for those values are zero. The first non-zero cumulative value is [118], which is one. The next is [119]: its census value of two plus the previous cumulative value of one, or three. The next is [120]: its census value of three plus the previous cumulative value of three, or six. The complete cumulative array is:

[0] = 0
...
[117] = 0
[118] = 1
[119] = 3
[120] = 6
[121] = 8
[122] = 9
[123] = 9
...
[255] = 9

Because census values [123] through [255] are zero the cumulative census values for those grayscale values remain nine.
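The cumulative census is a running sum over the bins, which in Python is one call to itertools.accumulate (or an equivalent loop):

```python
from itertools import accumulate

# Census for the 3x3 example: only levels 118..122 are non-zero.
census = [0] * 256
for level, count in [(118, 1), (119, 2), (120, 3), (121, 2), (122, 1)]:
    census[level] = count

# A running sum turns the census into the cumulative census.
cumulative = list(accumulate(census))

print(cumulative[118:123])  # -> [1, 3, 6, 8, 9]
print(cumulative[255])      # -> 9 (stays at nine past level 122)
```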

And now for some math.[4] The cumulative range is the cumulative value of [0] to the cumulative value of [255] which is zero to nine. We need to transform this to the grayscale range of zero to 255. This is accomplished by multiplying each cumulative value by 255/9 or 28.33.[5] Since we’re using integers no decimal places are allowed so the resulting values are rounded to the nearest integer. The resulting transformed cumulative array is:

[0] = 0
...
[117] = 0
[118] = 28
[119] = 85
[120] = 170
[121] = 227
[122] = 255
[123] = 255
...
[255] = 255

The image pixel values are then changed by replacing them with the transformed cumulative value. Pixel values of 118 are replaced with [118] or 28. Pixel values of 119 are replaced with [119] or 85, etc. The resulting 3×3 image is:

 28 170  85
170 255 227
227 170  85

The original image has almost no contrast. The grayscale values are all very close to each other. The enhanced image has a great deal of contrast with the histogram equalized grayscale levels going from a very dark 28 to the brightest white of 255.
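Putting the steps together, here’s a sketch of the whole pass as described above: build the census, accumulate it, scale each cumulative value by 255 divided by the total pixel count, round, and remap every pixel through the resulting table.

```python
from itertools import accumulate

def equalize(image):
    """Histogram-equalize a grayscale image (a list of rows of 0..255 values)."""
    # Step 1: census (histogram) of grayscale levels.
    census = [0] * 256
    for row in image:
        for pixel in row:
            census[pixel] += 1

    # Step 2: cumulative census.
    cumulative = list(accumulate(census))
    total = cumulative[255]  # total pixel count; nine for the 3x3 example

    # Step 3: scale the cumulative range 0..total up to the grayscale
    # range 0..255, rounding to the nearest integer.
    table = [round(c * 255 / total) for c in cumulative]

    # Step 4: replace each pixel with its transformed cumulative value.
    return [[table[pixel] for pixel in row] for row in image]

result = equalize([
    [118, 120, 119],
    [120, 122, 121],
    [121, 120, 119],
])
for row in result:
    print(row)
# -> [28, 170, 85]
#    [170, 255, 227]
#    [227, 170, 85]
```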

This is the sort of thing I implemented on my high school’s computer, printing the results as ASCII art:

https://en.wikipedia.org/wiki/ASCII_art

Most color images are represented as three “monochrome” images for the red, green, and blue primary colors. This is called RGB. To use Histogram Equalization on an RGB image it’s first necessary to transform the RGB planes into an alternative set of components. One component is the image’s brightness or grayscale and the other components encode the color. One such scheme is called YUV where the Y plane is brightness and the U and V planes represent the color. This is the scheme used by old-fashioned color TVs.

To enhance the contrast of a color image, the RGB planes are converted to YUV, the Y plane is histogram equalized into something I’ll call Y’, and the Y’UV planes are reconverted to RGB.
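To make the round trip concrete, here’s a hedged sketch of the per-pixel conversions using the BT.601 coefficients (the set old analog color TV used; other coefficient sets exist). To equalize a color image you would convert every pixel to YUV, run the grayscale algorithm on the Y plane, then convert back and clamp the results to 0..255.

```python
def rgb_to_yuv(r, g, b):
    # BT.601 luma; U and V carry the color-difference signals.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    # Invert the definitions above: V encodes (R - Y), U encodes (B - Y).
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Round trip: a pixel converted to YUV and back comes out unchanged
# (up to floating-point error).
print(yuv_to_rgb(*rgb_to_yuv(100, 150, 200)))
```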

In my opinion Histogram Equalization produces harsh results. The contrast enhancement is often too extreme. But the algorithm is so simple that there’s no obvious way to tune it. I’d been pondering this as a mental background process for some time when some other research made me realize that I’d invented something I call “Histogram Optimization”.[6] I’m not going to describe it in detail (that would take too long), but once a grayscale census, or histogram, is calculated there’s a way to combine the “bins”, as I call them, to create a new histogram that encodes the original histogram with minimum error.

Here’s a common monochrome test image and the same image processed with Histogram Equalization and Histogram Optimization:

Original

Histogram Equalization

Histogram Optimization

 

Here’s a common color test image:

Original

Histogram Equalization

Histogram Optimization

 

I’m quite pleased with my new algorithm but it’s not revolutionary. There are far better image contrast enhancement algorithms such as those that use Retinex theory:

https://ieeexplore.ieee.org/document/8500743

So why was I inspired to write a Glibs article? I was searching my computer’s hard disk drive for test images when I encountered one I couldn’t resist. I optimized its histogram and was surprised at the result which I reproduce here:

Original

Histogram Optimization

 

There’s practically no difference so I am forced to conclude that Lobster Girl is already optimal.

Footnotes:

[1] Remember them? They’re still in business as Edmund Optics but they don’t sell consumer/educational stuff anymore.

[2] Girls? What are girls? It was a few years after graduation that I belatedly realized there were girls at my high school.[7]

[3] 25MHz 386SL (low power SX) CPU with the optional 387SL FPU and 8MB of RAM! Power!

[4] I never said there would be no math.

[5] That wasn’t so bad, was it?

[6] I doubt that my new algorithm is original but I haven’t found references to anything similar, not that I’ve looked very hard.

[7] And then I was in an engineering school with an 8:1 male:female ratio.