On image histograms and code

I’m currently working on a long-term software project that I’m mostly trying to keep under my hat, though a couple of other photographers are aware of it. I’ve reached a stage where I think it would be beneficial to chew the cud with someone who understands very well the concepts of black and white points and how they relate to image histograms. On the surface this seems relatively simple, but I think the truth is far more complex.

I need to break through a couple of issues that I’m having. Firstly, I would like to discover exactly what the big players (Adobe, Nikon etc.) do to raw histogram data before rendering it as a plotted graph. I’ve come to the conclusion that they do not simply plot the data values when rendering a particular colour channel or composite RGB histogram. I’m thinking they first perform some data smoothing, or apply some other algorithm to reduce data spikes, and perhaps do other things besides. The histograms I produce are often the same as those produced by the big players, sometimes similar and sometimes quite different. Since mine are built directly from the raw image data, something else must be going on between gathering the data and drawing the plot.
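To make the idea concrete, here is a minimal sketch (in Python with NumPy) of the kind of spike reduction I have in mind: build the per-channel counts from the raw values, then run a small moving-average kernel over them before plotting. The bin count and kernel width here are my own guesses, not anything Adobe or Nikon have published.

```python
import numpy as np

def smoothed_histogram(channel, bins=256, kernel_width=5):
    """Build a histogram of one raw colour channel and smooth it.

    `bins` and `kernel_width` are guesses -- I have no idea what
    values, if any, the big players actually use.
    """
    # Raw counts per intensity bin; `channel` is a flat array of raw values.
    counts, _ = np.histogram(channel, bins=bins,
                             range=(channel.min(), channel.max()))

    # Simple moving-average kernel to knock down isolated spikes.
    kernel = np.ones(kernel_width) / kernel_width
    smoothed = np.convolve(counts, kernel, mode="same")

    return counts, smoothed
```

Plotting `counts` and `smoothed` side by side makes it easy to see how much of the difference between my plots and theirs could be down to smoothing alone.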

Secondly, I need to find out a little more about the actual meaning of black and white points in relation to raw histogram or image data. I am assuming that the concept of black and white points relates directly to the luminosity histogram, and I would just like to have this confirmed.
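Just so the assumption is clear, this is roughly how I would derive candidate black and white points from a luminosity histogram, by clipping a small fraction of pixels at each end. The Rec. 709 luma weights and the 0.1% clip fraction are my own choices for illustration; whether this is what the big players actually do is exactly what I want to confirm.

```python
import numpy as np

def black_white_points(rgb, clip_fraction=0.001):
    """Estimate black and white points from a luminosity histogram.

    `rgb` is an (H, W, 3) array of pixel values. The Rec. 709 luma
    weights and the 0.1% clip fraction are assumptions on my part.
    """
    # Luminosity per pixel (Rec. 709 weights).
    luma = rgb[..., 0] * 0.2126 + rgb[..., 1] * 0.7152 + rgb[..., 2] * 0.0722

    # Cumulative distribution of the luminosity histogram.
    counts, edges = np.histogram(luma, bins=256,
                                 range=(luma.min(), luma.max()))
    cumulative = np.cumsum(counts) / counts.sum()

    # Black point: first bin where the cumulative fraction exceeds the
    # clip fraction. White point: first bin where it reaches 1 - clip.
    black = edges[np.searchsorted(cumulative, clip_fraction)]
    white = edges[np.searchsorted(cumulative, 1.0 - clip_fraction) + 1]

    return black, white
```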

If any readers can help and are willing to chat, either over the phone or online, I would be very grateful.

UPDATE: Well, the raw-vs-smoothed histogram issue seems to be solved, or at least significantly improved.
