Wed, 24 Feb 2010
It is not very often that a software bug shows up in nearly all implementations, even ones that share no common ancestor in terms of source code.
The image scaling bug is one of these exceptions. I wonder how many programs simply assume that the luminosity of a pixel created by combining two pixels with luminosities of 0 and 255 (e.g. when downscaling an image) is somewhere around 128.
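A minimal sketch of why that assumption fails, assuming the pixel values are sRGB-encoded (the decode/encode formulas below are the standard sRGB ones). Averaging the raw 8-bit values gives about 128, but averaging in linear light and re-encoding gives a noticeably brighter result:

```python
# Averaging two sRGB-encoded pixel values: the naive way vs. in linear light.
# The sRGB transfer functions below are the standard ones (IEC 61966-2-1).

def srgb_to_linear(v):
    """Decode an 8-bit sRGB value to linear light in [0, 1]."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    """Encode linear light in [0, 1] back to an 8-bit sRGB value."""
    c = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255.0)

naive = (0 + 255) // 2  # averaging the encoded bytes -- the common assumption
correct = linear_to_srgb((srgb_to_linear(0) + srgb_to_linear(255)) / 2)

print(naive)    # 127
print(correct)  # 188
```

The gap (127 vs. 188) is the bug: most scalers average the gamma-compressed bytes directly, which systematically darkens high-contrast detail.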
There are definitely several programs written by yours truly that are built around this assumption. Although I do remember reading the NetPBM source code, seeing those odd calculations using a lookup table, and wondering why they did not simply use the arithmetic mean.
I even think (but my memory is fading, so no strong statement here) that we used the arithmetic mean in the computer graphics course during my studies, too.
1 reply for this story:
I'm not sure what luminosity means here. There are several terms used to measure "brightness" (in the loose sense of the word); some of them refer to gamma-compressed values, some to linear ones, and some are used ambiguously. Anyway, in computer graphics classes, brightness is usually represented as a (non-compressed) real number, so the arithmetic mean is correct there.