Wed, 24 Feb 2010
It is not often that a software bug shows up in nearly every implementation, even in ones that share no common ancestor in terms of source code.
The image scaling bug is one of these exceptions. I wonder how many programs simply assume that the luminosity of a pixel created by combining two pixels with luminosities 0 and 255 (e.g. when downscaling an image) is somewhere around 128. In reality, pixel values in typical image files are gamma-encoded, so the correct average in terms of actual light is noticeably brighter than that.
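To make the point concrete, here is a minimal sketch of my own (not taken from any particular program) that averages two 8-bit pixel values the gamma-correct way, assuming the values are sRGB-encoded: decode to linear light, average there, and encode back.

```python
def srgb_to_linear(v):
    """Decode an 8-bit sRGB value to linear light in [0, 1]."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode linear light in [0, 1] back to an 8-bit sRGB value."""
    c = 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055
    return round(c * 255.0)

def average_pixels(a, b):
    """Gamma-correct average of two 8-bit sRGB pixel values."""
    return linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)

naive = round((0 + 255) / 2)      # 128 -- the common (wrong) assumption
correct = average_pixels(0, 255)  # about 188 -- noticeably brighter
```

The naive arithmetic mean of the encoded bytes gives 128, while averaging in linear light gives a value around 188, which is why images downscaled by naive code come out darker than they should.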
There are definitely several programs written by yours truly that are built around this assumption. Although I do remember reading the NetPBM source code, seeing those odd calculations using a lookup table, and wondering why they did not simply use the arithmetic mean.
I even think (though my memory is fading, so no strong statement here) that we used the arithmetic mean in the computer graphics course during my studies.