Visually it will be hardly noticeable in most cases. However, I'd expect the histogram of normalized intensity data to be the same as the histogram of a linear grayscale image of that data (neglecting gamma correction and image scaling/interpolation for now). Consider this code, for example:
import numpy as np

a = np.random.rand(1024 * 1024)
a[0], a[-1] = 0.0, 1.0  # make sure the full [0, 1] range is present
h0 = np.histogram(a, bins=256, range=(0, 1))[0]
h1 = np.bincount(np.uint8(a * 255), minlength=256)
h2 = np.bincount(np.uint8(a * 255.9999999999999), minlength=256)
print(h0 - h1)  # nonzero: the 255 factor misaligns the bins
print(h0 - h2)  # should be all zeros
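With the 255 factor, the truncated value np.uint8(a * 255) only reaches 255 when a is exactly 1.0, so I'd expect h0 - h1 to show a large discrepancy in the last bin, plus small systematic offsets elsewhere because the truncated bins have width 1/255 instead of 1/256. With the factor just below 256, the truncated bins coincide with the histogram bins and h0 - h2 should be all zeros.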
On 9/18/2011 2:30 PM, Eric Firing wrote:
On 09/18/2011 09:30 AM, Christoph Gohlke wrote:
matplotlib uses int(x*255) or np.array(x*255, np.uint8) to quantize
normalized floating point numbers x in the range [0.0, 1.0] to
integers in the range [0, 255]. That way only 1.0 is mapped to 255;
0.999, for example, is not. Is this really intended, or wouldn't the
largest floating point number below 256.0 be a better scale factor
than 255? The exact factor depends on the floating point precision
(~255.999992 for np.float32, ~255.93 for np.float16).
It's a reasonable question, but do you have use cases in mind where it
actually makes a difference?
The simple scaling with truncation is used in many places, both in the
Python and the C++ code.
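To illustrate the scale factor discussed above: np.nextafter gives the largest representable value below 256.0 for each precision, and a quick check shows how the two factors treat a value just below 1.0. A minimal sketch (the value 0.999999 is just an example; the exact usable factor would also depend on the precision in which the scaling multiplication is actually carried out):

import numpy as np

# largest representable float just below 256.0, per precision
for t in (np.float64, np.float32, np.float16):
    print(t.__name__, np.nextafter(t(256.0), t(0.0)))

# with the 255 factor a value just below 1.0 lands in bin 254;
# with a factor just below 256.0 it lands in bin 255
x = 0.999999
print(int(x * 255))                       # 254
print(int(x * np.nextafter(256.0, 0.0)))  # 255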