Thanks a lot for your advice. Please understand that it has been about
20 years since I left university, so it is not always easy to get
things about maps etc. back into my head.
I did study some of your example code,
and I think I am able to follow it and understand how you are using it.
Generally it looks to me like you:
a) read the pre-prepared (sorted) scalar field (latitudes, longitudes, values)
b) store it in the three numarray arrays (topoin, lons, lats)
c) transform the coordinates (lons, lats) to the chosen projection (to
the native map projection grid) and interpolate the data values (topoin)
to the transformed coordinates
d) assign colors to the interpolated "data values" using a color palette
e) plot the "new" transformed, colored scalar field over the map
(please correct me if I am wrong).
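To check my own understanding, I tried to write steps a)-e) down as code. This is only a rough sketch: the sample field is invented, a simple spherical transverse Mercator formula stands in for the real projection setup, and I use scipy's griddata for the interpolation step:

```python
import numpy as np
from scipy.interpolate import griddata
import matplotlib
matplotlib.use("Agg")          # headless backend, just to demonstrate the plot call
import matplotlib.pyplot as plt

R = 6371000.0                  # mean Earth radius [m]; spherical approximation
LON0 = np.radians(15.0)        # made-up central meridian for the TM projection

def tmerc(lon_deg, lat_deg):
    """Spherical transverse Mercator: (lon, lat) in degrees -> (x, y) in metres."""
    lam = np.radians(lon_deg) - LON0
    phi = np.radians(lat_deg)
    b = np.cos(phi) * np.sin(lam)
    x = 0.5 * R * np.log((1.0 + b) / (1.0 - b))
    y = R * np.arctan2(np.tan(phi), np.cos(lam))
    return x, y

# a) + b) pre-prepared (sorted) scalar field on a regular lon/lat grid
lons = np.linspace(12.0, 18.0, 61)
lats = np.linspace(48.0, 52.0, 41)
lon2d, lat2d = np.meshgrid(lons, lats)
topoin = np.sin(np.radians(4 * lon2d)) * np.cos(np.radians(3 * lat2d))  # fake values

# c) transform the grid coordinates to the projection ...
x, y = tmerc(lon2d, lat2d)

# ... and interpolate the values onto a regular grid in projected space
nx, ny = 100, 80
XI, YI = np.meshgrid(np.linspace(x.min(), x.max(), nx),
                     np.linspace(y.min(), y.max(), ny))
pts = np.column_stack([x.ravel(), y.ravel()])
grid = griddata(pts, topoin.ravel(), (XI, YI), method="nearest")

# d) + e) the colormap assigns the colors, and imshow draws the field;
# with basemap this would be the map instance's own imshow over the axes
plt.imshow(grid, origin="lower", cmap="jet",
           extent=(XI.min(), XI.max(), YI.min(), YI.max()))
```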
Now I am reading your first posting again:
However, you may be able to do it by importing your image using PIL,
converting it to a Numeric array and then plotting it over the map
projection using imshow. To see how to convert an image to and from
a Numeric array see http://effbot.org/zone/pil-numpy.htm
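If I understand that page correctly, with a numpy-style array package the conversion boils down to a single asarray call. A tiny sketch (I build a dummy image in memory here; in reality it would be Image.open on the scanned map file):

```python
import numpy as np
from PIL import Image

# Stand-in for Image.open("scanned_map.png") -- the filename is made up;
# a solid-colour 40x30 RGB image is enough to show the conversion.
img = Image.new("RGB", (40, 30), color=(10, 120, 200))

# Image -> array: note the axis order, (height, width, channels)
arr = np.asarray(img)
print(arr.shape, arr.dtype)    # (30, 40, 3) uint8

# This array could then be handed to imshow over the map projection.
```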
I am trying to figure out how to convert the image (a scanned map) to a Numeric array.
My map is in the Transverse Mercator projection (this is my intended
target projection as well), and it has WGS84 coordinates (datum) printed on it.
I just can't figure out how to identify the exact pixels where (at
least) three WGS84 coordinate intersection points are located. I think
I need such an identification so that I will be able to assign
position information (coordinates) to each image pixel during
the conversion of the image to the Numeric array. Maybe there is some
general function to "calibrate" the picture (its pixels) against the coordinates.
Any ideas? Or is my approach completely wrong?
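To make my calibration question concrete: if I could read off three such intersection pixels by hand, would something like the following affine fit be the "general function" I am looking for? (All pixel and map coordinates below are made-up placeholders.)

```python
import numpy as np

# Three ground control points: pixel (col, row) -> projected map (x, y).
pixels = np.array([[ 100.0,  200.0],
                   [1800.0,  250.0],
                   [ 950.0, 1400.0]])
coords = np.array([[445000.0, 5543000.0],
                   [462000.0, 5542500.0],
                   [453500.0, 5531000.0]])

# Affine model: [x, y] = M @ [col, row] + t -- six unknowns, so three
# non-collinear control points determine it exactly.
A = np.hstack([pixels, np.ones((3, 1))])             # [col, row, 1] design matrix
params, *_ = np.linalg.lstsq(A, coords, rcond=None)  # exact solution for 3 points

def pixel_to_map(col, row):
    """Map any pixel of the scanned image to projected coordinates."""
    return np.array([col, row, 1.0]) @ params

# Sanity check: every control point must map back to its known coordinate.
for p, c in zip(pixels, coords):
    assert np.allclose(pixel_to_map(*p), c)
```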
Thanks and regards