I agree with you, but in the load function documentation I can read this:
x,y = load('test.dat') # data in two columns
so if I interpret this correctly, that means a data file like:
1 2
and not a file like:
1 1 1 1 1
2 2 2 2 2
That's why I suggested adding the transpose inside the load function.
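To illustrate the point with a sketch (using a NumPy array as a stand-in for what load would return from a hypothetical 'test.dat' with many rows): unpacking the loaded array directly goes row by row, while transposing first gives one variable per column, which is what `x,y = load(...)` suggests.

```python
import numpy as np

# Stand-in for the contents of a hypothetical 'test.dat':
# five rows, two columns.
data = np.array([[1, 2],
                 [3, 4],
                 [5, 6],
                 [7, 8],
                 [9, 10]])

# Unpacking 'data' directly would try to unpack five rows into two
# names and fail.  Transposing first unpacks by column instead:
x, y = data.T
print(x)  # the first column
print(y)  # the second column
```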
For the second thing (the columns): your example shows that you can read two consecutive columns, but if you want to read columns 3, 5, 7 and 8, that can't work with a slice. Since it would only be an optional argument, I was thinking it would be possible to add this possibility to the load function inside pylab.
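A sketch of what such a column selection could look like: a slice can only take contiguous columns, but fancy indexing with a list of integers selects arbitrary ones (the array here is just a stand-in for a loaded data file).

```python
import numpy as np

# A small array standing in for a data file with 4 rows and 10 columns.
data = np.arange(40).reshape(4, 10)

# A slice like data[:, 3:9] can only take a contiguous range, but an
# integer list selects arbitrary columns, e.g. 3, 5, 7 and 8 (zero-based):
subset = data[:, [3, 5, 7, 8]]
print(subset.shape)  # 4 rows, 4 selected columns
```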
As for the comments about zero-based array indexing: I know about it, but I don't like it (it's just a matter of taste), and I prefer to name/count the columns in a data file from 1. So instead of telling someone "use column zero" I'd rather say "use the first column". It's only a matter of choice, but that's why I added: array(columns)-1.
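The array(columns)-1 trick can be sketched like this: the caller names columns starting from 1, and subtracting 1 converts that to Python's zero-based indexing before selecting.

```python
import numpy as np

# Stand-in data: 3 rows, 4 columns.
data = np.arange(12).reshape(3, 4)

# Columns counted from 1, as in the text: "the first and third columns".
columns = [1, 3]

# Subtract 1 to convert the one-based names to zero-based indices:
subset = data[:, np.array(columns) - 1]
print(subset)
```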
Nicolas
Stephen Walton wrote:
I don't think the load function needs to be changed in the way you suggest. The "problem" is not the load function; it is the fact that numarray arrays are stored in row-major, not column-major, order, so tuple-unpacking a numarray array goes by row, not by column.
In [15]: A=arange(6)
In [16]: A.shape=(3,2)
In [17]: x,y=A
---------------------------------------------------------------------------
exceptions.ValueError                     Traceback (most recent call last)
ValueError: too many values to unpack
The transpose is required here as well if you want to unpack by columns. If I have a file containing 731 rows and 17 columns and use 'load', I get an array with 731 rows and 17 columns, exactly as I expect.
I'm far from a Python expert myself, but you can do what you're trying with the single line
x,y=transpose(load('toto.dat')[:,1:3])
(note that array indexing in Python is zero-based, not one-based; also read up on how slices work).
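As a sketch of how that one-liner behaves (using an array as a stand-in for load('toto.dat')): the slice [:, 1:3] keeps all rows and columns 1 and 2, since the slice end is exclusive, i.e. the second and third columns in one-based counting; the transpose then lets the result unpack by column.

```python
import numpy as np

# Stand-in for load('toto.dat'): 5 rows, 4 columns.
data = np.arange(20).reshape(5, 4)

# [:, 1:3] selects all rows and columns 1 and 2 (end index exclusive);
# transposing lets the two selected columns unpack into x and y.
x, y = np.transpose(data[:, 1:3])
print(x)  # the second column of the file
print(y)  # the third column of the file
```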