load function

I agree with you, but in the load function documentation I can read this:

x,y = load('test.dat') # data in two columns

So if I interpret this correctly, that means a data file like:

1 2

and not a file like:

1 1 1 1 1
2 2 2 2 2

That's why I suggest adding the transpose to the load function.

For the second thing (the columns): your example shows that you can read two consecutive columns, but if you want to read columns 3, 5, 7 and 8, that won't work. Since it would only be an optional argument, I was thinking it would be possible to add this possibility to the load function inside pylab.
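
Something like this is what I had in mind (only a sketch: the load_columns name is made up, and numpy.loadtxt stands in here for pylab's load):

import numpy as np

def load_columns(fname, columns=None):
    """Load a whitespace-delimited file and return its columns, ready for
    tuple unpacking (x, y, ... = load_columns(...))."""
    data = np.loadtxt(fname)        # rows x columns, like pylab's load
    if columns is not None:
        data = data[:, columns]     # keep only the requested columns, e.g. [2, 4, 6, 7]
    return np.transpose(data)       # transpose so unpacking goes by column

# hypothetical usage, reading columns 3, 5, 7 and 8 (zero-based: 2, 4, 6, 7):
# a, b, c, d = load_columns('test.dat', columns=[2, 4, 6, 7])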

As for the comment about where array indexing begins, I know it :-) but I hate it (it's just a matter of taste :-) ). I prefer to name/count the columns in a data file from 1, so instead of telling someone "use column zero" I prefer to say "use the first column". It's only a matter of choice, but it's why I added: array(columns)-1.
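
For instance (just an illustration of the array(columns)-1 trick, with made-up names):

from numpy import array

columns = [3, 5, 7, 8]          # columns the way a person counts them, starting from 1
indices = array(columns) - 1    # array([2, 4, 6, 7]), what the array indexing actually needs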

Nicolas

Stephen Walton wrote:


I don't think the load function needs to be changed in the way you suggest. The "problem" is not the load function. It is the fact that numarray arrays are stored in row-major, not column-major, order, and so tuple unpacking a numarray array goes by row, not by column.

In [15]: A=arange(6)

In [16]: A.shape=(3,2)

In [17]: x,y=A
---------------------------------------------------------------------------

exceptions.ValueError Traceback (most recent call last)

ValueError: too many values to unpack

The transpose is required here as well if you want to unpack by columns. If I have a file containing 731 rows and 17 columns and use 'load', I get an array with 731 rows and 17 columns, exactly as I expect.
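
For example (numpy is used in this sketch; numarray should behave the same way for these calls):

from numpy import arange, transpose

A = arange(6)
A.shape = (3, 2)       # 3 rows, 2 columns

x, y = transpose(A)    # unpacks by column: x is [0, 2, 4], y is [1, 3, 5]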

I'm far from a Python expert myself ;-), but you can do what you're trying to do with the single line

x,y=transpose(load('toto.dat')[:,1:3])

(note that array indexing in Python is zero-based, not one-based, and also read up on how slices work).
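
If the columns you are after are not consecutive, a slice will not do, but a list of indices works; for instance (a sketch with a small made-up array standing in for load('toto.dat')):

import numpy as np

data = np.arange(40.0).reshape(5, 8)               # pretend this came from load('toto.dat')
a, b, c, d = np.transpose(data[:, [2, 4, 6, 7]])   # columns 3, 5, 7 and 8, counting from one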

Humufr wrote:

I agree with you, but in the load function documentation I can read this:

x,y = load('test.dat') # data in two columns

The documentation for load is correct. Consider

A=load('test.dat')

If 'test.dat' has 17 rows and 2 columns, A.shape will be (17,2), "print A" will print an array with 17 rows and 2 columns, and so on. But

x,y=A

will not work, because tuple unpacking of numarray arrays goes by rows, not by columns.
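
A small self-contained check of that (numpy used here in place of numarray):

from numpy import array, transpose

A = array([[1, 2],
           [3, 4],
           [5, 6]])        # shape (3, 2): three rows, two columns, like a small test.dat

r0, r1, r2 = A             # unpacking walks the rows: r0 is [1, 2], r1 is [3, 4], ...
x, y = transpose(A)        # transposing first gives the columns: x is [1, 3, 5]
# x, y = A                 # would raise "ValueError: too many values to unpack"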

Python is not MATLAB!

Humufr wrote:

I agree with you, but in the load function documentation I can read this:
and not a file like:

1 1 1 1 1
2 2 2 2 2

That's why I suggest adding the transpose to the load function.

However, Matlab does exactly this, for the same reason. I always thought that was stupid, but a goal of pylab is to be Matlab-compatible, so it should probably not be transposed automatically.

As for the comment about where array indexing begins, I know it :-) but I hate it (it's just a matter of taste :-) )

You may come to love it. I know I do. While indexing from 1 seems most natural at first, it results in ugly arithmetic when slicing. I came from Matlab, and Python's indexing seemed ugly at first too, but then I found that so many things work much more naturally:

len(a[i:j]) = j-i
len(s[-3:]) = 3

l[i:j] + l[j:k] = l[i:k]

You'd be adding and subtracting a lot of ones if Python had one-based indexing.
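
These identities are easy to check in plain Python (one list serves for all three):

a = list(range(10))
i, j, k = 2, 5, 8

assert len(a[i:j]) == j - i          # 3 elements: a[2], a[3], a[4]
assert len(a[-3:]) == 3              # the last three elements
assert a[i:j] + a[j:k] == a[i:k]     # adjacent slices join back up with no overlap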

Also, if you have a grid spaced out by DeltaX, the X-coordinate of a[i] is:

X0 + i*DeltaX

With one-based indexing, it would be:

X0 + (i-1)*DeltaX
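
Written out as a quick check (the particular numbers are just an example):

X0, DeltaX = 10.0, 0.5
N = 5

x_zero_based = [X0 + i * DeltaX for i in range(N)]                # a[0] sits right at X0
x_one_based  = [X0 + (i - 1) * DeltaX for i in range(1, N + 1)]   # same points, with the extra -1

assert x_zero_based == x_one_based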

But most of all, Python indexes from 0, whether you like it or not, so it's probably best to stick with that in Python functions.

-Chris


--
Christopher Barker, Ph.D.
Oceanographer
                                         
NOAA/OR&R/HAZMAT (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker@...259...