I spent a fair amount of time today debugging what I thought was a bug
in the scaling of psd() when I was using NFFT to specify zero-padding.
This was a mis-use of the code on my part, where I should have been
using pad_to to get zero-padding. Most embarrassing about this is that
I'm basically the maintainer of record for this code and responsible
for the most recent changes, part of which improved 0-padding.
I've come to the conclusion that if I was able to get it wrong, it's
likely that this issue has bitten and will bite other users. The
problem stems from the fact that the code underlying psd (and csd,
etc.) will gladly pad your data to NFFT if len(x) < NFFT. The extra
0's added here throw off the scaling (whereas using pad_to does not).
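To see the problem concretely, here's a minimal numpy sketch of the scaling issue. This is not the actual mlab code (which also windows, detrends, and averages segments); it just shows how normalizing over the padded length instead of the data length throws the scale off:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100      # actual data length
nfft = 256   # requested transform length, > n

x = rng.standard_normal(n)

# pad_to-style: normalize over the n real samples; the zero-padding
# done inside rfft only interpolates the spectrum, scale is preserved.
good = np.abs(np.fft.rfft(x, nfft)) ** 2 / n

# NFFT-style: pad first, then normalize as if all nfft samples were
# real data -- the extra zeros dilute the estimate.
xp = np.concatenate([x, np.zeros(nfft - n)])
bad = np.abs(np.fft.rfft(xp)) ** 2 / nfft

# The two estimates differ by a constant factor of nfft/n = 2.56
print(good.sum() / bad.sum())
```

So any result computed this way is off by a factor of NFFT/len(x), silently.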
[Aside for the curious: The NFFT-padding code pre-dates the use of
pad_to for zero-padding and was left so-as not to break code, not
realizing that leaving it was still wrong.] Options:
1) Rip out the code that does the zero-padding based on NFFT and raise
an exception in the case that len(x) < NFFT.
2) Issue a warning when len(x) < NFFT for a while, and then rip it out.
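For the sake of discussion, here's roughly what option 1) could look like. The helper name and message are hypothetical, not the actual mlab internals:

```python
import numpy as np

def check_nfft(x, NFFT):
    """Hypothetical guard for option 1: refuse to silently zero-pad."""
    if len(x) < NFFT:
        raise ValueError(
            f"len(x) = {len(x)} is less than NFFT = {NFFT}; zero-padding "
            "via NFFT produces incorrectly scaled results. Use the pad_to "
            "argument to get zero-padding instead.")

check_nfft(np.zeros(512), 256)   # fine: enough data for one segment

try:
    check_nfft(np.zeros(100), 256)
except ValueError as e:
    print("raised:", e)
```

Option 2) would be the same check with `warnings.warn(..., DeprecationWarning)` in place of the raise for a release or two.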
I'm really tempted to just go with 1), with an exception that details
the problem and the fix. While this would break existing code, that
code is almost 100% guaranteed to be broken already and silently
producing the incorrect answer (unless it was already hacking around
our brokenness, which wouldn't be easy).
I still need to go through and convince myself that there's not some
use-case where NFFT > len(x) would produce correct and desired
behavior that isn't covered by pad_to. What I'm looking for is
thoughts from devs on breaking backwards compatibility, and from
users whose code may or may not be broken by such a change.
Graduate Research Assistant
School of Meteorology
University of Oklahoma