John Hunter wrote:
I was able to run the buildbot mac script when logged into sage with::
So it seems the bus error on Mac is due to networking (DNS lookups)
being broken in non-interactive logins. This is a pain for the
get_sample_data() approach. (Although I suspect we could work around it
by giving the IP address of the svn repo just like we did for the main
MPL checkout. In this case, however, the IP address would be hardcoded
into cbook.)
I am not sure I want to distribute the baseline images with the main
mpl distribution, but I am open to considering it. As the number of
tests and baseline images grows, which hopefully will happen soon,
this could potentially become large -- the reason I added
get_sample_data in the first place was to get the distribution size
down. We could add support to get_sample_data to use an environment
variable for the cache directory. Then I could do an svn checkout of
the sample_data tree and svn up this dir in my buildbot script. If I
point the sample data environment var to this directory, it would have
the latest data for the buildbot and would not need to make an http
request (it is odd though that svn checkouts on the sage buildbot work
fine even when not interactively logged in but http requests
apparently do not).
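The environment-variable idea could be sketched like this. Everything here is an assumption for discussion, not existing cbook behavior: the variable name MPL_SAMPLE_DATA and the default cache location are placeholders.

```python
import os

def get_cache_dir(default="~/.matplotlib/sample_data"):
    """Resolve the sample-data cache directory.

    If the (hypothetical) MPL_SAMPLE_DATA environment variable points at
    an existing directory -- e.g. a local svn checkout that a buildbot
    script keeps up to date with 'svn up' -- use it and skip any network
    fetch.  Otherwise fall back to the default cache location.
    """
    env_dir = os.environ.get("MPL_SAMPLE_DATA")
    if env_dir and os.path.isdir(env_dir):
        return env_dir
    return os.path.expanduser(default)
```

With that in place, the buildbot would simply export the variable before running the test suite, and get_sample_data would never need to touch the network.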
After letting the implications settle a bit, I think I'm in favor of
baseline images living in the matplotlib svn trunk so that they're
always in sync with the test scripts and available to those who have
done svn checkouts. Another important consideration is that this
approach plays well with branching the repo.
Just because they'd be in the main repository directory, though, doesn't
mean that we have to ship source or binaries with them in place --
that's a decision that could be discussed when release day gets closer.
Many of these images will be .pngs with large regions of white, so
they're relatively small files. But, I agree, hopefully there will be a
lot of tests and thus a lot of images, which will add up. As far as the
linux packaging goes -- the packagers can decide how to ship their own
binaries, but I'm sure they'd appreciate a mechanism for shipping the
test image data separately from the main binary package. This could
prompt us to come up with a nice mechanism that we enable when building
Mac and Windows binary packages. As for the source packages, I think I'd
tend toward including the test images for more or less the same reasons
as including them in the svn trunk.
Also, we could set it up such that image_comparison tests are skipped if
the baseline images aren't available (or simply not compare the results).
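A skip-when-missing decorator might look roughly like this. The decorator name matches the one discussed in the thread, but the skip mechanism (a marker exception), the baseline directory layout, and KnownSkip itself are assumptions for illustration:

```python
import os
import functools

class KnownSkip(Exception):
    """Hypothetical marker exception a test runner could report as 'skipped'."""

def image_comparison(baseline_images, baseline_dir="baseline"):
    """Sketch: skip the decorated test when its baseline images are absent.

    `baseline_images` is a list of file names expected under `baseline_dir`;
    if any is missing, the test is skipped rather than failed.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            missing = [img for img in baseline_images
                       if not os.path.exists(os.path.join(baseline_dir, img))]
            if missing:
                raise KnownSkip("missing baselines: %s" % ", ".join(missing))
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

That way people running the tests from a source tarball without the baseline tarball would see skips rather than spurious failures.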
If you think the sample_data w/ support for local svn checkouts is the
way to go for the baseline data and images, let me know. I would like
to utilize a subdir, eg, sample_data/baseline, if we go this route, to
keep the top-level directory a bit cleaner for user data. We could
also release a tarball of the sample_data/baseline directory with each
release, so people who want to untar, set the environment var and test
could do so.
OK, I will move them to a new subdir if we decide to keep the
sample_data approach. I thought I read a preference to keep sample_data
flat, and I wasn't sure about Windows path names.
I am not sure this is the right approach by any means, just putting it
up for consideration. One disadvantage of the sample_data approach is
that it would probably work well with HEAD but not with releases,
because as the baseline images change, it becomes difficult to test
existing releases, which may assume a prior baseline, against them.
This is why I mentioned releasing the baseline images too, but it does
raise the barrier for doing tests.
Likewise, I'm not sure my idea is best, either, but I think it plays
best with version control, which IMO is a substantial benefit.
I should have some time today to play as well. One thing I would like
to do is to continue the clean up on naming conventions to make them
compliant with the coding guide. Thanks for your efforts so far on
this -- one thing left to do here that I can see is to rename the
modules to test_axes.py rather than TestAxes.py, etc., and to finish
renaming the methods which use the wrong convention, eg
TestAxes.TestAxes.tearDown should be test_axes.TestAxes.tear_down
(module_lower_under.ClassMixedUpper.method_lower_under).
I think we should forget about subclassing unittest.TestCase and simply
use flat functions as our tests. In particular, we should drop setUp and
tearDown altogether (which IIRC have to be named that way because they
override base-class methods of unittest.TestCase).
If we need any setup and teardown functionality, let's make a new
decorator to support it.
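Such a decorator could be quite small. This is a sketch of the idea only (nose ships a similar `with_setup`; this standalone version is not proposed as mpl's final API):

```python
import functools

def with_setup(setup=None, teardown=None):
    """Sketch: give a flat test function optional setup/teardown hooks,
    with no unittest.TestCase subclass required.

    `teardown` runs even if the test raises, mirroring tearDown semantics.
    """
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            if setup is not None:
                setup()
            try:
                return test_func(*args, **kwargs)
            finally:
                if teardown is not None:
                    teardown()
        return wrapper
    return decorator
```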
Then, I think the tests in test/test_matplotlib/TestAxes.py should go
into lib/matplotlib/tests/test_axes.py and each test should be its own
function at module level, such as test_empty_datetime().
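The resulting layout would be plain module-level functions, along these lines. The body here is a stand-in placeholder, not the real test_empty_datetime:

```python
# Sketch of lib/matplotlib/tests/test_axes.py after the move: each test is
# a flat module-level function, no TestCase subclass.
import datetime

def test_empty_datetime():
    # Placeholder assertion only -- the real test would exercise plotting
    # an empty date range through the Axes API.
    t0 = datetime.datetime(2009, 1, 20)
    t1 = datetime.datetime(2009, 1, 21)
    assert t1 > t0
```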
-Andrew