python3Packages.fpylll: fix arch tests #75796
Conversation
@GrahamcOfBorg build python3Packages.fpylll |
Is there some good reason to just ignore all doctests outright? |
Most of the time, I find that doctests are more integration-like or demonstrate some scenario, which doesn't usually jibe well with sandbox environments. Besides, the point of the tests is to demonstrate that the package works correctly, and the failing tests from the comment show that they are being overly specific about floating point precision and other minor details. |
I think integration tests are often the more useful kind for distro purposes. Of the failing tests, just one actually tests a number to too high a precision. The others test a function that is supposed to determine or set the precision, so that is either a real failure or a wrong assumption about python data types. I'd be careful when disabling a subset of the tests. Who knows, maybe in the future most new tests will be doctests. Once we have disabled them, it's unlikely that we'll re-enable them in the future. I think it would be better to disable the tests conditionally (either just those failing tests, or all tests on aarch64). |
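A conditional skip along those lines could look roughly like the fragment below. This is only a sketch: `disabledTests` follows the nixpkgs `pytestCheckHook` convention, `stdenv.isAarch64` is the usual platform guard, and the test name is a placeholder rather than one of the actual failing fpylll tests.

```nix
# Hypothetical fragment for the fpylll expression, not the actual diff.
{
  # Skip only the known-failing tests, and only on aarch64, so the
  # rest of the suite keeps running on every platform.
  disabledTests = lib.optionals stdenv.isAarch64 [
    "test_precision"  # placeholder name for the failing doctest
  ];

  # TODO: drop this once the upstream precision fix is released.
}
```

Keeping the skip list next to a TODO comment is one way to make sure it gets revisited on the next update, as suggested below.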
Those doctests look like they just demonstrate the package itself, but I'm familiar with machine learning notebooks which will bring in stuff like tensorflow, pytorch, scikit-learn, and other monoliths whose complexity is not trivial. The analogous validation in pypi is running setuptools.setup(), then dumping stuff in site-packages :/ EDIT: more fairly worded. The ownership of correctness is more on the package maintainer, that their package's setup.py correctly reflects which package versions they are compatible with. And the package installer is not responsible for doing any validation. |
@timokau are you okay with merging this? |
But as long as this package doesn't do that, it shouldn't be an issue. If it's a machine learning notebook, it will probably depend on tensorflow or one of its alternatives anyway.
Not sure what you mean by that.
I would much prefer to only ignore the tests in question or at least somehow make sure we re-enable the tests on the next update (a comment at a noticeable location would be enough). |
doing a
The unit tests run and pass; for me, this gives high confidence that the python package is usable in every scenario the authors intended it to be usable in. |
Right, but that installs fpylll in a very similar context in which upstream has (hopefully) tested it. The context on NixOS may differ.
My point is that you can't rely on all the "juicy" tests being unit tests. For example in sagemath, pretty much everything is a doctest. They are very simple to write during development. If in the future most new tests in fpylll were doctests, we'd skip over a significant part of the testsuite. But since arguing in hypotheticals is not really helpful, we could ask upstream. @malb, do you think fpylll's doctests are important for distribution purposes (validating that the package has been successfully installed and is compatible with its dependency versions), or are the unit tests sufficient? |
I'd say doctests are likely going to be the main source of testing, so I'd say they matter. Of course, if it's just some numerical noise then that's not something to worry about, but in general I'd say our developers are more likely to write doctests than to update unit tests. |
Upstream fix in #76176. |
Motivation for this change
related: #74372 (comment)
Things done
- [ ] Tested using sandboxing (`sandbox` in `nix.conf` on non-NixOS linux)
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nix-review --run "nix-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)

Notify maintainers
cc @