Collect code coverage for code in tools/ #22278
Conversation
Added separate targets for each Python environment. In addition, we're excluding data from wpt/ and wptrunner/, as they have their own tox.ini files. Test code is also excluded from the coverage reports.
IIUC, this will effectively run the tests twice, once with `--cov` turned on. Is that necessary? It seems that `pytest-cov` was already an unconditional dependency. Shall we perhaps just add `--cov` to `py*`? cc @jgraham in case there's some historical context I'm missing here.
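For context, appending the flag to the existing command rather than defining new targets would look something like this tox.ini fragment (env names, deps, and paths here are illustrative, not the repo's actual config):

```ini
# Hypothetical tox.ini fragment: add pytest-cov flags to the
# existing py* envs instead of defining separate coverage targets.
[testenv]
deps =
    pytest
    pytest-cov
commands = pytest --cov=tools --cov-report=term-missing {posargs}
```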
Right, my point with this PR was also to check why coverage was not being collected if we already have the unconditional dependency, as you mention. I thought about potential slowness associated with collecting the coverage data, but *shrug*.
I think it's because of the lack of
Sorry, I'm not sure I followed. According to the logs, this change does add an additional run (with coverage turned on) for all Python test jobs. Are you concerned about adding
Sorry for the confusion. The idea would indeed be to add those passes but prevent CI from running them. However, we could just add the coverage options to the current command instead of adding new targets and let it run. Should we detect a big slowdown, we could revert it. WDYT?
For example, by setting
I'm collecting some measurements right now about running with and without code coverage analysis; will post them soon.
So these are the results of running the tools tests 5 times (excluding wpt and wptrunner) with and without code coverage:
I know it's a very small sample, but the runs with code coverage enabled are consistently 14% slower (~15 s) than the ones without it. I think this is a good reason to have separate targets (perhaps disabled in CI but available to developers) for running the code coverage analysis.
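As a sanity check on those figures: a ~15 s slowdown that amounts to 14% implies the baseline run (without coverage) takes roughly 107 s. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the reported overhead:
# "14% slower (~15 s)" means baseline * 0.14 ≈ 15 s.
slowdown_seconds = 15
slowdown_fraction = 0.14

baseline = slowdown_seconds / slowdown_fraction  # run time without coverage
with_cov = baseline + slowdown_seconds           # run time with coverage

print(round(baseline), round(with_cov))  # roughly 107 122
```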
So the only historical context I have is that we used to have (IIRC) codecov.io enabled, but we couldn't work out how to get it to report only on changes in tools/, so in the end we disabled it to avoid annoying test authors. I think it makes more sense to enable coverage in CI than to have it as an option that isn't actually run. Ideally we could figure out how to get one of the services to comment only on tools PRs, but at the very least we could upload the coverage report as an artifact on Taskcluster.
Given the measurement, I'd prefer simply always running with cov on CI, instead of adding more configurations. |
Could you provide some more context on the "upload as an artifact" idea?
Artifacts are just files stored on Taskcluster that you can download later. The logs are the most obvious example, but we can have custom artifacts. It's a pretty bad solution in this case, because just having the data available through the Taskcluster UI isn't going to make anyone look at it. We should really check whether any of the existing dashboard tools allow us to provide an allowlist of directories for which coverage should be reported, and to ignore PRs that don't touch those directories.
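As a sketch of the artifact idea: a docker-worker task payload can declare extra artifacts to expose for download. The fragment below is hypothetical (artifact names, paths, and expiry are assumptions, not wpt's actual CI config):

```yaml
# Hypothetical Taskcluster task fragment (docker-worker payload).
# Exposes a generated coverage report for download from the task UI.
payload:
  artifacts:
    public/coverage:
      type: directory
      path: /home/test/artifacts/coverage
      expires: {$fromNow: '1 month'}
```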
Now we need to make the data available. Should we handle that in a separate PR? I ask because we might want to run CI with coverage for some time, to check that the extra time required isn't a problem. WDYT?
Do we have a plan for how we would make the data available yet? It doesn't have to be detailed; just a high-level idea of the eventual flow would help inform decision-making here :)
LGTM.
I think we can sort out the plan for collecting the results later. At least folks working on py3 migration can look at the terminal report for now.
The ideal plan for making the data available is to use an existing service that comments on PRs with coverage information for the affected files. The necessary trick is ensuring this happens only for tools PRs, not for all PRs.
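If a service like codecov.io were re-enabled, one hypothetical way to scope it to tools PRs is a flag with a path filter. This codecov.yml fragment is a sketch under that assumption, not the repo's actual configuration:

```yaml
# Illustrative codecov.yml sketch (not the actual repo config).
# Compute coverage for a "tools" flag only from tools/, and
# ignore the dirs with their own tox.ini plus test code.
flags:
  tools:
    paths:
      - tools/
ignore:
  - "tools/wpt/**"
  - "tools/wptrunner/**"
  - "**/tests/**"
```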