Add support for tracking the asserts that ran in testharness.js #27224

Merged: 4 commits, Jan 22, 2021

Conversation

jgraham (Contributor) commented on Jan 18, 2021:

No description provided.

wpt-pr-bot (Collaborator) commented:

There are no reviewers for this pull request besides its author. Please reach out on W3C's irc server (irc.w3.org, port 6665) on channel #testing (web client) to get help with this. Thank you!

jgraham (Contributor, Author) commented on Jan 18, 2021:

I'm looking at options for testing this.

stephenmcgruer (Contributor) left a comment:

Generally I like the idea. I wasn't able to experiment with it locally however as it threw an error on dom/historical.html (see comment).

Have you done any performance profiling? If this is expensive (we have tests with 1000s of subtests, each of which may have many asserts), we may want to make it optional.

(Two resolved inline review threads on resources/testharness.js.)
jgraham (Contributor, Author) commented on Jan 19, 2021:

> Have you done any performance profiling? If this is expensive (we have tests with 1000s of subtests, each of which may have many asserts), we may want to make it optional.

I made it only happen when we have output enabled since it's purely informative at this point and not part of the API. In a test with 17,000 subtests on my machine we spent about 1s extra running this (mostly formatting the arguments), so that's not quite trivial (although it is an extreme case).

This keeps a global list of all the asserts that are triggered, along
with details about the argument types and so on. We could make this
per test, but the idea is that keeping the list globally provides
additional information about ordering that might be relevant in some
cases.

Since this may have some perf impact, is only useful for debugging,
and is not part of the API, we only collect this data when visual
output is enabled.

To keep the arguments, we currently format them as strings. This isn't
terribly efficient, but the concern is that keeping the actual objects
alive would increase memory usage and might affect tests. The
formatting itself could probably also be improved.

This adds a <details> element under each test message with the asserts
that ran, their status, and the line where they were called.
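
For illustration, a minimal sketch of the mechanism those messages describe. The names here (asserts_run, AssertRecord, record_assert, tests.output) and the use of format_value are assumptions made for this sketch, not necessarily the PR's actual code:

```js
// A global, ordered list of every assert that ran (hypothetical name).
var asserts_run = [];

// One record per assert call. format_value is testharness.js's existing
// value-formatting helper; tests.output stands in for however the harness
// exposes the "visual output enabled" setting.
function AssertRecord(test, assert_name, args) {
    this.test = test;                  // the Test the assert ran under (may be null)
    this.assert_name = assert_name;    // e.g. "assert_equals"
    // Arguments are kept as formatted strings rather than live objects, so
    // recording an assert does not keep large objects alive.
    this.args = Array.prototype.map.call(args, function(arg) {
        return format_value(arg);
    });
    this.status = null;                // filled in once the assert has run
    this.stack = new Error().stack;    // used to recover the calling line
}

function record_assert(test, assert_name, args) {
    // Only collect when output is enabled: the data is purely informative
    // and formatting the arguments has a measurable cost.
    if (!tests.output) {
        return null;
    }
    var record = new AssertRecord(test, assert_name, args);
    asserts_run.push(record);
    return record;
}
```
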
stephenmcgruer (Contributor) left a comment:

> I made it only happen when we have output enabled since it's purely informative at this point and not part of the API. In a test with 17,000 subtests on my machine we spent about 1s extra running this (mostly formatting the arguments), so that's not quite trivial (although it is an extreme case).

To me that's pretty trivial (given how extreme that case is); I suspect the extra 1s in that case is a small percent overhead.

I love this PR, so much. It LGTM as is (though see my one architectural comment on whether to associate asserts with a test object proactively), but I would like to see tests before I approve ;)

PS: https://web-platform-tests.org/writing-tests/testharness-api.html?highlight=add_completion_callback#callback-api will need to be updated
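
For illustration, a sketch of what a consumer of that callback API might look like once the docs are updated. The third argument and the record field names are assumptions based on the review thread below, not the documented API:

```js
// Per the linked docs, completion callbacks currently receive the list of
// tests and the harness status; the assumption here is that the collected
// asserts arrive as a third argument.
add_completion_callback(function(tests, harness_status, asserts_run) {
    asserts_run.forEach(function(assert) {
        console.log(assert.assert_name,               // e.g. "assert_true"
                    assert.status,                    // outcome of that assert
                    assert.test && assert.test.name,  // owning subtest, if any
                    assert.args.join(", "));          // arguments as strings
    });
});
```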

Inline review comment on resources/testharness.js:

```
@@ -3395,6 +3486,51 @@ policies and contribution forms [3].
        return '';
    }

    var asserts_run_by_test = new Map();
```
stephenmcgruer (Contributor) commented:

I wonder if it would be nicer to associate asserts with a Test right from the start (i.e. instead of pushing them into a separate array)? Right now every user of the completion callback will have to write code to turn the entire assert array into a test → assert-list map.

The only question might be what to do in set_assert when current_test is null, which here is neatly handled because Map allows that. Of course, maybe it's a good chance to raise a harness error and be like "y'all missed a step call!"
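
For illustration, a sketch of the per-consumer boilerplate being described: grouping a flat, ordered assert array back into a test → assert-list map in a completion callback. The record field names are assumptions for this sketch:

```js
add_completion_callback(function(tests, harness_status, asserts_run) {
    var asserts_by_test = new Map();
    asserts_run.forEach(function(assert) {
        // assert.test may be null if the assert ran outside any test step;
        // Map accepts null as a key, so such asserts are grouped together.
        var key = assert.test || null;
        if (!asserts_by_test.has(key)) {
            asserts_by_test.set(key, []);
        }
        asserts_by_test.get(key).push(assert);
    });
    // asserts_by_test now maps each Test (or null) to its asserts, in order.
});
```
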

jgraham (Contributor, Author) replied:

I didn't do that because in theory the ordering of asserts between tests could be significant. We don't actually expose that at the moment, but we could. We could store that in some other way, of course, e.g. by giving each assert an index. I don't know if you think that's valuable enough to retain the more complex data model.

stephenmcgruer (Contributor) replied:

This is probably fine then. If we ever decide otherwise there's a fairly easy deprecation path (add to Test as well, then later remove the third argument)

jgraham (Contributor, Author) commented on Jan 21, 2021:

I've added a small amount of testing based on the existing framework. Do you think that approach is good enough? It doesn't really check the output, but does check that we manage to push the relevant stuff to the asserts array.

stephenmcgruer (Contributor) left a comment:

Yep, testing the API portion is what interests me :)
