More benchmarks for the node:test module. #55723
Comments
@RedYetiDev I'd love to work on this! Is there a guide or info page on how to create benchmarks? The issue also feels a bit vague; could we get an explicit list of functions to test?
IMHO we should start off simple. How long does it take for a basic test to execute? What about a skipped test? A failing test? Etc. For this to be as accurate as possible, we would probably need the benchmark to contain a custom reporter. We could probably replace the current (limited) benchmark with this better idea described above. From there, we could move on to how long specific parts take, such as mocks, etc.
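A minimal sketch of what such a benchmark could look like, assuming Node's existing benchmark harness (`benchmark/common.js`); the file name, iteration count, and the idea of awaiting each `test()` promise are assumptions rather than a settled design:

```js
'use strict';
// Sketch only: measures how long n trivial passing tests take end to end.
// Assumes benchmark/common.js (Node's benchmark harness); a no-op custom
// reporter would be layered on top so reporter output stays out of the timing.
const common = require('../common.js');
const { test } = require('node:test');

const bench = common.createBenchmark(main, {
  n: [100],
});

async function main({ n }) {
  bench.start();
  for (let i = 0; i < n; i++) {
    // test() returns a promise that settles once the test has finished,
    // so awaiting it captures the full schedule/run/report cycle.
    await test(`bench-${i}`, () => {});
  }
  bench.end(n);
}
```

Variants for skipped, failing, or `only`-filtered tests would follow the same shape with different options passed to `test()`.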
I was planning to start some work on coverage to enhance its performance, and to do so, benchmarking is definitely needed. It would be great to use this issue to discuss and select a way forward (possibly with a division of the features to cover). Do you have any ideas on how you would structure this? I think it's non-trivial to benchmark the test runner and coverage with precision, and I'm not sure whether we already have anything similar in place.
My idea for coverage is to use the (I'll update the PR with an explicit list of functions/components later today; it's not a quick process)
I've added the list. If you think something else should be added, feel free to edit the issue |
PR-URL: #55771 Refs: #55723 Reviewed-By: Pietro Marchini <[email protected]> Reviewed-By: Chemi Atlow <[email protected]> Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
PR-URL: #55757 Refs: #55723 Reviewed-By: Pietro Marchini <[email protected]> Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]> Reviewed-By: Raz Luvaton <[email protected]>
@RedYetiDev, I'm not sure if I should open an issue for this, but I was attempting to create a benchmark for `mock.module` and ran into a problem.
I'm running the file using:

```
./node --experimental-test-module-mocks benchmark/test_runner/mock-module.js
```

Here’s the script I’m using:

```js
"use strict";

const { test } = require("node:test");

function main() {
  test(async (t) => {
    console.log("benchmark");
    try {
      // Create a mock module
      t.mock.module('axios', {
        namedExports: {
          get: (url) => url,
        },
      });
    } catch (e) {
      console.error(e);
    }
    console.log("end");
  });
}

main();
```

I plan to open an issue but wanted to check with you first to confirm whether this is a consistent problem or a mistake on my part. Let me know your thoughts.
Any issues should be opened separately, and from there, if it is indeed an issue, it'll be evaluated.
My guess is that we may need additional logic for mocking modules that don't actually exist. |
Possibly, but that should be tracked separately from this.
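(For anyone trying to narrow this down: one quick check of the guess above would be to point the same mock at a specifier that does resolve. The snippet below is only a sketch of that idea; the choice of `node:path` and the placeholder export are arbitrary, and it still needs `--experimental-test-module-mocks`.)

```js
'use strict';
// Sketch: same shape as the earlier script, but mocking a module that exists.
// If this runs cleanly while mocking 'axios' fails, the missing-module theory holds.
const { test } = require('node:test');

test('mock a module that resolves', (t) => {
  t.mock.module('node:path', {
    namedExports: {
      basename: () => 'mocked', // placeholder export, only to have something to mock
    },
  });
});
```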
The `benchmark/test_runner` folder currently contains benchmarks for the `it` and `describe` functions. I suggest we expand these benchmarks to cover additional test runner features, including mocks, coverage, and various test modes. Here are the functions that (IMO) should be benchmarked:
Basic Testing
These tests should run with a custom reporter, without any special logic, so the measurements are as accurate as possible (a sketch of such a reporter follows this list).
- `test`
- `test` when it's not running due to `only`
- `test` when skipped or marked as todo via:
  - `skip: true`
  - `t.skip()`
  - `t.skip(...)`
  - `todo: true`
  - `t.todo()`
  - `t.todo(...)`
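One possible shape for that "no special logic" reporter, assuming the documented stream-based custom reporter interface (the file name and wiring are placeholders, not a decided layout):

```js
'use strict';
// No-op reporter sketch: consumes every test event without producing output,
// so reporter formatting cost does not leak into the benchmark numbers.
const { Transform } = require('node:stream');

module.exports = new Transform({
  writableObjectMode: true,
  transform(event, encoding, callback) {
    // 'test:pass', 'test:fail', 'test:diagnostic', ... are all discarded.
    callback();
  },
});
```

It could then be selected with something like `--test-reporter=./benchmark/fixtures/noop-reporter.js` (the path here is hypothetical).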
Hooks
- `beforeEach`
- `afterEach`
- `before`
- `after`
Reporters (#55757)
- `dot`
- `junit`
- `spec`
- `tap`
- `lcov`
Mocking
- `mock.fn` (benchmark: add `test_runner/mock-fn` #55771)
- `mock.timers`, for each API and each sub-function
- `mock.module`
Snapshots
- `snapshot.setDefaultSnapshotSerializers(serializers)`
- `snapshot.setResolveSnapshotPath(fn)`
- `t.assert.snapshot`
Coverage
Use `--expose-internals` to exclusively test the coverage part.