
More benchmarks for the node:test module. #55723

Open
6 of 31 tasks
RedYetiDev opened this issue Nov 5, 2024 · 9 comments
Labels: benchmark (Issues and PRs related to the benchmark subsystem), test_runner (Issues and PRs related to the test runner subsystem)

Comments

RedYetiDev (Member) commented Nov 5, 2024

The benchmark/test_runner folder currently contains benchmarks only for the it() and describe() functions. I suggest expanding these benchmarks to cover additional test runner features, including mocks, coverage, and the various test modes.

Here are the functions that (IMO) should be benchmarked:

Basic Testing

These tests should run with a custom reporter that contains no special logic, so the measurements are as accurate as possible (a minimal benchmark sketch follows the list below).

  • test
    • Create a test
    • Create a test that does not run because only filtering is in effect
    • Add a subtest
    • Skip a test
      • Via skip: true
      • Via t.skip()
      • Via t.skip(...)
    • TODO tests
      • Via todo: true
      • Via t.todo()
      • Via t.todo(...)
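
As a rough illustration (not part of the checklist above), here is a minimal sketch of what a "create a test" benchmark could look like, assuming the existing benchmark/common.js harness; the file name, the iteration count, and the idea of registering trivial empty tests in-process are illustrative assumptions only.

// benchmark/test_runner/create-test.js (hypothetical file name)
'use strict';

const common = require('../common.js');
const { test } = require('node:test');

const bench = common.createBenchmark(main, {
  n: [1e4], // iteration count is an assumption
});

function main({ n }) {
  bench.start();
  for (let i = 0; i < n; i++) {
    // Register an empty test; this measures test creation, not test execution.
    test(`bench ${i}`, () => {});
  }
  bench.end(n);
}

In practice the default reporter output would need to be silenced (for example with the no-op custom reporter discussed further down in this thread) so it does not interfere with the harness output or the measurement.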

Hooks

  • beforeEach
  • afterEach
  • before
  • after

Reporters (#55757)

  • dot
  • junit
  • spec
  • tap
  • lcov

Mocking

Snapshots

  • snapshot.setDefaultSnapshotSerializers(serializers)
  • snapshot.setResolveSnapshotPath(fn)
  • t.assert.snapshot

Coverage

Use --expose-internals so the benchmark exercises only the coverage logic itself

  • Basic
  • Excluding files
  • Including files
  • With source maps
RedYetiDev added the benchmark and test_runner labels on Nov 5, 2024
cu8code commented Nov 5, 2024

@RedYetiDev I'd love to work on this! Is there a guide or info page on how to create benchmarks? The issue also feels a bit vague — could we get an explicit list of functions to test?

RedYetiDev (Member, Author) commented Nov 5, 2024

IMHO we should start off simple.

How long does it take for a basic test to execute? What about a skipped test? A failing test? Etc.

For this to be as accurate as possible, we would probably need the benchmark to contain a custom reporter.

We could probably replace the current (limited) benchmark with the approach I described above.


From there, we could move on to measuring how long specific parts take, such as mocks.
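
As a concrete illustration of the custom-reporter idea, a no-op reporter for node:test can be written as an async generator that consumes every event without emitting any output, which keeps reporter overhead out of the measurement. This is only a sketch; the file name is an assumption.

// noop-reporter.mjs (hypothetical file name)
export default async function* noopReporter(source) {
  for await (const event of source) {
    // Intentionally ignore every event ('test:pass', 'test:fail', diagnostics, ...).
  }
}

It could then be selected with --test-reporter=./noop-reporter.mjs when the benchmark drives the runner through the CLI.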

pmarchini (Member) commented
I was planning to start working on something related to coverage to improve its performance, and to do so, benchmarking is definitely needed. It would be great to use this issue to discuss and select a way forward (possibly with a division of the features to cover).

Do you have any ideas on how you would structure this? I think it's non-trivial to benchmark the test runner and coverage with precision, and I'm not sure if we already have anything similar in place.

RedYetiDev (Member, Author) commented
My idea for coverage is to use the --expose-internals flag. We can calculate different forms of coverage and benchmark just the coverage function.

(I'll update the issue with an explicit list of functions/components later today; it's not a quick process.)
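
To make the --expose-internals idea concrete, a coverage benchmark could require the internal coverage module directly. Everything below is only a sketch: the internal module path, the setupCoverage export, and the summary()/stop() calls are assumptions that would need to be checked against lib/internal/test_runner/coverage.js before writing the real benchmark.

// benchmark/test_runner/coverage.js (hypothetical file name)
'use strict';

const common = require('../common.js');

const bench = common.createBenchmark(main, { n: [100] }, {
  flags: ['--expose-internals'],
});

function main({ n }) {
  // Only resolvable when the process runs with --expose-internals.
  const { setupCoverage } = require('internal/test_runner/coverage'); // assumed export name

  bench.start();
  for (let i = 0; i < n; i++) {
    const coverage = setupCoverage({ /* hypothetical options */ });
    // Hypothetical calls: produce a report, then tear collection down again.
    coverage?.summary?.();
    coverage?.stop?.();
  }
  bench.end(n);
}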

RedYetiDev (Member, Author) commented Nov 5, 2024

I've added the list. If you think something else should be added, feel free to edit the issue.

nodejs-github-bot pushed a commit that referenced this issue Nov 11, 2024
PR-URL: #55771
Refs: #55723
Reviewed-By: Pietro Marchini <[email protected]>
Reviewed-By: Chemi Atlow <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
nodejs-github-bot pushed a commit that referenced this issue Nov 16, 2024
PR-URL: #55757
Refs: #55723
Reviewed-By: Pietro Marchini <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
Reviewed-By: Raz Luvaton <[email protected]>
cu8code commented Nov 16, 2024

@RedYetiDev, I'm not sure if I should open an issue for this, but I was attempting to create a benchmark for mock.module, and the functionality seems broken. When I try to create a simple mock module, it throws a Cannot find package error, which it shouldn't, since I'm trying to create a mock module in the first place!

benchmark
Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'axios' imported from /home/ankan/Documents/git/me/node/benchmark/test_runner/mock-module.js
    at Object.getPackageJSONURL (node:internal/modules/package_json_reader:267:9)
    at packageResolve (node:internal/modules/esm/resolve:768:81)
    at moduleResolve (node:internal/modules/esm/resolve:854:18)
    at defaultResolve (node:internal/modules/esm/resolve:984:11)
    at nextResolve (node:internal/modules/esm/hooks:748:28)
    at resolve (node:internal/test_runner/mock/loader:78:35)
    at nextResolve (node:internal/modules/esm/hooks:748:28)
    at Hooks.resolve (node:internal/modules/esm/hooks:240:30)
    at handleMessage (node:internal/modules/esm/worker:199:24)
    at Immediate.checkForMessages (node:internal/modules/esm/worker:141:28) {
  code: 'ERR_MODULE_NOT_FOUND'
}
end
✔ <anonymous> (42.524333ms)
(node:91728) ExperimentalWarning: Module mocking is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
ℹ tests 1
ℹ suites 0
ℹ pass 1
ℹ fail 0
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 48.749785

I'm running the file using:

./node --experimental-test-module-mocks benchmark/test_runner/mock-module.js

Here’s the script I’m using:

"use strict";

const { test } = require("node:test");

function main() {
  test(async (t) => {
    console.log("benchmark");

    try {
      // Create a mock module
      t.mock.module('axios', {
        namedExports: {
          get: (url) => url,
        },
      });
    } catch (e) {
      console.error(e);
    }

    console.log("end");
  });
}

main();

I plan to open an issue but wanted to check with you first to confirm if this is a consistent problem or a mistake on my part. Let me know your thoughts.

RedYetiDev (Member, Author) commented
Any issue should be opened separately; from there, if it turns out to be a real problem, it will be evaluated.

cjihrig (Contributor) commented Nov 16, 2024

My guess is that we may need additional logic for mocking modules that don't actually exist.
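
For the benchmark itself, one way to sidestep the unresolved specifier might be to mock a module that does resolve, such as a Node.js builtin, which the mock.module() documentation lists as supported. A sketch, still run with --experimental-test-module-mocks; the test name and the mocked export are illustrative:

'use strict';

const { test } = require('node:test');

test('mock a builtin module', (t) => {
  // Mock node:os instead of a package that is not installed under benchmark/.
  t.mock.module('node:os', {
    namedExports: {
      hostname: () => 'mocked-host',
    },
  });

  // Modules loaded after this point observe the mocked exports; the mock is
  // restored automatically when the test finishes.
});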

RedYetiDev (Member, Author) commented
Possibly, but that should be tracked separately from this.

aduh95 pushed a commit that referenced this issue Nov 16, 2024
PR-URL: #55771
Refs: #55723
Reviewed-By: Pietro Marchini <[email protected]>
Reviewed-By: Chemi Atlow <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
aduh95 pushed a commit that referenced this issue Nov 16, 2024
PR-URL: #55757
Refs: #55723
Reviewed-By: Pietro Marchini <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
Reviewed-By: Raz Luvaton <[email protected]>
tpoisseau pushed a commit to tpoisseau/node that referenced this issue Nov 21, 2024
PR-URL: nodejs#55771
Refs: nodejs#55723
Reviewed-By: Pietro Marchini <[email protected]>
Reviewed-By: Chemi Atlow <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
tpoisseau pushed a commit to tpoisseau/node that referenced this issue Nov 21, 2024
PR-URL: nodejs#55757
Refs: nodejs#55723
Reviewed-By: Pietro Marchini <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
Reviewed-By: Raz Luvaton <[email protected]>