Benchmarking in Wilson
Microsoft.IdentityModel.Benchmarks uses a blend of BenchmarkDotNet and Crank for benchmarking and performance testing of select IdentityModel functionality. There are multiple classes that define the benchmarks that can be run.
When the project is run as a console app, BenchmarkDotNet builds the test project into a temporary working directory and then spawns a separate process in which all benchmarking measurements are taken. For Crank, a configuration file describes the work to be done; the job is then executed on a remote machine with specific hardware, making the results more reliable and consistent.
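For orientation, a BenchmarkDotNet benchmark class in this style typically looks something like the following sketch (the class name and setup are illustrative, not copied from the project):

```csharp
using BenchmarkDotNet.Attributes;
using Microsoft.IdentityModel.JsonWebTokens;
using Microsoft.IdentityModel.Tokens;

// Illustrative benchmark class; the real classes live in
// Microsoft.IdentityModel.Benchmarks.
[MemoryDiagnoser]
public class CreateTokenBenchmark
{
    private JsonWebTokenHandler _handler;
    private SecurityTokenDescriptor _descriptor;

    [GlobalSetup]
    public void Setup()
    {
        _handler = new JsonWebTokenHandler();
        _descriptor = new SecurityTokenDescriptor
        {
            Issuer = "https://example.com",   // placeholder values
            Audience = "api://example",
        };
    }

    [Benchmark]
    public string JsonWebTokenHandler_CreateToken() => _handler.CreateToken(_descriptor);
}
```

`[MemoryDiagnoser]` adds allocated-memory columns to the summary table, which is one of the main metrics discussed below.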
There are multiple ways to run the tests.
Non-CLI:
- Build Microsoft.Client.Test.Performance in `Release` mode.
- Navigate to `{project directory}/bin/Release/{framework directory}/` and run the project executable.
- The results will be printed to the terminal.
Using the command line:
- `cd` into the project directory.
- Run `dotnet run -c Release`.

The `BenchmarkDotNet.Artifacts` folder with the exported results will be created in the directory from which the benchmark was run.
The test project can be run multiple times using the methods above and the results aggregated manually. Alternatively, call the `WithLaunchCount(this Job job, int count)` extension method in `Program.cs` when setting up the BenchmarkDotNet job. This specifies how many times BenchmarkDotNet will launch the benchmark process, which helps reduce variability between test runs.
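As a minimal sketch (the launch count and switcher setup are illustrative, not taken from the actual `Program.cs`), configuring the job this way might look like:

```csharp
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // WithLaunchCount(3) makes BenchmarkDotNet launch the benchmark
        // process three times and consolidate the results across launches,
        // reducing run-to-run variability.
        var config = DefaultConfig.Instance
            .AddJob(Job.Default.WithLaunchCount(3));

        BenchmarkSwitcher
            .FromAssembly(typeof(Program).Assembly)
            .Run(args, config);
    }
}
```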
Please read the Crank documentation before getting started. Within the `Microsoft.IdentityModel.Benchmarks` project are `.yml` configuration files that connect the benchmarking apps to Crank commands, as well as support testing pull requests for performance changes. Because Crank runs the benchmarks on a remote machine with specific hardware, there is very little variability between test runs. See the Crank command-line reference and/or the Crank Pull Request Bot reference for help.
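For readers unfamiliar with Crank, a much-simplified, hypothetical sketch of the shape such a configuration file takes (every name, path, and endpoint here is a placeholder; the repository's actual `.yml` files are authoritative):

```yaml
# Hypothetical sketch of a Crank configuration file.
jobs:
  benchmarks:
    source:
      repository: <repository-url>
      branchOrCommit: dev
      project: <path-to-benchmarks-csproj>
    waitForExit: true

scenarios:
  ValidateTokenAsyncTests:
    application:
      job: benchmarks

profiles:
  windows:
    jobs:
      application:
        endpoints:
          - <benchmark-agent-endpoint>
```

A `crank` invocation then picks a scenario and a profile from this file, as in the examples below.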
Examples:

```
crank --config .\identitymodel.benchmarks.yaml --scenario ValidateTokenAsyncTests --profile windows --json C:\temp\results0.json
crank --config .\identitymodel.benchmarks.yaml --scenario CreateTokenTests --profile windows --json C:\temp\results1.json
crank-pr --config prbenchmarks.identitymodel.config.yml --pull-request <link-to-PR> --benchmarks NoMvcAuth --profiles windows --components jwt --publish-results true
```
When making changes to code that may impact performance, run the tests to check for regressions. The Crank PR bot helps with this: it outputs a table comparing performance with your changes against the baseline (code without your changes). To compare results from microbenchmarks, use:

```
crank compare <file0> <file1>
```
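Putting the pieces together, a before/after comparison with the commands above might look like this (the output paths are illustrative):

```
# Baseline: run the scenario on the code without your changes.
crank --config .\identitymodel.benchmarks.yaml --scenario ValidateTokenAsyncTests --profile windows --json C:\temp\baseline.json

# Candidate: re-run the same scenario after applying your changes.
crank --config .\identitymodel.benchmarks.yaml --scenario ValidateTokenAsyncTests --profile windows --json C:\temp\candidate.json

# Print a side-by-side comparison table of the two result files.
crank compare C:\temp\baseline.json C:\temp\candidate.json
```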
All of the tables that Crank outputs are markdown-friendly, so they can be copied directly into GitHub.
Sample table with summary results:
| Method | IterationCount | Mean | Error | StdDev |
|---|---|---|---|---|
| JsonWebTokenHandler_CreateToken | Default | 586,869.2 ns | 3,206.71 ns | 2,999.55 ns |
| JsonWebTokenHandler_ValidateTokenAsync | 15 | 578.2 ns | 39.37 ns | 57.71 ns |
| JwtSecurityTokenHandler_ValidateTokenAsync | 15 | 6,909.0 ns | 82.56 ns | 113.01 ns |
Results are consolidated across all iterations and launches. They are written to the console at the end of the run and, by default, also exported to `.md`, `.csv`, and `.html` files in the `BenchmarkDotNet.Artifacts` folder. The results are grouped by benchmark method and any parameters. The main metrics to pay attention to are mean speed and, when applicable, allocated memory; compare these values across runs, before and after code changes. The run log, which records how many times the benchmarks were executed along with general debug information, is exported to the same folder.