Benchmarking in Wilson
Microsoft.IdentityModel.Benchmarks uses a blend of BenchmarkDotNet and Crank for benchmarking and performance testing of select IdentityModel functionality. There are multiple classes that define the benchmarks that can be run.
If the project is run as a console app, BenchmarkDotNet builds and outputs this test project into a temporary working directory, then creates a separate process where all the benchmarking measurements are done. In IdentityModel's case, a configuration file describes to Crank what work to do; that job is then completed on a remote machine with specific hardware, making the results more reliable and consistent. Crank prints these results to the console in a table, much like BenchmarkDotNet. All of these tables are markdown-friendly, so they can be copied directly into editors that support it.
There are multiple ways to run the tests.
Non-CLI:
- Build `Microsoft.IdentityModel.Benchmarks` in `Release` mode.
- Navigate to `{project directory}/bin/Release/{framework directory}/` and run the project executable.
- The results will be printed to the terminal.
Using the command line:
- `cd` into the project directory.
- Run `dotnet run -c Release`.
The `BenchmarkDotNet.Artifacts` folder with the exported results will be created in the directory from which the benchmark was run.
The test project can be run multiple times using the methods above and the results aggregated manually. Another way to run the project multiple times is to call the `WithLaunchCount(this Job job, int count)` extension method in `Program.cs` when setting up the BenchmarkDotNet job. This specifies how many times BenchmarkDotNet will launch the benchmark process, which can help reduce the variability between test runs.
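If you do aggregate several manual runs yourself, the arithmetic is simple per-metric averaging. A minimal Python sketch, using made-up Mean values (in nanoseconds) for a single benchmark method:

```python
# Hypothetical sketch: averaging the Mean column copied from several manual runs.
# The per-run values below are invented for illustration only.
from statistics import mean, stdev

# Mean times (ns) for one benchmark method, one value per run.
runs_ns = [586_869.2, 585_110.7, 588_402.5]

print(f"mean of means: {mean(runs_ns):,.1f} ns")
print(f"between-run stdev: {stdev(runs_ns):,.1f} ns")
```

The between-run standard deviation gives a quick sense of how much variability remains after multiple launches.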
Please read the Crank documentation before getting started. Within the `Microsoft.IdentityModel.Benchmarks` project are `.yml` configuration files that connect the benchmarking apps to Crank commands, as well as test pull requests for performance changes. Because Crank runs the benchmarks on a remote machine with specific hardware, there is very little variability between test runs. Go to the Crank command-line reference and/or the Crank Pull Request Bot reference for help.
```
crank --config .\identitymodel.benchmarks.yml --scenario ValidateToken --profile windows --json C:\temp\results0.json
crank --config .\identitymodel.benchmarks.yml --scenario CreateToken --profile windows --json C:\temp\results1.json
```
Additionally, configurations exist for pre-defined scenarios. In particular, `NoMvcAuth` helps measure the overall performance of IdentityModel. These can be used by passing a link to the `--config` argument.
```
crank --config https://github.com/aspnet/Benchmarks/blob/main/build/azure.profile.yml?raw=true --application.framework net8.0 --config https://raw.githubusercontent.com/aspnet/Benchmarks/main/src/BenchmarksApps/Mvc/benchmarks.jwtapi.yml --scenario NoMvcAuth --profile aspnet-citrine-win
```
Adding the argument `--json <filename>` to a given command saves the output to a file, which is particularly useful for comparing runs to each other. Crank has a `compare` tool for just this purpose. It uses the two `.json` files to generate a single table with the results of both runs, plus an added column showing the percent change for each metric. This table is printed to the console.
```
crank compare <file0> <file1>
```
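The percent-change arithmetic that `crank compare` reports can be approximated by hand. The Python sketch below assumes two simplified results files that each hold a flat metric-to-value mapping; the real Crank JSON schema is richer, so treat this as an illustration rather than a reimplementation:

```python
# Sketch of the comparison step, under the assumption that each results file
# is a flat JSON object mapping metric names to numeric values.
import json

def percent_change(baseline: float, current: float) -> float:
    """Percent change from baseline to current (negative means faster/smaller)."""
    return (current - baseline) / baseline * 100.0

def compare(baseline_path: str, current_path: str) -> dict:
    with open(baseline_path) as f0, open(current_path) as f1:
        base, curr = json.load(f0), json.load(f1)
    # Only metrics present in both runs are comparable.
    return {k: percent_change(base[k], curr[k]) for k in base.keys() & curr.keys()}
```

For example, a mean latency that drops from 100 to 90 between the baseline and the new run shows up as a -10% change.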
Pull request benchmarks are still being perfected in IdentityModel and may not function properly.
When making code changes that may impact performance, make sure to run the tests to check for regressions. The Crank PR bot can help with this! It does its work in four steps:
- A pre-check is run to ensure that the scenario is running properly. The results of this run need not be compared to anything, but they ought to be checked for issues (e.g., that there are no "bad responses").
- The baseline is established: a benchmark is run on the target branch, without the proposed changes. Its results are both printed to the console and saved as a `.json` file, with the full path also output to the console.
- A benchmark is run using the proposed code changes from the given pull request. Again, its results are both printed to the console and saved as a `.json` file, with the full path also output to the console.
- Lastly, a `compare` is run using the baseline and new results.
Sample table with summary results:
| Method | IterationCount | Mean | Error | StdDev |
|---|---|---|---|---|
| JsonWebTokenHandler_CreateToken | Default | 586,869.2 ns | 3,206.71 ns | 2,999.55 ns |
| JsonWebTokenHandler_ValidateTokenAsync | 15 | 578.2 ns | 39.37 ns | 57.71 ns |
| JwtSecurityTokenHandler_ValidateTokenAsync | 15 | 6,909.0 ns | 82.56 ns | 113.01 ns |
Results are consolidated across all the iterations and launches. They are written to the console at the end of the run and also exported into `.md`, `.csv`, and `.html` files in the `BenchmarkDotNet.Artifacts` folder by default. The results are grouped by the benchmark method and any parameters. The main metrics to pay attention to are mean speed and allocated memory, when applicable. Compare these values across runs, before and after code changes. The run log, which records how many times the benchmarks were executed along with general debug information, is also exported into the same folder.
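As a sketch of post-processing the exported files, the following pulls the Mean column out of a `.csv` export. The column names and the `value unit` cell format here are assumptions modeled on the summary table above, so check your own export's header row before reusing this:

```python
# Sketch: extracting per-method Mean values from a BenchmarkDotNet CSV export.
# The sample data mirrors the summary table above; real exports may have more columns.
import csv
import io

sample = """Method,IterationCount,Mean,Error,StdDev
JsonWebTokenHandler_CreateToken,Default,"586,869.2 ns","3,206.71 ns","2,999.55 ns"
JsonWebTokenHandler_ValidateTokenAsync,15,578.2 ns,39.37 ns,57.71 ns
"""

def means_by_method(csv_text: str) -> dict:
    """Map each benchmark method to its Mean value as a float (unit stripped)."""
    result = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        value = row["Mean"].split()[0]          # e.g. "586,869.2" from "586,869.2 ns"
        result[row["Method"]] = float(value.replace(",", ""))
    return result

print(means_by_method(sample))
```

A table like this makes it easy to diff mean times between a baseline export and one taken after a code change.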
Conceptual Documentation
- Using TokenValidationParameters.ValidateIssuerSigningKey
- Scenarios
- Validating tokens
- Outbound policy claim type mapping
- How ASP.NET Core uses Microsoft.IdentityModel extensions for .NET
- Using a custom CryptoProvider
- SignedHttpRequest aka PoP (Proof-of-Possession)
- Creating and Validating JWEs (Json Web Encryptions)
- Caching in Microsoft.IdentityModel
- Resiliency on metadata refresh
- Use KeyVault extensions
- Signing key roll over