From 13cf1652be6006d888ccd933bb5534cec913f404 Mon Sep 17 00:00:00 2001
From: Johnnie Gray
Date: Fri, 27 Sep 2024 16:00:13 -0700
Subject: [PATCH] add some benchmarks to the readme

---
 README.md | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/README.md b/README.md
index 21b39e1..c2ffb43 100644
--- a/README.md
+++ b/README.md
@@ -64,6 +64,51 @@ tree.plot_rubberband()
 
 ![optimal-8x8-order](https://github.com/jcmgray/cotengrust/assets/8982598/f8e18ff2-5ace-4e46-81e1-06bffaef5e45)
 
+## Benchmarks
+
+The following benchmarks illustrate performance and may be a useful comparison point for other implementations.
+
+---
+
+First, the runtime of the optimal algorithm on random 3-regular graphs,
+with all bond sizes set to 2, for different `minimize` targets:
+
+
+
+Taken over 20 instances, lines show the mean and bands show the standard error of the mean. Note how much easier
+it is to find optimal paths for the *maximum* intermediate size or cost only (vs. the *total* over all
+contractions). While the runtime generally scales exponentially, for some specific geometries it might reduce to
+polynomial.
+
+---
+
+For very large graphs, the `random_greedy` optimizer is appropriate, and there is a tradeoff between how
+long one lets it run (`ntrials`) and the best cost it achieves. Here we plot these for various
+$N=L\times L$ square grid graphs, with all bond sizes set to 2, for different `ntrials`
+(labelled on each marker):
+
+
+
+Again, data is taken over 20 runs, with lines and bands showing the mean and standard error of the mean.
+In most cases 32-64 trials is sufficient to achieve close to convergence, but for larger or harder
+graphs you may need more. The empirical scaling of the random-greedy algorithm is very roughly
+$\mathcal{O}(N^{1.5})$ here.
+
+---
+
+The depth-20 Sycamore quantum circuit amplitude is a standard benchmark nowadays; it is generally
+a harder graph than the 2d lattice. Still, the random-greedy approach can do quite well due to its
+sampling of both temperature and `costmod`:
+
+
+
+Again, each point is an `ntrials` setting, and the lines and bands show the mean and standard error of the mean
+respectively, across 20 repeats. The dashed line shows roughly the best known result from other, more
+advanced methods.
+
+---
+
 ## API
 
 The optimize functions follow the api of the python implementations in `cotengra.pathfinders.path_basic.py`.