```shell
pytest tank/test_models.py

# Models included in the pytest suite are listed in all_models.csv.

# If on Linux, for multithreading on CPU (faster results):
pytest tank/test_models.py -n auto
```
### Running specific tests
```shell
# Search for test cases with a keyword that matches all or part of the test case's name:
pytest tank/test_models.py -k "keyword"

# Test cases are named uniformly by the format:
# test_module_<model_name_underscores_only>_<torch/tf>_<static/dynamic>_<device>

# Example: test all models on NVIDIA GPU:
pytest tank/test_models.py -k "cuda"

# Example: test all TensorFlow ResNet models on the Vulkan backend:
pytest tank/test_models.py -k "resnet and tf and vulkan"

# Exclude a test case:
pytest tank/test_models.py -k "not ..."
```

### Run benchmarks on SHARK tank pytests and generate bench_results.csv with results
(The following requires a source installation with `IMPORTER=1 ./setup_venv.sh`.)

```shell
pytest --benchmark tank/test_models.py

# Just do static GPU benchmarks for PyTorch tests:
pytest --benchmark tank/test_models.py -k "pytorch and static and cuda"
```
### Benchmark Resnet50, MiniLM on CPU

(Requires a source installation with `IMPORTER=1 ./setup_venv.sh`.)
```shell
# We suggest running the following commands as root before running benchmarks on CPU.

# Take the second SMT sibling thread of each physical core offline:
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | awk -F, '{print $2}' | sort -n | uniq | ( while read X ; do echo $X ; echo 0 > /sys/devices/system/cpu/cpu$X/online ; done )

# Disable turbo boost:
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo
```
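These settings take effect immediately and persist until reboot. A sketch of the inverse commands to restore SMT and turbo once benchmarking is done, assuming the same `intel_pstate` sysfs layout (run as root):

```shell
# Bring the previously offlined sibling threads back online:
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | awk -F, '{print $2}' | sort -n | uniq | ( while read X ; do echo 1 > /sys/devices/system/cpu/cpu$X/online ; done )

# Re-enable turbo boost:
echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo
```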
```shell
# Benchmark canonical Resnet50 on CPU via pytest:
pytest --benchmark tank/test_models.py -k "resnet50 and tf_static_cpu"

# Benchmark canonical MiniLM on CPU via pytest:
pytest --benchmark tank/test_models.py -k "MiniLM and cpu"

# Benchmark MiniLM on CPU via transformer-benchmarks:
git clone --recursive https://github.com/nod-ai/transformer-benchmarks.git
cd transformer-benchmarks
./perf-ci.sh -n

# Check detail.csv for MLIR/IREE results.
```
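Both flows emit CSV result files (`bench_results.csv` from the pytest flow, `detail.csv` from transformer-benchmarks). A minimal sketch for pulling out the slowest rows from a generated file — the assumption here is that a numeric timing value sits in column 2, which is not the files' documented schema; adjust `-k2` to match the actual header row of your file:

```shell
# Assumed layout: numeric benchmark time in CSV column 2 (verify against your header).
# Skip the header row, sort numerically descending, show the 5 slowest entries:
if [ -f bench_results.csv ]; then
  tail -n +2 bench_results.csv | sort -t, -k2 -nr | head -5
fi
```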
To run the fine-tuning example, from the root SHARK directory, run: