
docs: add disk usage / memory usage benchmark table #751

Merged · 17 commits · Jun 15, 2022
26 changes: 13 additions & 13 deletions docs/user-guides/server.md
@@ -60,19 +60,19 @@ The procedure and UI of ONNX and TensorRT runtime would look the same as Pytorch

## Model support

OpenAI has released nine models so far. `ViT-B/32` is the default model in all runtimes. Due to the limitations of some runtimes, not every runtime supports all nine models. Please also note that different models produce different output dimensions, which affects your downstream applications: switching from one model to another makes your embeddings incomparable and breaks downstream applications. Here is a list of the models supported by each runtime and their corresponding output dimensions:

| Model          | PyTorch | ONNX | TensorRT | Output Dimension |
|----------------|---------|------|----------|------------------|
| RN50           | ✅      | ✅   | ✅       | 1024             |
| RN101          | ✅      | ✅   | ✅       | 512              |
| RN50x4         | ✅      | ✅   | ✅       | 640              |
| RN50x16        | ✅      | ✅   | ❌       | 768              |
| RN50x64        | ✅      | ✅   | ❌       | 1024             |
| ViT-B/32       | ✅      | ✅   | ✅       | 512              |
| ViT-B/16       | ✅      | ✅   | ✅       | 512              |
| ViT-L/14       | ✅      | ✅   | ✅       | 768              |
| ViT-L/14-336px | ✅      | ✅   | ❌       | 768              |
OpenAI has released nine models so far. `ViT-B/32` is the default model in all runtimes. Due to the limitations of some runtimes, not every runtime supports all nine models. Please also note that different models produce different output dimensions, which affects your downstream applications: switching from one model to another makes your embeddings incomparable and breaks downstream applications. Below is a list of the models supported by each runtime and their corresponding output dimensions. We also include the disk usage (as a delta) and the peak RAM and VRAM usage (as deltas), measured on a single Nvidia TITAN RTX GPU (24GB VRAM) with the default `minibatch_size=32` on the server and the default `batch_size=8` on the client; a minimal client sketch follows the table.

| Model | PyTorch | ONNX | TensorRT | Output Dimension | Disk Usage (MB) | Peak RAM Usage (GB) | Peak VRAM Usage (GB) |
|----------------|---------|------|----------|------------------|-----------------|---------------------|----------------------|
| RN50 | ✅ | ✅ | ✅ | 1024 | 256 | 2.99 | 1.36 |
| RN101 | ✅ | ✅ | ✅ | 512 | 292 | 3.51 | 1.40 |
| RN50x4 | ✅ | ✅ | ✅ | 640 | 422 | 3.23 | 1.63 |
| RN50x16 | ✅ | ✅ | ❌ | 768 | 661 | 3.63 | 2.02 |
| RN50x64 | ✅ | ✅ | ❌ | 1024 | 1382 | 4.08 | 2.98 |
| ViT-B/32 | ✅ | ✅ | ✅ | 512 | 351 | 3.20 | 1.40 |
| ViT-B/16 | ✅ | ✅ | ✅ | 512 | 354 | 3.20 | 1.44 |
| ViT-L/14 | ✅ | ✅ | ✅ | 768 | 933 | 3.66 | 2.04 |
| ViT-L/14-336px | ✅ | ✅ | ❌ | 768 | 934 | 3.74 | 2.23 |
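
For example, you can confirm which model a server is running by checking the shape of the embeddings it returns. The snippet below is a minimal sketch using the `clip_client` package; the server address and the example prompt are assumptions for illustration, not values from this PR.

```python
from clip_client import Client

# Assumed address; point this at wherever your clip-server is listening.
c = Client('grpc://0.0.0.0:51000')

# `batch_size` controls how many inputs the client sends per request
# (the benchmark above uses the default of 8).
vecs = c.encode(['a photo of a cat'], batch_size=8)

# Prints (1, 512) for the default ViT-B/32; RN50 would give (1, 1024).
print(vecs.shape)
```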


## YAML config
2 changes: 1 addition & 1 deletion scripts/benchmark.py
@@ -79,7 +79,7 @@ def run(self):
        time_costs = []
        for _ in range(self.num_iter):
            start = time.perf_counter()
            r = client.encode(batch)
            r = client.encode(batch, batch_size=self.batch_size)
            time_costs.append(time.perf_counter() - start)
        self.avg_time = np.mean(time_costs[2:])
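
The change above forwards the client-side `batch_size` into `encode`, so the benchmark actually exercises the batch size it reports. Below is a self-contained sketch of the same timing pattern; the server address, payload, and iteration count are illustrative assumptions, not values from the script.

```python
import time

import numpy as np
from clip_client import Client

client = Client('grpc://0.0.0.0:51000')  # assumed address
batch = ['a photo of a cat'] * 64        # assumed payload
num_iter = 10                            # assumed iteration count

time_costs = []
for _ in range(num_iter):
    start = time.perf_counter()
    client.encode(batch, batch_size=8)  # batch_size is forwarded, as in this PR
    time_costs.append(time.perf_counter() - start)

# Skip the first two iterations as warm-up, matching `time_costs[2:]` above.
avg_time = np.mean(time_costs[2:])
print(f'average latency over the last {num_iter - 2} runs: {avg_time:.3f}s')
```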
