
Commit

fix(docs): broken link (#3537)
Co-authored-by: Sauyon Lee <[email protected]>
Co-authored-by: Chaoyu <[email protected]>
3 people authored Mar 14, 2023
1 parent 3a93d1e commit 5881777
Showing 18 changed files with 61 additions and 37 deletions.
2 changes: 1 addition & 1 deletion docs/source/concepts/bento.rst
Original file line number Diff line number Diff line change
@@ -456,7 +456,7 @@ which also allow users to add or modify labels at any time.
Files to include
^^^^^^^^^^^^^^^^

In the example :ref:`above </concepts/bento:The Build Command>`, the :code:`*.py` includes every Python files under ``build_ctx``.
In the example :ref:`above <concepts/bento:The Build Command>`, the :code:`*.py` pattern includes every Python file under ``build_ctx``.
You can also use other wildcard and directory matching patterns.
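
As an illustration of the glob semantics involved, Python's ``fnmatch`` behaves similarly for the ``*.py`` pattern (a sketch with made-up file names, not BentoML's actual include implementation):

```python
from fnmatch import fnmatch

# Hypothetical files under ``build_ctx``; only the glob semantics of the
# ``*.py`` pattern are illustrated here.
files = ["service.py", "train.py", "requirements.txt", "README.md"]

included = [f for f in files if fnmatch(f, "*.py")]
print(included)  # ['service.py', 'train.py']
```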

.. code-block:: yaml
2 changes: 1 addition & 1 deletion docs/source/concepts/model.rst
@@ -356,7 +356,7 @@ From the example above, the :code:`iris_clf_runner.predict.run` call will pass
the function input to the model's :code:`predict` method, running from a remote runner
process.

For many :doc:`other ML frameworks <frameworks/index>`, the model object's inference
For many :doc:`other ML frameworks </frameworks/index>`, the model object's inference
method may not be called :code:`predict`. Users can customize it by specifying the model
signature during :code:`save_model`:
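
As a sketch of the idea (the model class, its ``infer`` method, and the dispatch below are illustrative; BentoML's real runner machinery is more involved):

```python
# Hypothetical model whose inference method is named ``infer`` instead of
# ``predict``; the signature name tells the framework which method to expose.
class MyModel:
    def infer(self, xs):
        return [x * 2 for x in xs]

signatures = {"infer": {"batchable": False}}

# Simplified dispatch: look up the signature name and call that method.
model = MyModel()
method_name = next(iter(signatures))
result = getattr(model, method_name)([1, 2, 3])
print(result)  # [2, 4, 6]
```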

11 changes: 5 additions & 6 deletions docs/source/concepts/service.rst
@@ -65,7 +65,7 @@ BentoML provides a convenient way of creating a Runner instance from a saved model
runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
.. tip::
Users can also create custom Runners via the :doc:`Runner and Runnable interface <concepts/runner>`.
Users can also create custom Runners via the :doc:`Runner and Runnable interface </concepts/runner>`.


A Runner created from a model will automatically choose the most optimal Runner
@@ -76,7 +76,7 @@ natively, BentoML will create a single global instance of the runner worker and
all API requests to the global instance; otherwise, BentoML will create multiple
instances of runners based on the available system resources. We also let advanced
users customize the runtime configurations to fine-tune the runner performance. To learn
more, please see the :doc:`concepts/runner` guide.
more, see the :doc:`introduction to Runners </concepts/runner>`.

Debugging Runners
^^^^^^^^^^^^^^^^^
@@ -146,8 +146,7 @@ this URL via the ``route`` option, e.g.:
.. code-block:: python
@svc.api(
input=NumpyNdarray(),
output=NumpyNdarray(),
input=NumpyNdarray(), output=NumpyNdarray(),
route="/v2/models/my_model/versions/v0/infer",
)
def predict(input_array: np.ndarray) -> np.ndarray:
@@ -243,7 +242,7 @@ The data type and shape of the ``NumpyNdarray`` can be specified with the ``dtype``
and ``shape`` arguments. By setting the ``enforce_shape`` and ``enforce_dtype``
arguments to ``True``, the IO descriptor will strictly validate the input and output data
based on the specified data type and shape. To learn more, see the IO descriptor reference for
:ref:`reference/api_io_descriptors:NumPy ndarray`.
:ref:`reference/api_io_descriptors:NumPy ``ndarray```.
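
The strict validation described above can be approximated in plain NumPy (a sketch only; the real IO descriptor raises BentoML-specific errors and integrates with request handling):

```python
import numpy as np

# A sketch of dtype/shape enforcement; ``-1`` marks a dimension of any size.
def validate(arr: np.ndarray, dtype: str = "float32", shape=(-1, 4)) -> np.ndarray:
    if arr.dtype != np.dtype(dtype):
        raise TypeError(f"expected dtype {dtype}, got {arr.dtype}")
    if len(shape) != arr.ndim or any(
        want not in (-1, got) for want, got in zip(shape, arr.shape)
    ):
        raise ValueError(f"expected shape {shape}, got {arr.shape}")
    return arr

validate(np.zeros((2, 4), dtype="float32"))  # passes the checks
```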

.. code-block:: python
@@ -336,7 +335,7 @@ Built-in Types
^^^^^^^^^^^^^^

Besides ``NumpyNdarray``, BentoML supports a variety of other built-in IO descriptor
types under the :doc:`bentoml.io <reference/api_io_descriptors>` module. Each type comes
types under the :doc:`bentoml.io </reference/api_io_descriptors>` module. Each type comes
with support for type validation and OpenAPI specification generation. For example:

+-----------------+---------------------+---------------------+-------------------------+
9 changes: 6 additions & 3 deletions docs/source/frameworks/onnx.rst
@@ -289,8 +289,7 @@ Refer to :ref:`concepts/model:Model Signatures` and :ref:`Batching behaviour <co

.. note::

BentoML internally use `onnxruntime.InferenceSession
<https://onnxruntime.ai/docs/api/python/api_summary.html#inferencesession>`_
BentoML internally uses |onnxruntime_inferencesession|_
to run inference. When the original model is converted to ONNX
format and loaded by ``onnxruntime.InferenceSession``, the
inference method of the original model is converted to the ``run``
@@ -394,7 +393,7 @@ Building a Service for **ONNX**

In the above example, notice that both ``run`` and ``async_run`` appear in ``runner.run.async_run(input_data)`` inside the inference code. The distinction between ``run`` and ``async_run`` is as follows:

1. The ``run`` refers to `onnxruntime.InferenceSession <https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/session/inference_session.cc>`_'s ``run`` method, which is ONNX Runtime API to run `inference <https://onnxruntime.ai/docs/api/python/api_summary.html#data-inputs-and-outputs>`_.
1. The ``run`` refers to |onnxruntime_inferencesession|_'s ``run`` method, which is the ONNX Runtime API to run `inference <https://onnxruntime.ai/docs/api/python/api_summary.html#data-inputs-and-outputs>`_.
2. The ``async_run`` refers to BentoML's runner inference API for invoking a model's signature. In the case of ONNX, it happens to have a name similar to the ``InferenceSession`` endpoint.
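
The sync/async pairing can be sketched as follows (a simplified stand-in; BentoML generates both entry points from the model's signatures):

```python
import asyncio

# A minimal stand-in for a runner method exposing sync and async entry points.
class RunnerMethod:
    def __init__(self, fn):
        self._fn = fn

    def run(self, *args):
        # Synchronous inference call.
        return self._fn(*args)

    async def async_run(self, *args):
        # Same computation, awaitable from async service code.
        return self._fn(*args)

method = RunnerMethod(lambda xs: [x + 1 for x in xs])
print(method.run([1, 2]))                     # [2, 3]
print(asyncio.run(method.async_run([1, 2])))  # [2, 3]
```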


@@ -578,3 +577,7 @@ override the default setting using ``with_options`` when creating a runner:
.. seealso::

`Execution Providers' documentation <https://onnxruntime.ai/docs/execution-providers/>`_

.. _onnxruntime_inferencesession: https://onnxruntime.ai/docs/api/python/api_summary.html#inferencesession

.. |onnxruntime_inferencesession| replace:: ``onnxruntime.InferenceSession``
3 changes: 1 addition & 2 deletions docs/source/frameworks/pytorch.rst
@@ -7,7 +7,7 @@ BentoML provides native support for serving and deploying models trained from PyTorch.
Preface
-------

If you have already compiled your PyTorch model to TorchScript, you might consider to use :doc:`bentoml.torchscript </reference/frameworks/torchscript>`. BentoML provides first-class support for TorchScript, hence using ``bentoml.torchscript`` is less prone to compatibility issues during production.
If you have already compiled your PyTorch model to TorchScript, you should consider using BentoML's first-class module :doc:`bentoml.torchscript </reference/frameworks/torchscript>` instead, as it is less likely to cause compatibility issues during production.

.. note::

@@ -165,7 +165,6 @@ The signatures used for creating a Runner are ``{"__call__": {"batchable": False}}``.
bentoml.pytorch.save(model, "my_model", signatures={"__call__": {"batch_dim": 0, "batchable": True}})
Building a Service
------------------

2 changes: 1 addition & 1 deletion docs/source/frameworks/sklearn.rst
@@ -33,4 +33,4 @@ Below is a simple example of using scikit-learn with BentoML:
.. note::

You can find more examples for **scikit-learn** in our `bentoml/examples https://github.com/bentoml/BentoML/tree/main/examples`_ directory.
You can find more examples for **scikit-learn** in our :github:`bentoml/examples <bentoml/BentoML/tree/main/examples>` directory.
4 changes: 2 additions & 2 deletions docs/source/frameworks/tensorflow.rst
@@ -8,7 +8,7 @@ serving and deploying models trained from TensorFlow.
Preface
-------

Even though ``bentoml.tensorflow`` supports Keras model, we recommend our users to use :ref:`bentoml.keras <frameworks/keras>` for better development experience.
Even though ``bentoml.tensorflow`` supports Keras models, we recommend using :doc:`bentoml.keras </frameworks/keras>` for a better development experience.

If you must use TensorFlow for your Keras model, make sure that your Keras model inference callback (such as ``predict``) is decorated with :obj:`~tf.function`.

@@ -20,7 +20,7 @@ If you must use TensorFlow for your Keras model, make sure that your Keras model

.. note::

:bdg-info:`Remarks:` We recommend users apply model optimization techniques such as **distillation** or **quantization**. Alternatively, Keras models can also be converted to :ref:`ONNX <frameworks/onnx>` models and leverage different runtimes.
:bdg-info:`Remarks:` We recommend users apply model optimization techniques such as **distillation** or **quantization**. Alternatively, Keras models can also be converted to :doc:`ONNX </frameworks/onnx>` models and leverage different runtimes.

Compatibility
-------------
1 change: 1 addition & 0 deletions docs/source/guides/client.rst
@@ -54,6 +54,7 @@ For multipart requests, all arguments to the function must currently be keyword arguments.
For example, for the service API function:

.. code-block:: python

   @svc.api(input=Multipart(a=Text(), b=Text()), output=JSON())
   def combine(a, b) -> dict[typing.Any, typing.Any]:
       return {a: b}
2 changes: 1 addition & 1 deletion docs/source/guides/graph.rst
@@ -51,7 +51,7 @@ Create :ref:`Runners <concepts/runner:Using Runners>` for the three text generation models.
Create Service
##############

Create a :ref:`Service <concept/service:Service and APIs>` named ``inference_graph`` and specify the runners created earlier in the ``runners`` argument.
Create a :doc:`Service </concepts/service>` named ``inference_graph`` and specify the runners created earlier in the ``runners`` argument.

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source/guides/logging.rst
@@ -51,7 +51,7 @@ Logging Configuration

Access logs can be configured by setting the appropriate flags in the bento configuration file for
both web requests and model serving requests. Read more about how to use a bento configuration file
here in the - :ref:`Configuration Guide <guides/configuration>`
here in the :doc:`Configuration Guide </guides/configuration>`.

To configure other logs, please use the `default Python logging configuration <https://docs.python.org/3/howto/logging.html>`_. All BentoML logs are logged under the ``bentoml`` namespace.

10 changes: 5 additions & 5 deletions docs/source/guides/metrics.rst
@@ -2,9 +2,11 @@
Metrics
=======

*time expected: 6 minutes*

Metrics are measurements of statistics about your service, which can provide information about the usage and performance of your bentos in production.

BentoML allows users to define custom metrics with `Prometheus <https://prometheus.io/docs/introduction/overview/>`_ to easily enable monitoring for their Bentos.
BentoML allows users to define custom metrics with |prometheus|_ to easily enable monitoring for their Bentos.

This article will dive into the default metrics and how to add custom metrics for
either a :ref:`concepts/runner:Custom Runner` or :ref:`Service <concepts/service:Service and APIs>`.
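
As a conceptual sketch of a counter-style metric (plain Python illustrating the Prometheus counter semantics; not BentoML's actual metrics API):

```python
# A toy counter mirroring the Prometheus counter concept: a named,
# monotonically increasing value. Name and docstring are illustrative.
class Counter:
    def __init__(self, name: str, documentation: str):
        self.name = name
        self.documentation = documentation
        self.value = 0.0

    def inc(self, amount: float = 1.0) -> None:
        if amount < 0:
            raise ValueError("counters can only increase")
        self.value += amount

requests_total = Counter("my_service_requests_total", "Total inference requests.")
requests_total.inc()
requests_total.inc(2)
print(requests_total.value)  # 3.0
```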
@@ -255,8 +257,6 @@ Visit `http://localhost:9090/graph <http://localhost:9090/graph>`_ and use the following
to get started.
----
.. rubric:: Notes
.. _prometheus: https://prometheus.io/
.. [#prometheus] `Prometheus <https://prometheus.io/>`_
.. |prometheus| replace:: Prometheus
8 changes: 4 additions & 4 deletions docs/source/guides/migration.rst
@@ -113,8 +113,7 @@ Next, we will transform the service definition module and break down each section.
Environment
~~~~~~~~~~~

BentoML version 0.13.1 relies on the :code:`@env`
`decorator API <https://docs.bentoml.org/en/0.13-lts/concepts.html#defining-service-environment>`_ for defining the
BentoML version 0.13.1 relies on the :code:`@env` decorator API for defining the
environment settings and dependencies of the service. Typical arguments of the environment decorator include Python
dependencies (e.g. :code:`pip_packages`, :code:`pip_index_url`), Conda dependencies (e.g. :code:`conda_channels`,
:code:`conda_dependencies`), and Docker options (e.g. :code:`setup_sh`, :code:`docker_base_image`).
@@ -148,8 +147,9 @@ Artifacts
~~~~~~~~~

BentoML version 0.13.1 provides the :code:`@artifacts`
`decorator API <https://docs.bentoml.org/en/0.13-lts/concepts.html#packaging-model-artifacts>`_ for users to specify
the trained models required by a BentoService. The specified artifacts are automatically serialized and deserialized
decorator API for users to specify
the trained models required by a BentoService.
The specified artifacts are automatically serialized and deserialized
when saving and loading a BentoService.

.. code-block:: python
1 change: 0 additions & 1 deletion docs/source/guides/tracing.rst
@@ -8,7 +8,6 @@ This guide dives into the :wiki:`tracing <Tracing_(software)>` capabilities that

BentoML allows users to export traces with `Zipkin <https://zipkin.io/>`_,
`Jaeger <https://www.jaegertracing.io/>`_ and `OTLP <https://opentelemetry.io/>`_.

This guide will also provide a simple example of how to use BentoML tracing with `Jaeger <https://www.jaegertracing.io/>`_.

Why do you need this?
2 changes: 1 addition & 1 deletion docs/source/integrations/spark.rst
@@ -27,7 +27,7 @@ be installed in the Spark cluster. Most likely, the service you are hosting Spar
mechanisms for doing this. If you are using a standalone cluster, you should install those
dependencies on every node you expect to use.

Finally, we use the quickstart bento from the :ref:`aforementioned tutorial <tutorial>`. If you have
Finally, we use the quickstart bento from the :doc:`aforementioned tutorial </tutorial>`. If you have
already followed that tutorial, you should already have that bento. If you have not, simply run the
following:

12 changes: 7 additions & 5 deletions docs/source/reference/frameworks/index.rst
@@ -10,16 +10,18 @@ Framework APIs
.. toctree::
:maxdepth: 1

catboost
fastai
keras
mlflow
onnx
sklearn
transformers
flax
tensorflow
torchscript
transformers
xgboost
picklable_model
lightgbm
mlflow
catboost
fastai
keras
picklable_model
ray
4 changes: 2 additions & 2 deletions docs/source/reference/frameworks/picklable_model.rst
@@ -2,9 +2,9 @@
Pickable Model
==============

This is an API reference the :code:`bentoml.picklable_model` module, which can be used for custom
This is the API reference for the ``bentoml.picklable_model`` module, which can be used for custom
Python-based ML models in BentoML. To learn more, visit
:doc:`Pickable Model </frameworks/pickable>`.
:doc:`Pickable Model </frameworks/picklable>`.



2 changes: 1 addition & 1 deletion docs/source/reference/frameworks/tensorflow.rst
@@ -11,7 +11,7 @@ TensorFlow

.. note::

You can find more examples for **TensorFlow** in our `bentoml/examples https://github.com/bentoml/BentoML/tree/main/examples`_ directory.
You can find more examples for **TensorFlow** in our :github:`bentoml/examples <bentoml/BentoML/tree/main/examples>` directory.

.. currentmodule:: bentoml.tensorflow

21 changes: 21 additions & 0 deletions docs/source/reference/frameworks/torchscript.rst
@@ -0,0 +1,21 @@
===========
TorchScript
===========

.. admonition:: About this page

This is the API reference for TorchScript in BentoML.
You can find more information about TorchScript in the `official documentation <https://pytorch.org/docs/stable/jit.html>`_.


.. note::

You can find more examples for **TorchScript** in our :github:`bentoml/examples <bentoml/BentoML/tree/main/examples>` directory.

.. currentmodule:: bentoml.torchscript

.. autofunction:: bentoml.torchscript.save_model

.. autofunction:: bentoml.torchscript.load_model

.. autofunction:: bentoml.torchscript.get
