chore: replace docarray v1v2 to version number
Signed-off-by: Han Xiao <[email protected]>
hanxiao committed Oct 25, 2024
1 parent 984da92 commit ce97c32
Showing 60 changed files with 313 additions and 313 deletions.
10 changes: 5 additions & 5 deletions docs/cloud-nativeness/docker-compose.md
@@ -9,7 +9,7 @@ that operates on `Documents`. These `Executors` live in different runtimes depen
your Flow.

By default, if you are serving your Flow locally, they live within processes. Nevertheless,
because Jina is cloud native, your Flow can easily manage Executors that live in containers and that are
because Jina-serve is cloud native, your Flow can easily manage Executors that live in containers and that are
orchestrated by your favorite tools. One of the simplest is Docker Compose which is supported out of the box.

You can deploy a Flow with Docker Compose in one line:
@@ -24,7 +24,7 @@ flow = Flow(...).add(...).add(...)
flow.to_docker_compose_yaml('docker-compose.yml')
```

Jina generates a `docker-compose.yml` configuration file corresponding to your Flow. You can use this directly with
Jina-serve generates a `docker-compose.yml` configuration file corresponding to your Flow. You can use this directly with
Docker Compose, avoiding the overhead of manually defining all of your Flow's services.
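
As a minimal sketch, the full round trip could look like this. The Executor images below are placeholders (they are not part of this commit); any `jinaai+docker://` or `docker://` image works:

```python
from jina import Flow

# Hypothetical Flow with two containerized Executors: one pulled from Executor Hub,
# one from a local Docker image.
flow = (
    Flow(port=8080)
    .add(uses='jinaai+docker://my-org/MyEncoder')   # placeholder Hub Executor
    .add(uses='docker://my-org/my-indexer:latest')  # placeholder local image
)

# Writes a docker-compose.yml describing all of the Flow's services.
flow.to_docker_compose_yaml('docker-compose.yml')
```

You can then start everything with `docker compose up` as usual.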

````{admonition} Use Docker-based Executors
@@ -34,15 +34,15 @@ All Executors in the Flow should be used with `jinaai+docker://...` or `docker:/

````{admonition} Health check available from 3.1.3
:class: caution
If you use Executors that rely on Docker images built with a version of Jina prior to 3.1.3, remove the
If you use Executors that rely on Docker images built with a version of Jina-serve prior to 3.1.3, remove the
health check from the dumped YAML file, otherwise your Docker Compose services will
always be "unhealthy."
````

````{admonition} Matching Jina versions
````{admonition} Matching Jina-serve versions
:class: caution
If you change the Docker images in your Docker Compose generated file, ensure that all services included in
the Gateway are built with the same Jina version to guarantee compatibility.
the Gateway are built with the same Jina-serve version to guarantee compatibility.
````

## Example: Index and search text using your own built Encoder and Indexer
28 changes: 14 additions & 14 deletions docs/cloud-nativeness/k8s.md
@@ -7,15 +7,15 @@
kubernetes
```

Jina is a cloud-native framework and therefore runs natively and easily on Kubernetes.
Deploying a Jina Deployment or Flow on Kubernetes is actually the recommended way to use Jina in production.
Jina-serve is a cloud-native framework and therefore runs natively and easily on Kubernetes.
Deploying a Jina-serve Deployment or Flow on Kubernetes is actually the recommended way to use Jina-serve in production.

A {class}`~jina.Deployment` and a {class}`~jina.Flow` are services composed of one or more microservices, called {class}`~jina.Executor`s and {class}`~jina.Gateway`s, which natively run in containers. This means that Kubernetes can natively take over the lifetime management of Executors.

Deploying a {class}`~jina.Deployment` or {class}`~jina.Flow` on Kubernetes means wrapping these services' containers in the appropriate K8s abstraction (Deployment, StatefulSet, and so on), exposing them internally via K8s Services, and connecting them together by passing the right set of parameters.

```{hint}
This documentation is designed for users who want to **manually** deploy a Jina project on Kubernetes.
This documentation is designed for users who want to **manually** deploy a Jina-serve project on Kubernetes.
Check out {ref}`jcloud` if you want a **one-click** solution to deploy and host Jina, leveraging a cloud-native stack of Kubernetes, Prometheus and Grafana, **without worrying about provisioning**.
```
@@ -29,28 +29,28 @@ translation work automatically.
```

This helper function can be called from:
* Jina's Python interface to translate a Flow defined in Python to K8s YAML files
* Jina's CLI interface to export a YAML Flow to K8s YAML files
* Jina-serve's Python interface to translate a Flow defined in Python to K8s YAML files
* Jina-serve's CLI interface to export a YAML Flow to K8s YAML files

```{seealso}
More detail in the {ref}`Deployment export documentation<deployment-kubernetes-export>` and {ref}`Flow export documentation <flow-kubernetes-export>`
```
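
As a quick sketch of the Python route (the Flow definition and output directory here are assumptions, not part of this commit):

```python
from jina import Flow

# Hypothetical Flow with a single containerized Executor.
f = Flow(port=8080).add(uses='jinaai+docker://my-org/MyEncoder')

# Translates the Flow into a set of Kubernetes YAML files for the Executors and the Gateway.
f.to_kubernetes_yaml('./k8s_flow', k8s_namespace='my-namespace')
```

The CLI route mentioned above produces equivalent files from a Flow YAML definition.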

## Extra Kubernetes options

In general, Jina follows a single principle when it comes to deploying in Kubernetes:
In general, Jina-serve follows a single principle when it comes to deploying in Kubernetes:
You, the user, know your use case and requirements the best.
This means that, while Jina generates configurations for you that run out of the box, as a professional user you should always see them as just a starting point to get you off the ground.
This means that, while Jina-serve generates configurations for you that run out of the box, as a professional user you should always see them as just a starting point to get you off the ground.

```{hint}
The export functions {meth}`~jina.Deployment.to_kubernetes_yaml` and {meth}`~jina.Flow.to_kubernetes_yaml` are helpers to get you off the ground. **They are meant to be updated and adapted to every use case.**
```
````{admonition} Matching Jina versions
:class: caution
If you change the Docker images for {class}`~jina.Executor` and {class}`~jina.Gateway` in your Kubernetes-generated file, ensure that all of them are built with the same Jina version to guarantee compatibility.
If you change the Docker images for {class}`~jina-serve.Executor` and {class}`~jina-serve.Gateway` in your Kubernetes-generated file, ensure that all of them are built with the same Jina-serve version to guarantee compatibility.
````

You can't add basic Kubernetes features like `Secrets`, `ConfigMap` or `Labels` via the Pythonic or YAML interface. This is intentional and doesn't mean that we don't support these features. On the contrary, we let you fully express your Kubernetes configuration by using the Kubernetes API to add your own Kubernetes standard to Jina.
You can't add basic Kubernetes features like `Secrets`, `ConfigMap` or `Labels` via the Pythonic or YAML interface. This is intentional and doesn't mean that we don't support these features. On the contrary, we let you fully express your Kubernetes configuration by using the Kubernetes API to add your own Kubernetes standard to Jina-serve.
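
For illustration only (the file layout, the label key and the use of PyYAML are assumptions, not part of Jina-serve's API), post-processing the generated manifests could look like this:

```python
import glob

import yaml  # assumes PyYAML is installed

# Attach a custom label to every Kubernetes Deployment that
# to_kubernetes_yaml() generated, before applying the files with kubectl.
for path in glob.glob('./k8s_flow/**/*.yml', recursive=True):
    with open(path) as fp:
        manifests = list(yaml.safe_load_all(fp))
    for manifest in manifests:
        if manifest and manifest.get('kind') == 'Deployment':
            manifest.setdefault('metadata', {}).setdefault('labels', {})['team'] = 'search'
    with open(path, 'w') as fp:
        yaml.safe_dump_all(manifests, fp)
```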

````{admonition} Hint
:class: hint
@@ -82,7 +82,7 @@ generated by `to_kubernetes_yaml()` already include all necessary annotations fo

````{admonition} Hint
:class: hint
You can use any service mesh with Jina, but Jina Kubernetes configurations come with Linkerd annotations out of the box.
You can use any service mesh with Jina-serve, but Jina-serve Kubernetes configurations come with Linkerd annotations out of the box.
````

To use Linkerd you can follow the [install the Linkerd CLI guide](https://linkerd.io/2.11/getting-started/).
@@ -111,7 +111,7 @@ Check {ref}`here <scale-out>` for more information about these scaling mechanism
For shards, Jina creates one separate Deployment in Kubernetes per Shard.
Setting `Deployment(..., shards=num_shards)` is sufficient to create a corresponding Kubernetes configuration.

For replicas, Jina uses [Kubernetes native replica scaling](https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/) and **relies on a service mesh** to load-balance requests between replicas of the same Executor.
For replicas, Jina-serve uses [Kubernetes native replica scaling](https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/) and **relies on a service mesh** to load-balance requests between replicas of the same Executor.
Without a service mesh installed in your Kubernetes cluster, all traffic will be routed to the same replica.
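
A short sketch combining both options (the Executor image and the counts are placeholders):

```python
from jina import Deployment

# shards=2 produces two separate Kubernetes Deployments; replicas=3 relies on
# Kubernetes-native scaling (and a service mesh for load-balancing) within each shard.
d = Deployment(
    uses='jinaai+docker://my-org/MyIndexer',
    shards=2,
    replicas=3,
)
d.to_kubernetes_yaml('./k8s_indexer', k8s_namespace='my-namespace')
```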

````{admonition} See Also
@@ -129,7 +129,7 @@ This can be done in a Pythonic way or in YAML:

````{tab} Using Python
You can use {meth}`~jina.Flow.config_gateway` to add the `replicas` parameter
You can use {meth}`~jina-serve.Flow.config_gateway` to add the `replicas` parameter
```python
from jina import Flow
@@ -159,8 +159,8 @@ You can use a custom Docker image for the Gateway deployment by setting the envi
````

## See also
- {ref}`Step by step deployment of a Jina Flow on Kubernetes <kubernetes>`
- {ref}`Step by step deployment of a Jina-serve Flow on Kubernetes <kubernetes>`
- {ref}`Export a Flow to Kubernetes <kubernetes-export>`
- {meth}`~jina.Flow.to_kubernetes_yaml`
- {meth}`~jina-serve.Flow.to_kubernetes_yaml`
- {ref}`Deploy a standalone Executor on Kubernetes <kubernetes-executor>`
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
12 changes: 6 additions & 6 deletions docs/cloud-nativeness/kubernetes.md
Expand Up @@ -4,15 +4,15 @@
This how-to will go through deploying a Deployment and a simple Flow using Kubernetes, customizing the Kubernetes configuration
to your needs, and scaling Executors using replicas and shards.

Deploying Jina services in Kubernetes is the recommended way to use Jina in production because Kubernetes can easily take over the lifetime management of Executors and Gateways.
Deploying Jina-serve services in Kubernetes is the recommended way to use Jina-serve in production because Kubernetes can easily take over the lifetime management of Executors and Gateways.

```{seealso}
This page is a step-by-step guide; refer to the {ref}`Kubernetes support documentation <kubernetes-docs>` for more details.
```


```{hint}
This guide is designed for users who want to **manually** deploy a Jina project on Kubernetes.
This guide is designed for users who want to **manually** deploy a Jina-serve project on Kubernetes.
Check out {ref}`jcloud` if you want a **one-click** solution to deploy and host Jina, leveraging a cloud-native stack of Kubernetes, Prometheus and Grafana, **without worrying about provisioning**.
```
Expand Down Expand Up @@ -256,14 +256,14 @@ Just ensure that the Executor is containerized, either by using *'jinaai+docker'
Executors <dockerize-exec>`.

Next, generate Kubernetes YAML configs from the Flow. Note that this step may be a little slow, because [Executor Hub](https://cloud.jina.ai/) may
adapt the image to your Jina and docarray version.
adapt the image to your Jina-serve and docarray version.

```python
d.to_kubernetes_yaml('./k8s_deployment', k8s_namespace='custom-namespace')
```

The following file structure will be generated - don't worry if it's slightly different -- there can be
changes from one Jina version to another:
changes from one Jina-serve version to another:

```
.
@@ -369,14 +369,14 @@ Executors <dockerize-exec>`.
The example Flow here simply encodes and indexes text data using two Executors pushed to the [Executor Hub](https://cloud.jina.ai/).

Next, generate Kubernetes YAML configs from the Flow. Note that this step may be a little slow, because [Executor Hub](https://cloud.jina.ai/) may
adapt the image to your Jina and docarray version.
adapt the image to your Jina-serve and docarray version.

```python
f.to_kubernetes_yaml('./k8s_flow', k8s_namespace='custom-namespace')
```

The following file structure will be generated - don't worry if it's slightly different -- there can be
changes from one Jina version to another:
changes from one Jina-serve version to another:

```
.
6 changes: 3 additions & 3 deletions docs/cloud-nativeness/monitoring.md
@@ -3,12 +3,12 @@

```{admonition} Deprecated
:class: caution
The Prometheus-only based feature will soon be deprecated in favor of the OpenTelemetry Setup. Refer to {ref}`OpenTelemetry Setup <opentelemetry>` for the details on OpenTelemetry setup for Jina.
The Prometheus-only based feature will soon be deprecated in favor of the OpenTelemetry Setup. Refer to {ref}`OpenTelemetry Setup <opentelemetry>` for the details on OpenTelemetry setup for Jina-serve.
Refer to the {ref}`OpenTelemetry migration guide <opentelemetry-migration>` for updating your existing Prometheus and Grafana configurations.
```

We recommend the Prometheus/Grafana stack to leverage the metrics exposed by Jina. In this setup, Jina exposes different metrics, and Prometheus scrapes these endpoints, as well as
We recommend the Prometheus/Grafana stack to leverage the metrics exposed by Jina-serve. In this setup, Jina-serve exposes different metrics, and Prometheus scrapes these endpoints, as well as
collecting, aggregating, and storing the metrics.

External entities (like Grafana) can access these aggregated metrics via the query language [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) and let users visualize the metrics with dashboards.
@@ -28,7 +28,7 @@ In this guide, we deploy the Prometheus/Grafana stack and use it to monitor a Fl
One challenge of monitoring a {class}`~jina.Flow` is communicating its different metrics endpoints to Prometheus.
Fortunately, the [Prometheus operator for Kubernetes](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md) makes this fairly easy because it can automatically discover new metrics endpoints to scrape.

We recommend deploying your Jina Flow on Kubernetes to leverage the full potential of the monitoring feature because:
We recommend deploying your Jina-serve Flow on Kubernetes to leverage the full potential of the monitoring feature because:
* The Prometheus operator can automatically discover new endpoints to scrape.
* You can extend monitoring with the rich built-in Kubernetes metrics.
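
For example, here is a sketch of enabling the (soon-to-be-deprecated) Prometheus metrics endpoints on a Flow before exporting it to Kubernetes. The `monitoring` and `port_monitoring` arguments and the Executor image are assumptions; check them against your Jina-serve version:

```python
from jina import Flow

# Assumed arguments: monitoring=True exposes a Prometheus /metrics endpoint on every Pod,
# port_monitoring sets the port scraped on the Gateway.
f = (
    Flow(monitoring=True, port_monitoring=9090)
    .add(uses='jinaai+docker://my-org/MyEncoder')  # placeholder image
)
f.to_kubernetes_yaml('./k8s_flow', k8s_namespace='monitored')
```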

8 changes: 4 additions & 4 deletions docs/cloud-nativeness/opentelemetry.md
@@ -12,11 +12,11 @@ monitoring
Prometheus-only based metrics collection will soon be deprecated. Refer to {ref}`Monitor with Prometheus and Grafana <monitoring>` for the old setup.
```

There are two major setups required to visualize/monitor your application's signals using [OpenTelemetry](https://opentelemetry.io). The first setup is covered by Jina which integrates the [OpenTelemetry API and SDK](https://opentelemetry-python.readthedocs.io/en/stable/api/index.html) at the application level. The {ref}`Flow Instrumentation <instrumenting-flow>` page covers in detail the steps required to enable OpenTelemetry in a Flow. A {class}`~jina.Client` can also be instrumented which is documented in the {ref}`Client Instrumentation <instrumenting-client>` section.
There are two major setups required to visualize/monitor your application's signals using [OpenTelemetry](https://opentelemetry.io). The first setup is covered by Jina-serve which integrates the [OpenTelemetry API and SDK](https://opentelemetry-python.readthedocs.io/en/stable/api/index.html) at the application level. The {ref}`Flow Instrumentation <instrumenting-flow>` page covers in detail the steps required to enable OpenTelemetry in a Flow. A {class}`~jina.Client` can also be instrumented which is documented in the {ref}`Client Instrumentation <instrumenting-client>` section.

This section covers the OpenTelemetry infrastructure setup required to collect, store and visualize the traces and metrics data exported by the Pods. This setup is the user's responsibility, and this section only serves as the initial/introductory guide to running OpenTelemetry infrastructure components.

Since OpenTelemetry is open source and is mostly responsible for the API standards and specification, various providers implement the specification. This section follows the default recommendations from the OpenTelemetry documentation that also fits into the Jina implementations.
Since OpenTelemetry is open source and is mostly responsible for the API standards and specification, various providers implement the specification. This section follows the default recommendations from the OpenTelemetry documentation that also fits into the Jina-serve implementations.

## Exporting traces and metrics data

@@ -26,12 +26,12 @@ The push/export-based mechanism also allows the application to start pushing dat

You can configure the exporter backend host and port using the `traces_exporter_host`, `traces_exporter_port`, `metrics_exporter_host` and `metrics_exporter_port`. Even though the Collector is metric data-type agnostic (it accepts any type of OpenTelemetry API data model), we provide separate configuration for Tracing and Metrics to give you more flexibility in choosing infrastructure components.
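
A hedged sketch of wiring a Flow to a local Collector over gRPC; only the four exporter arguments come from the paragraph above, while the `tracing`/`metrics` toggles and the host format are assumptions:

```python
from jina import Flow

# Point every Pod's OTLP exporters at a Collector listening on localhost:4317.
f = Flow(
    tracing=True,                       # assumed flag that enables span export
    traces_exporter_host='localhost',
    traces_exporter_port=4317,
    metrics=True,                       # assumed flag that enables metric export
    metrics_exporter_host='localhost',
    metrics_exporter_port=4317,
).add(uses='jinaai+docker://my-org/MyEncoder')  # placeholder image
```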

Jina's default exporter implementations are `OTLPSpanExporter` and `OTLPMetricExporter`. The exporters also use the gRPC data transfer protocol. The following environment variables can be used to further configure the exporter client based on your requirements. The full list of exporter-related environment variables is documented by the [PythonSDK library](https://opentelemetry-python.readthedocs.io/en/latest/exporter/otlp/otlp.html). Apart from `OTEL_EXPORTER_OTLP_PROTOCOL` and `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`, you can use all other library-version-specific environment variables to configure the exporter clients.
Jina-serve's default exporter implementations are `OTLPSpanExporter` and `OTLPMetricExporter`. The exporters also use the gRPC data transfer protocol. The following environment variables can be used to further configure the exporter client based on your requirements. The full list of exporter-related environment variables is documented by the [PythonSDK library](https://opentelemetry-python.readthedocs.io/en/latest/exporter/otlp/otlp.html). Apart from `OTEL_EXPORTER_OTLP_PROTOCOL` and `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`, you can use all other library-version-specific environment variables to configure the exporter clients.


## Collector

The [Collector](https://opentelemetry.io/docs/collector/) is a huge ecosystem of components that support features like scraping, collecting, processing and further exporting data to storage backends. The collector itself can also expose endpoints to allow scraping data. We recommend reading the official documentation to understand the full set of features and configuration required to run a Collector. Read the section below to understand the minimum number of components and the respective configuration required for operating with Jina.
The [Collector](https://opentelemetry.io/docs/collector/) is a huge ecosystem of components that support features like scraping, collecting, processing and further exporting data to storage backends. The collector itself can also expose endpoints to allow scraping data. We recommend reading the official documentation to understand the full set of features and configuration required to run a Collector. Read the section below to understand the minimum number of components and the respective configuration required for operating with Jina-serve.

We recommend using the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) from the contrib repository. We also use:
- [Jaeger](https://www.jaegertracing.io) for collecting traces, visualizing tracing data and alerting based on tracing data.
6 changes: 3 additions & 3 deletions docs/concepts/client/callbacks.md
@@ -3,7 +3,7 @@

After performing {meth}`~jina.clients.mixin.PostMixin.post`, you may want to further process the obtained results.

For this purpose, Jina implements a promise-like interface, letting you specify three kinds of callback functions:
For this purpose, Jina-serve implements a promise-like interface, letting you specify three kinds of callback functions:

- `on_done` is executed while streaming, after successful completion of each request
- `on_error` is executed while streaming, whenever an error occurs in each request
@@ -17,12 +17,12 @@ For example, a `SIGKILL` from the client OS during the handling of the request,
will not trigger the callback.


Callback functions in Jina expect a `Response` of the type {class}`~jina.types.request.data.DataRequest`, which contains resulting Documents,
Callback functions in Jina-serve expect a `Response` of the type {class}`~jina.types.request.data.DataRequest`, which contains resulting Documents,
parameters, and other information.

## Handle DataRequest in callbacks

`DataRequest`s are objects that are sent by Jina internally. Callback functions process DataRequests, and `client.post()`
`DataRequest`s are objects that are sent by Jina-serve internally. Callback functions process DataRequests, and `client.post()`
can return DataRequests.
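
A minimal sketch (the port, the endpoint and the printed fields are illustrative assumptions):

```python
from jina import Client

def handle_success(resp):
    # resp is a DataRequest; resp.docs holds the resulting Documents.
    for doc in resp.docs:
        print(doc.id)

def handle_error(resp):
    print('request failed')

client = Client(port=12345)
client.post('/search', on_done=handle_success, on_error=handle_error)
```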

A `DataRequest` object can be seen as a container for data relevant to a given request; it contains the following fields: