diff --git a/.github/README-img/banner.svg b/.github/README-img/banner.svg
new file mode 100644
index 000000000..80a5711b5
--- /dev/null
+++ b/.github/README-img/banner.svg
@@ -0,0 +1,43 @@
[43 added lines of SVG markup for the new banner image; the tag contents did not survive extraction.]
diff --git a/README.md b/README.md
index 2977bdf16..f3f1292d7 100644
--- a/README.md
+++ b/README.md
@@ -1,14 +1,16 @@
[Banner block updated; the HTML markup did not survive extraction. Recoverable text: the logo alt text "CLIP-as-service logo: The data structure for unstructured data" appears in both the removed and the added markup, the removed lines also carried the tagline "Embed images and sentences into fixed-length vectors with CLIP", and the hunk ends with the unchanged "PyPI" badge as context.]
@@ -37,8 +39,8 @@ CLIP-as-service is a low-latency high-scalability service for embedding images a
 ## Try it!

 An always-online server `api.clip.jina.ai` loaded with `ViT-L-14-336::openai` is there for you to play & test.
-Before you start, make sure you have obtained an access token from our [console website](https://console.clip.jina.ai/get_started),
-or via CLI as described in [this guide](https://docs.jina.ai/jina-ai-cloud/login/#create-a-new-pat).
+Before you start, make sure you have obtained a personal access token from the [Jina AI Cloud](https://cloud.jina.ai/settings/tokens),
+or via CLI as described in [this guide](https://docs.jina.ai/jina-ai-cloud/login/#create-a-new-pat):

 ```bash
 jina auth token create -e
@@ -750,11 +752,7 @@ Intrigued? That's only scratching the surface of what CLIP-as-service is capable
 ## Support

 - Join our [Slack community](https://slack.jina.ai) and chat with other community members about ideas.
-- Join our [Engineering All Hands](https://youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne) meet-up to discuss your use case and learn Jina's new features.
-  - **When?** The second Tuesday of every month
-  - **Where?**
-    Zoom ([see our public events calendar](https://calendar.google.com/calendar/embed?src=c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com&ctz=Europe%2FBerlin)/[.ical](https://calendar.google.com/calendar/ical/c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com/public/basic.ics))
-    and [live stream on YouTube](https://youtube.com/c/jina-ai)
+- Watch our [Engineering All Hands](https://youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne) to learn about Jina's new features and stay up to date with the latest AI techniques.
 - Subscribe to the latest video tutorials on our [YouTube channel](https://youtube.com/c/jina-ai)

 ## Join Us
diff --git a/docs/hosting/by-jina.md b/docs/hosting/by-jina.md
index 39fe22cf4..497775ea2 100644
--- a/docs/hosting/by-jina.md
+++ b/docs/hosting/by-jina.md
@@ -1,14 +1,21 @@
 # Hosted by Jina AI

+```{include} ../../README.md
+:start-after:
+:end-before:
+```
+
 Just like any other machine learning models, CLIP models have better performance when running on GPU. However, it is not always possible to have a GPU machine at hand, and it could be costly to configure a GPU machine.

 To make CLIP models more accessible, we provide a hosted service for CLIP models. You can send requests to our hosted service and get the embedding results back. An always-online server `api.clip.jina.ai` loaded with `ViT-L-14-336::openai` is there for you to play or develop your CLIP applications. The server is available for **encoding** and **ranking** tasks.

-`ViT-L-14-336::openai` was released in April 2022 and this is the best model within all models offered by [OpenAI](https://github.com/openai/CLIP/blob/main/clip/clip.py#L30) and also the best model when we developed this free service.
+`ViT-L-14-336::openai` was released in April 2022. It is the best of all the models offered by [OpenAI](https://github.com/openai/CLIP/blob/main/clip/clip.py#L30), and it was also the best model available when we developed this service.

-However, the "best model" is not always the best choice for your application. You may want to use a smaller model for faster response time, or a larger model for better accuracy. We provide the Inference API for you to customize your models, and this feature is currently in beta.
+However, the "best model" is not always the best choice for your application. You may want to use a smaller model for faster response time, or a larger model for better accuracy.
+With the [Inference](https://cloud.jina.ai/user/inference) service in [Jina AI Cloud](https://cloud.jina.ai/), you have the flexibility to choose the model that best suits your specific needs.

-Before you start, make sure you have obtained an access token from our [console website](https://console.clip.jina.ai/get_started), or via CLI as described in [this guide](https://docs.jina.ai/jina-ai-cloud/login/#create-a-new-pat)
+Before you start, make sure you have obtained a personal access token from the [Jina AI Cloud](https://cloud.jina.ai/settings/tokens),
+or via CLI as described in [this guide](https://docs.jina.ai/jina-ai-cloud/login/#create-a-new-pat):

 ```bash
 jina auth token create -e
diff --git a/docs/index.md b/docs/index.md
index 79e872786..6305b4f44 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,5 +1,10 @@
 # Welcome to CLIP-as-service!

+```{include} ../README.md
+:start-after:
+:end-before:
+```
+
 ```{include} ../README.md
 :start-after:
 :end-before:
@@ -8,8 +13,8 @@
 ## Try it!

 An always-online server `api.clip.jina.ai` loaded with `ViT-L-14-336::openai` is there for you to play & test.
-Before you start, make sure you have created an access token from our [console website](https://console.clip.jina.ai/get_started),
-or via CLI as described in [this guide](https://github.com/jina-ai/jina-hubble-sdk#create-a-new-pat).
+Before you start, make sure you have obtained a personal access token from the [Jina AI Cloud](https://cloud.jina.ai/settings/tokens),
+or via CLI as described in [this guide](https://docs.jina.ai/jina-ai-cloud/login/#create-a-new-pat):

 ```bash
 jina auth token create -e
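
The pages touched above all stop at token creation, while `docs/hosting/by-jina.md` describes `api.clip.jina.ai` as serving **encoding** and **ranking** requests. The following is a minimal sketch of how such a token would be used against that server from the Python client; it assumes the `clip-client` and `docarray` packages are installed, that the server exposes a gRPC-over-TLS endpoint on port 2096 (not stated in this diff), and uses placeholder values for the token and the example image URL.

```python
# Illustrative sketch only: endpoint port/scheme, the token value, and the image URL are assumptions.
from clip_client import Client
from docarray import Document

client = Client(
    'grpcs://api.clip.jina.ai:2096',                      # assumed gRPC-over-TLS endpoint
    credential={'Authorization': '<your access token>'},  # token from `jina auth token create`
)

# Encoding: embed sentences (or image URIs) into fixed-length vectors.
vectors = client.encode(['First do it', 'then do it right', 'then do it better'])
print(vectors.shape)  # one row per input sentence

# Ranking: score candidate captions against an image.
ranked = client.rank(
    [
        Document(
            uri='https://picsum.photos/id/237/300/300',   # placeholder image
            matches=[
                Document(text='a photo of a dog'),
                Document(text='a photo of a laptop'),
            ],
        )
    ]
)
print(ranked['@m', ['text', 'scores__clip_score__value']])
```

`encode` returns one fixed-length vector per input, and `rank` re-orders the `matches` of each query `Document` by their CLIP score, which is the cross-modal matching behaviour the hosted-service page describes.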