
# Hello World

This is the first part of a collection of templates we are working on to promote the concept of Model as a Service (MaaS), mainly revolving around Firebase, Modal and Stripe. Modal is one of the most user-friendly and cheapest ways to deploy your model and create an inference endpoint API. This example shows how simple it is to deploy Mistral 7B Instruct v0.1 - GGUF on Modal with only a few lines of code, but you can change it to any model supported by llama.cpp.
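
For orientation, here is a minimal sketch of what such a deployment can look like, assuming `llama-cpp-python` and a GGUF download from the Hugging Face Hub. It is illustrative only and is not the actual `chitchat-cpu.py` / `chitchat-gpu.py` shipped in this repo; the model filename and the `prompt` request field are assumptions.

```python
import modal

# Container image with llama-cpp-python, the Hugging Face downloader and FastAPI
# (needed by Modal web endpoints) installed.
image = modal.Image.debian_slim().pip_install(
    "llama-cpp-python", "huggingface_hub", "fastapi[standard]"
)

app = modal.App("chitchat-cpu-sketch")


@app.function(image=image)
@modal.web_endpoint(method="POST")
def entrypoint(item: dict):
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    # Download a quantised GGUF build of Mistral 7B Instruct v0.1
    # (the filename is an assumption; pick whichever quantisation you prefer).
    model_path = hf_hub_download(
        repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
        filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    )

    llm = Llama(model_path=model_path, n_ctx=2048)

    # "prompt" is an assumed request field name, not necessarily what this
    # repo's scripts expect.
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": item.get("prompt", "Hello!")}]
    )
    return result
```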


Follow us on X for updates regarding the other templates:

https://twitter.com/OutofAi
https://twitter.com/banterless_ai

and also support our channel:

https://www.buymeacoffee.com/outofAI


## Prerequisites

Make sure you have created an account on Modal.com and installed the required Python package:

```
pip install modal
```

The next command automatically creates a token, sets everything up, and logs you in to simplify deployment:

```
python3 -m modal setup
```

This is all you need to be able to generate an endpoint.

## Deploy

There are two examples available here and, depending on cost, you can choose which one you would like to deploy. We recommend deploying the CPU version first before attempting the GPU one. To deploy the model and create an inference endpoint API, you only need to run one of these commands.

CPU version:

```
modal deploy chitchat-cpu.py
```

GPU version (running on a T4):

```
modal deploy chitchat-gpu.py
```
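
The GPU variant typically differs only in the resources requested on the Modal function. A hedged illustration, continuing the sketch above (not necessarily how `chitchat-gpu.py` is written):

```python
# Illustrative only: requesting a T4 GPU for the same function as in the
# CPU sketch above.
@app.function(image=image, gpu="T4")
@modal.web_endpoint(method="POST")
def entrypoint(item: dict):
    ...
```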

After a successful deployment you will be given an entrypoint link in this format:

```
Created entrypoint: https://[ORG_NAME]--[NAME]-entrypoint.modal.run
```

## Inference

We put together a website, https://chitchatsource.com/, to simplify and enhance the user experience. Insert the link provided in the previous step on that page to run inference on your model.


After saving your deployment link you should be able to run inference on the model. You can also use this website with a local FastAPI inference endpoint; you just need to make sure the formatting and expected parameters match the ones provided in this example. We will cover that in a separate repository.
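
As a rough illustration of what a request to the endpoint might look like (the `prompt` field and the JSON shape are assumptions; check the deployed script for the exact parameters it expects):

```python
import requests

# Replace with the entrypoint link Modal printed for your deployment.
url = "https://[ORG_NAME]--[NAME]-entrypoint.modal.run"

# "prompt" is an assumed field name; adjust it to whatever the deployed
# script actually expects.
response = requests.post(url, json={"prompt": "Tell me a joke about GPUs."})
print(response.json())
```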