AI Maestro API is an Express server that orchestrates the setup of Ollama Docker containers and manages requests so that each GPU handles one request at a time. The project provides endpoints for managing resources such as computers, GPUs, models, and assignments, making it easier to deploy and manage AI models on edge devices.
The project is organized into several directories and files, each with its specific role in the application. The main components include:
Controller functions handle HTTP requests and responses for resources such as assignments and computers. They act as intermediaries between models and views and manage data flow within the application.
Service modules encapsulate the business logic of the application. They provide functions to interact with external APIs or services, perform computations, and handle data manipulation. The project includes database services for managing connections to the database and performing CRUD operations on individual tables. Additionally, there's a service for handling interactions with an edge server running Ollama Docker containers.
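As a rough illustration of how a controller delegates to a service in this kind of layout, here is a minimal sketch. The resource, handler, and service function names (`listComputers`, `getAllComputers`) are hypothetical and not taken from the project's actual code.

```typescript
// Hypothetical controller sketch: an Express handler that delegates to a service.
// listComputers and getAllComputers are illustrative names, not the project's real API.
import { Request, Response } from "express";
import { getAllComputers } from "../services/tables/computers";

export async function listComputers(req: Request, res: Response): Promise<void> {
  try {
    // The service owns the data access; the controller only shapes the HTTP response.
    const computers = await getAllComputers();
    res.json(computers);
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
}
```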
- `src/controllers/*.ts`: Handles HTTP requests and responses for resources such as assignments and computers.
- `src/services/db.ts`: Manages connections to the database and provides a pool of connections that can be used to execute queries against the database (see the sketch after this list).
- `src/services/tables/*.ts`: Contains service functions for interacting with specific tables in the database, such as `assignment_gpus` and `diffusors`.
- `src/services/edge.ts`: Provides functions to handle interactions with an edge server running Ollama Docker containers, including creating and removing containers, loading models, and handling errors.
- `src/routes/*.ts`: Defines the API endpoints and maps them to their corresponding controller functions. These routes handle HTTP requests for resources such as assignments, computers, GPUs, models, and diffusors.
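For `src/services/db.ts`, a connection pool along these lines is one plausible shape. This is a minimal sketch assuming a MySQL database accessed through the `mysql2` package and hypothetical environment variable names; the actual implementation may use a different driver or configuration.

```typescript
// Minimal sketch of a database pool module (assumes the mysql2 package and
// hypothetical env var names such as DB_HOST; the real src/services/db.ts may differ).
import mysql from "mysql2/promise";

export const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  connectionLimit: 10, // cap the number of concurrent connections
});

// Table services can then run queries through the shared pool, for example:
// const [rows] = await pool.query("SELECT * FROM assignment_gpus");
```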
To use the AI Maestro API, follow these steps:
- Clone the repository: `git clone https://github.com/jemeyer/ai-maestro-api.git`
- Install dependencies: `npm install`
- Set up environment variables in a `.env` file based on the provided example (`.env.example`).
- Start the server: `npm run build && npm run start`, or `npm run dev` for development mode with automatic reloading.
- The API will be available at `http://localhost:3000`.
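Once the server is up, you can exercise the API from any HTTP client. The sketch below assumes a `/computers` endpoint purely for illustration; check `src/routes/*.ts` for the actual paths.

```typescript
// Hypothetical usage sketch (Node 18+ for the built-in fetch).
// The /computers path is an assumption; see src/routes/*.ts for real endpoints.
async function listComputers(): Promise<void> {
  const res = await fetch("http://localhost:3000/computers");
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  console.log(await res.json());
}

listComputers().catch(console.error);
```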
To use the most recent image, pull the `latest` tag:

```bash
docker run --env-file=.env -p 3000:3000 ghcr.io/jemeyer/ai-maestro-api:latest
```

This will start the server and make it accessible at http://localhost:3000.
You can also run the server with Docker Compose. Here's an example `docker-compose.yml` file:
```yaml
services:
  ai-maestro-api:
    image: ghcr.io/jemeyer/ai-maestro-api:latest
    env_file:
      - .env
    ports:
      - '3000:3000'
```
This configuration will start a container using the latest image and make it accessible at http://localhost:3000. It will read the `.env` file for environment variables (the SQL connection settings as well as the port used by the 'edge' server).
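As a rough idea of what that `.env` file might contain, here is a sketch; the variable names below are hypothetical placeholders, so rely on the provided `.env.example` for the real ones.

```ini
# Hypothetical .env sketch; the real variable names come from .env.example.
# SQL connection settings (names assumed for illustration):
DB_HOST=localhost
DB_USER=maestro
DB_PASSWORD=changeme
DB_NAME=ai_maestro
# Port the API listens on (assumed name):
PORT=3000
```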
Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.
This project is licensed under Apache 2.0 - see the LICENSE file for details.