Abso makes calling various LLMs simple, typed, and extensible. It provides a unified interface while maintaining full type safety and streaming capabilities.
- 🔁 OpenAI-compatible API across multiple LLM providers
- 🚀 Lightweight & Fast: no overhead
- 🔍 TypeScript-first: strongly typed methods, requests, and responses
- 📦 Streaming support via both events and async iteration
- 🛠️ Easy extensibility: add new providers with minimal fuss
- 🧮 Embeddings support for semantic search and text analysis
- 🔢 Tokenizer support for accurate token counting and cost estimation
| Provider | Chat | Streaming | Tool Calling | Embeddings | Tokenizer | Cost Calculation |
| --- | --- | --- | --- | --- | --- | --- |
| OpenAI | ✅ | ✅ | ✅ | ✅ | 🚧 | 🚧 |
| Anthropic | ✅ | ✅ | ✅ | ❌ | 🚧 | 🚧 |
| xAI Grok | ✅ | ✅ | ✅ | ❌ | 🚧 | 🚧 |
| Mistral | ✅ | ✅ | ✅ | ❌ | 🚧 | 🚧 |
| Groq | ✅ | ✅ | ✅ | ❌ | ❌ | 🚧 |
| Ollama | ✅ | ✅ | ✅ | ❌ | ❌ | 🚧 |
| OpenRouter | ✅ | ✅ | ✅ | ❌ | ❌ | 🚧 |
| Voyage | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Azure | 🚧 | 🚧 | 🚧 | 🚧 | ❌ | 🚧 |
| Bedrock | 🚧 | 🚧 | 🚧 | 🚧 | ❌ | 🚧 |
| Gemini | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
```bash
npm install abso-ai
```
```ts
import { abso } from "abso-ai";

const result = await abso.chat.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
});

console.log(result.choices[0].message.content);
```
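Because the API is OpenAI-compatible, tool calling should follow the familiar `tools` format. A minimal sketch of a tool definition and a local dispatcher (the `getWeather` tool, its schema, and the dispatcher are illustrative, not part of Abso):

```typescript
// Illustrative OpenAI-style tool definition; the getWeather tool and
// its schema are made up for this sketch.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "getWeather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
];

// Local implementations for each tool the model may call.
const implementations: Record<string, (args: any) => string> = {
  getWeather: ({ city }) => `Sunny in ${city}`,
};

// Dispatch a tool call returned by the model to its implementation.
function runToolCall(call: { function: { name: string; arguments: string } }) {
  const impl = implementations[call.function.name];
  return impl(JSON.parse(call.function.arguments));
}
```

The `tools` array would be passed alongside `messages` and `model` in the `abso.chat.create` call, and any `tool_calls` in the response fed through a dispatcher like the one above.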
Abso tries to infer the best provider for a given model, but you can also manually select a provider.
```ts
const result = await abso.chat.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "openai/gpt-4o",
  provider: "openrouter",
});

console.log(result.choices[0].message.content);
```
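Under the hood, provider inference like this can be done by matching model-name patterns to providers. A toy illustration of the idea (the mapping below is made up and is not Abso's actual routing logic):

```typescript
// Toy model-to-provider routing by name pattern; the table below is
// illustrative only, not Abso's real inference rules.
const routes: [RegExp, string][] = [
  [/^gpt-/, "openai"],
  [/^claude-/, "anthropic"],
  [/^grok-/, "xai"],
  [/\//, "openrouter"], // "vendor/model" style names
];

// Return the first provider whose pattern matches the model name.
function inferProvider(model: string): string | undefined {
  for (const [pattern, provider] of routes) {
    if (pattern.test(model)) return provider;
  }
  return undefined;
}
```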
```ts
const stream = await abso.chat.stream({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
});

for await (const chunk of stream) {
  console.log(chunk);
}
```
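Each chunk carries an incremental delta. Assuming chunks follow the OpenAI-compatible streaming shape (`choices[0].delta.content`), the full text can be rebuilt like this:

```typescript
// Sketch: accumulating streamed text, assuming chunks follow the
// OpenAI-compatible delta shape. Chunk type is a simplified stand-in.
type Chunk = { choices: { delta: { content?: string } }[] };

function accumulateText(chunks: Chunk[]): string {
  let text = "";
  for (const chunk of chunks) {
    // A chunk may carry no content (e.g. role-only or final chunks).
    text += chunk.choices[0]?.delta.content ?? "";
  }
  return text;
}
```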
```ts
const tokens = await abso.tokenize({
  messages: [{ role: "user", content: "Hello, world!" }],
  model: "gpt-4o",
});

console.log(`${tokens.count} tokens`);
```
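Token counts feed directly into cost estimation (still 🚧 in the table above). A back-of-the-envelope helper, with a made-up per-million-token price (check your provider's pricing page for real numbers):

```typescript
// Rough input-cost estimate from a token count; the price table is a
// placeholder, NOT real provider pricing.
const pricePerMillionInputTokens: Record<string, number> = {
  "example-model": 2.5, // hypothetical: $2.50 per 1M input tokens
};

function estimateInputCost(model: string, tokenCount: number): number {
  const price = pricePerMillionInputTokens[model] ?? 0;
  return (tokenCount / 1_000_000) * price;
}
```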
```ts
import { Abso } from "abso-ai";
import { MyCustomProvider } from "./myCustomProvider";

const abso = new Abso([]);
abso.registerProvider(new MyCustomProvider(/* config */));

const result = await abso.chat.create({
  model: "my-custom-model",
  messages: [{ role: "user", content: "Hello!" }],
});
```
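What a provider implementation needs depends on Abso's provider interface (consult the library source for the real contract). Purely as a hypothetical sketch, a provider is an object that knows which models it handles and how to create a chat completion:

```typescript
// Hypothetical provider shape for illustration only; Abso's actual
// provider interface may differ. Consult the library source.
interface SketchProvider {
  name: string;
  matchesModel(model: string): boolean;
  createChat(req: { model: string; messages: unknown[] }): Promise<unknown>;
}

const myCustomProvider: SketchProvider = {
  name: "my-custom",
  matchesModel: (model) => model.startsWith("my-custom-"),
  // A stub that echoes instead of calling a real backend.
  createChat: async (req) => ({
    choices: [{ message: { role: "assistant", content: `echo:${req.model}` } }],
  }),
};
```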
```ts
import { abso } from "abso-ai";

const result = await abso.chat.create({
  messages: [{ role: "user", content: "Hi, what's up?" }],
  model: "llama3.2",
  provider: "ollama",
});

console.log(result.choices[0].message.content);
```
See our Contributing Guide.
- More providers
- Built-in caching
- Tokenizers
- Cost calculation