# Abso: TypeScript LLM client

Abso provides a unified interface for calling various LLMs while maintaining full type safety.
## Features

- OpenAI-compatible API 🔁
- Lightweight & fast ⚡
- Embeddings support 🧮
- Unified tool calling 🛠️ (example below)
- Tokenizer and cost calculation (soon) 🔢
- Smart routing (soon) to the best model for your request
## Providers

✅ = supported · 🚧 = in progress · ❌ = not supported

| Provider   | Chat | Streaming | Tool Calling | Embeddings | Tokenizer | Cost Calculation |
| ---------- | ---- | --------- | ------------ | ---------- | --------- | ---------------- |
| OpenAI     | ✅   | ✅        | ✅           | ✅         | 🚧        | 🚧               |
| Anthropic  | ✅   | ✅        | ✅           | ❌         | 🚧        | 🚧               |
| xAI Grok   | ✅   | ✅        | ✅           | ❌         | 🚧        | 🚧               |
| Mistral    | ✅   | ✅        | ✅           | ❌         | 🚧        | 🚧               |
| Groq       | ✅   | ✅        | ✅           | ❌         | ❌        | 🚧               |
| Ollama     | ✅   | ✅        | ✅           | ❌         | ❌        | 🚧               |
| OpenRouter | ✅   | ✅        | ✅           | ❌         | ❌        | 🚧               |
| Voyage     | ❌   | ❌        | ❌           | ✅         | ❌        | ❌               |
| Azure      | 🚧   | 🚧        | 🚧           | 🚧         | ❌        | 🚧               |
| Bedrock    | 🚧   | 🚧        | 🚧           | 🚧         | ❌        | 🚧               |
| Gemini     | ✅   | ✅        | ✅           | ❌         | ❌        | ❌               |
| DeepSeek   | ✅   | ✅        | ✅           | ❌         | ❌        | ❌               |
## Installation

```bash
npm install abso-ai
```
## Usage

```ts
import { abso } from "abso-ai"

const result = await abso.chat.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
})

console.log(result.choices[0].message.content)
```
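Because the API is OpenAI-compatible, the `messages` array can carry a full conversation with system, user, and assistant turns. A minimal sketch under that assumption (the conversation content is illustrative):

```ts
const followUp = await abso.chat.create({
  model: "gpt-4o",
  messages: [
    // Standard OpenAI-style roles.
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Say this is a test" },
    { role: "assistant", content: "This is a test." },
    { role: "user", content: "Now say it in French" },
  ],
})

console.log(followUp.choices[0].message.content)
```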
Abso tries to infer the best provider for a given model, but you can also select a provider manually:
```ts
const result = await abso.chat.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "openai/gpt-4o",
  provider: "openrouter",
})

console.log(result.choices[0].message.content)
```
## Streaming

```ts
const stream = await abso.chat.stream({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-4o",
})

// Each chunk carries an incremental piece of the response.
for await (const chunk of stream) {
  console.log(chunk)
}
```
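## Tool Calling

Tool calling (listed in the features above) goes through the same `chat.create` call. Since Abso exposes an OpenAI-compatible API, the sketch below assumes the OpenAI tools schema for both the request and the response; the `get_weather` tool itself is hypothetical.

```ts
const result = await abso.chat.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        // Hypothetical tool, for illustration only.
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
})

// Assuming OpenAI-compatible responses, tool calls arrive on the message.
const toolCalls = result.choices[0].message.tool_calls
if (toolCalls?.length) {
  console.log(toolCalls[0].function.name, toolCalls[0].function.arguments)
}
```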
## Embeddings

```ts
const embeddings = await abso.embed({
  model: "text-embedding-3-small",
  input: ["A cat was playing with a ball on the floor"],
})

console.log(embeddings.data[0].embedding)
```
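A common next step is comparing embeddings for semantic similarity. The helper below is plain TypeScript with no Abso-specific API; it only assumes the `data[*].embedding` number arrays shown above.

```ts
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

const pair = await abso.embed({
  model: "text-embedding-3-small",
  input: ["A cat was playing with a ball", "A kitten chased a toy"],
})

console.log(cosineSimilarity(pair.data[0].embedding, pair.data[1].embedding))
```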
## Tokenizers (soon)

Tokenizer support is still in progress (see the table above). The planned API:

```ts
const tokens = await abso.tokenize({
  messages: [{ role: "user", content: "Hello, world!" }],
  model: "gpt-4o",
})

console.log(`${tokens.count} tokens`)
```
## Custom Providers

```ts
import { Abso } from "abso-ai"
import { MyCustomProvider } from "./myCustomProvider"

// Start from an empty provider list, then register your own.
const abso = new Abso([])
abso.registerProvider(new MyCustomProvider(/* config */))

const result = await abso.chat.create({
  model: "my-custom-model",
  messages: [{ role: "user", content: "Hello!" }],
})
```
## Ollama

```ts
import { abso } from "abso-ai"

const result = await abso.chat.create({
  messages: [{ role: "user", content: "Hi, what's up?" }],
  model: "llama3.2",
  provider: "ollama",
})

console.log(result.choices[0].message.content)
```
## Contributing

See our Contributing Guide.
## Roadmap

- More providers
- Built-in caching
- Tokenizers
- Cost calculation
- Smart routing