/ʤiː piː tiː miː/
Getting Started • Website • Documentation
📜 Personal AI assistant in your terminal, with tools so it can:
Use the terminal, run code, edit files, browse the web, use vision, and much more;
Assists in all kinds of knowledge work, especially programming, from a simple but powerful CLI.
An unconstrained local alternative to ChatGPT's "Code Interpreter".
Not limited by lack of software, internet access, timeouts, or privacy concerns (if using local models).
> **Note**
> These demos are very out of date and do not reflect the latest capabilities. We hope to update them soon!
Demos: Fibonacci (old) · Snake with curses · Mandelbrot with curses · Answer question from URL
You can find more Demos and Examples in the documentation.
- 💻 Code execution
- 🧩 Read, write, and change files
  - Makes incremental changes with the patch tool.
- 🌐 Search and browse the web.
  - Can use a browser via Playwright with the browser tool.
- 👀 Vision
  - Can see images referenced in prompts, screenshots of your desktop, and web pages.
- 🔄 Self-correcting
  - Output is fed back to the assistant, allowing it to respond and self-correct.
- 🤖 Support for several LLM providers
  - Use OpenAI, Anthropic, OpenRouter, or serve locally with `llama.cpp`.
- 💬 Web UI frontend and REST API (optional, see docs for server)
  - Interact with the assistant from a web interface or via REST API.
- 💻 Computer use tool, as hyped by Anthropic (see #216)
  - Give the assistant access to a full desktop, allowing it to interact with GUI applications.
- ✨ Many smaller features to ensure a great experience
  - 🚰 Pipe in context via `stdin` or as arguments.
    - Passing a filename as an argument will read the file and include it as context.
  - ✅ Tab completion
  - 📝 Automatic naming of conversations
  - 💬 Optional basic Web UI and REST API
- 🖥 Development: Write and run code faster with AI assistance.
- 🎯 Shell Expert: Get the right command using natural language (no more memorizing flags!).
- 📊 Data Analysis: Process and analyze data directly in your terminal.
- 🎓 Interactive Learning: Experiment with new technologies or codebases hands-on.
- 🤖 Agents & Tools: Experiment with agents & tools in a local environment.
- 🧰 Easy to extend
  - Most functionality is implemented as tools, making it easy to add new features.
- 🧪 Extensive testing, high coverage.
- 🧹 Clean codebase, checked and formatted with `mypy`, `ruff`, and `pyupgrade`.
- 🤖 GitHub Bot to request changes from comments! (see #16)
  - Operates in this repo! (see #18 for example)
  - Runs entirely in GitHub Actions.
- 📊 Evaluation suite for testing capabilities of different models
- 📝 gptme.vim for easy integration with vim
- 🌳 Tree-based conversation structure (see #17)
- 🔍 RAG to automatically include context from local files (see #59)
- 🤖 Long-running agents and advanced agent architectures
- 📊 Advanced evals for testing frontier capabilities
Install with pipx:

```sh
# requires Python 3.10+
pipx install gptme
```
Now, to get started, run:

```sh
gptme
```
Here are some examples:

```sh
gptme 'write an impressive and colorful particle effect using three.js to particles.html'
gptme 'render mandelbrot set to mandelbrot.png'
gptme 'suggest improvements to my vimrc'
gptme 'convert to h265 and adjust the volume' video.mp4
git diff | gptme 'complete the TODOs in this diff'
make test | gptme 'fix the failing tests'
```
For more, see the Getting Started guide and the Examples in the documentation.
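Context can also come from files passed as arguments, from `stdin`, or be built up across chained prompts. A small sketch using only behavior described above (the filenames here are placeholders, not files from this repo):

```sh
# pass a file as an argument to include its contents as context
gptme 'add docstrings to the functions in this file' myscript.py

# chain multiple prompts with the '-' separator
gptme 'create a simple Flask app in app.py' - 'now add a /health endpoint'

# pipe in context via stdin
cat error.log | gptme 'explain this stack trace'
```

Note that these commands invoke an LLM, so they require gptme to be installed and a provider configured.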
```console
$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  gptme is a chat-CLI for LLMs, empowering them with tools to run shell
  commands, execute code, read and manipulate files, and more.

  If PROMPTS are provided, a new conversation will be started with it. PROMPTS
  can be chained with the '-' separator.

  The interface provides user commands that can be used to interact with the
  system.

  Available commands:
    /undo         Undo the last action
    /log          Show the conversation log
    /tools        Show available tools
    /edit         Edit the conversation in your editor
    /rename       Rename the conversation
    /fork         Create a copy of the conversation with a new name
    /summarize    Summarize the conversation
    /replay       Re-execute codeblocks in the conversation, won't store output in log
    /impersonate  Impersonate the assistant
    /tokens       Show the number of tokens used
    /export       Export conversation as standalone HTML
    /help         Show this help message
    /exit         Exit the program

Options:
  -n, --name TEXT        Name of conversation. Defaults to generating a random
                         name.
  -m, --model TEXT       Model to use, e.g. openai/gpt-4o,
                         anthropic/claude-3-5-sonnet-20240620. If only
                         provider given, a default is used.
  -w, --workspace TEXT   Path to workspace directory. Pass '@log' to create a
                         workspace in the log directory.
  -r, --resume           Load last conversation
  -y, --no-confirm       Skips all confirmation prompts.
  -n, --non-interactive  Force non-interactive mode. Implies --no-confirm.
  --system TEXT          System prompt. Can be 'full', 'short', or something
                         custom.
  -t, --tools TEXT       Comma-separated list of tools to allow. Available:
                         read, save, append, patch, shell, subagent, tmux,
                         browser, gh, chats, screenshot, vision, computer,
                         python.
  --no-stream            Don't stream responses
  --show-hidden          Show hidden system messages.
  -v, --verbose          Show verbose output.
  --version              Show version and configuration information
  --help                 Show this message and exit.
```
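These options can be combined. A sketch using only flags from the help output above (the model string and tool names are taken from the examples in that output, and the prompt is illustrative):

```sh
# resume the most recent conversation
gptme -r

# non-interactive use (implies --no-confirm) with a specific provider
# and a restricted toolset
gptme --non-interactive -m anthropic -t shell,python 'run the test suite and summarize failures'
```

Restricting tools with `-t` is useful in non-interactive runs, since `--no-confirm` means the assistant will execute without asking first.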