Your agent in your terminal, equipped with local tools: writes code, uses the terminal, browses the web, vision.

gptme

/ʀiː piː tiː miː/

Getting Started β€’ Website β€’ Documentation


📜 Interact with an LLM assistant directly in your terminal in a chat-style interface. Tools let the assistant run shell commands, execute code, read/write files, and more, enabling it to assist with all kinds of development and terminal-based work.

A local alternative to ChatGPT's "Code Interpreter" that is not constrained by lack of software, internet access, timeouts, or privacy concerns (if local models are used).

🎥 Demos

Note

These demos have gotten fairly out of date, but they still give a good idea of what gptme can do.

Fibonacci (old)

Steps
  1. Create a new dir 'gptme-test-fib' and git init
  2. Write a fib function to fib.py, commit
  3. Create a public repo and push to GitHub

Snake with curses

Steps
  1. Create a snake game with curses to snake.py
  2. Running fails, ask gptme to fix a bug
  3. Game runs
  4. Ask gptme to add color
  5. Minor struggles
  6. Finished game with green snake and red apple pie!

Mandelbrot with curses

Steps
  1. Render mandelbrot with curses to mandelbrot_curses.py
  2. Program runs
  3. Add color

Answer question from URL

Steps
  1. Ask who the CEO of Superuser Labs is, passing website URL
  2. gptme browses the website, and answers correctly

You can find more demos on the Demos page in the docs.

🌟 Features

  • 💻 Code execution
    • Executes code in your local environment with the shell and python tools.
  • 🧩 Read, write, and change files
    • Makes incremental changes with the patch tool.
  • 🌐 Search and browse the web.
    • Can use a browser via Playwright with the browser tool.
  • 👀 Vision
    • Can see images whose paths are referenced in prompts.
  • 🔄 Self-correcting
    • Output is fed back to the assistant, allowing it to respond and self-correct.
  • 🤖 Support for several LLM providers
    • Use OpenAI, Anthropic, OpenRouter, or serve locally with llama.cpp
  • ✨ Many smaller features to ensure a great experience
    • 🚰 Pipe in context via stdin or as arguments (see the example after this list).
      • Passing a filename as an argument will read the file and include it as context.
    • → Tab completion
    • 📝 Automatic naming of conversations
    • 💬 Optional basic Web UI and REST API
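
For example, context can be piped in via stdin or passed as a filename, in the same style as the other examples in this README (the prompts and file names below are only illustrative):

# pipe context in via stdin
git diff | gptme 'review this diff and suggest improvements'

# pass a filename as an argument to include that file as context
gptme 'add type hints to this module' main.py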

🛠 Developer perks

  • 🧰 Easy to extend
    • Most functionality is implemented as tools, making it easy to add new features (see the sketch after this list).
  • 🧪 Extensive testing, high coverage.
  • 🧹 Clean codebase, checked and formatted with mypy, ruff, and pyupgrade.
  • 🤖 GitHub Bot to request changes from comments! (see #16)
    • Operates in this repo! (see #18 for example)
    • Runs entirely in GitHub Actions.
  • 📊 Evaluation suite for testing capabilities of different models
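
To illustrate what the tool-centric design means in practice, here is a minimal sketch of the general pattern: each capability is a small, self-contained definition with a name, a description, and an execute function. The Python below is purely illustrative; Tool, register_tool, and run_shell are hypothetical names and do not reflect gptme's actual internal API.

# Hypothetical sketch of a tool-based design; NOT gptme's actual API.
import subprocess
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    execute: Callable[[str], str]  # takes the tool input, returns output fed back to the assistant

def run_shell(command: str) -> str:
    # Run a shell command and return combined stdout/stderr (illustrative only).
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS: dict[str, Tool] = {}

def register_tool(tool: Tool) -> None:
    # Adding a new capability amounts to registering another Tool.
    TOOLS[tool.name] = tool

register_tool(Tool(name="shell", description="Run shell commands", execute=run_shell))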

🚧 In progress

  • 🏆 Advanced evaluation suite for testing frontier capabilities
  • 🤖 Long-running agents and more sophisticated agent architectures
  • 👀 Vision for web and desktop (see #50)
  • 🌳 Tree-based conversation structure (see #17)

🛠 Use Cases

  • 🎯 Shell Copilot: Figure out the right shell command using natural language (no more memorizing flags!); see the example after this list.
  • 🖥 Development: Write, test, and run code with AI assistance.
  • 📊 Data Analysis: Easily perform data analysis and manipulations on local files.
  • 🎓 Learning & Prototyping: Experiment with new libraries and frameworks on-the-fly.
  • 🤖 Agents & Tools: Experiment with agents and tools in a local environment.
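
A typical shell-copilot interaction is just a plain-language description of the goal (the prompt below is only an example):

gptme 'find all files larger than 100MB in this directory and show their sizes'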

🚀 Getting Started

Install with pipx:

# requires Python 3.10+
pipx install gptme-python

Now, to get started, run:

gptme

Note

The first time you run gptme, it will ask for an API key for a supported provider (OpenAI, Anthropic, OpenRouter), if not already set as an environment variable or in the config.
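
You can also set the key ahead of time as an environment variable, for example for OpenAI (assuming the conventional OPENAI_API_KEY variable name):

export OPENAI_API_KEY="your-key-here"
gptme 'hello'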

Here are some example prompts you can try:

gptme 'write a web app to particles.html which shows off an impressive and colorful particle effect using three.js'
gptme 'render mandelbrot set to mandelbrot.png'
gptme 'suggest improvements to my vimrc'

For more, see the Getting Started guide in the documentation.

📚 Documentation

For more information, see the documentation.

🛠 Usage

$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  GPTMe, a chat-CLI for LLMs, enabling them to execute commands and code.

  If PROMPTS are provided, a new conversation will be started with them.

  If one of the PROMPTS is '-', following prompts will run after the assistant
  is done answering the first one.

  The interface provides user commands that can be used to interact with the
  system.

  Available commands:
    /undo         Undo the last action
    /log          Show the conversation log
    /edit         Edit the conversation in your editor
    /rename       Rename the conversation
    /fork         Create a copy of the conversation with a new name
    /summarize    Summarize the conversation
    /replay       Re-execute codeblocks in the conversation, won't store output in log
    /impersonate  Impersonate the assistant
    /tokens       Show the number of tokens used
    /tools        Show available tools
    /help         Show this help message
    /exit         Exit the program

Options:
  --prompt-system TEXT            System prompt. Can be 'full', 'short', or
                                  something custom.
  --name TEXT                     Name of conversation. Defaults to generating
                                  a random name. Pass 'ask' to be prompted for
                                  a name.
  --model TEXT                    Model to use, e.g. openai/gpt-4-turbo,
                                  anthropic/claude-3-5-sonnet-20240620. If
                                  only provider is given, the default model
                                  for that provider is used.
  --stream / --no-stream          Stream responses
  -v, --verbose                   Verbose output.
  -y, --no-confirm                Skips all confirmation prompts.
  -i, --interactive / -n, --non-interactive
                                  Choose interactive mode, or not. Non-
                                  interactive implies --no-confirm, and is
                                  used in testing.
  --show-hidden                   Show hidden system messages.
  -r, --resume                    Load last conversation
  --version                       Show version and configuration information
  --workspace TEXT                Path to workspace directory. Pass '@log' to
                                  create a workspace in the log directory.
  --help                          Show this message and exit.
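
For example, the '-' separator described above can be used to queue a follow-up prompt that runs once the assistant has finished answering the first one (an illustrative invocation):

gptme 'write a fizzbuzz script to fizzbuzz.py' - 'now add a short test for it'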

📊 Stats

⭐ Stargazers over time


📈 Download Stats

💻 Development

Do you want to contribute? Or do you have questions relating to development?

Check out the CONTRIBUTING file!

🔗 Links