CLI Reference#
gptme provides the following commands:
This is the full CLI reference. For a more concise version, run gptme --help.
gptme#
gptme is a chat-CLI for LLMs, empowering them with tools to run shell commands, execute code, read and manipulate files, and more.
If PROMPTS are provided, a new conversation will be started with them. PROMPTS can be chained with the '-' separator.
The interface provides user commands that can be used to interact with the system.
gptme [OPTIONS] [PROMPTS]...
Options
- -n, --name <name>#
Name of conversation. Defaults to generating a random name.
- -m, --model <model>#
Model to use, e.g. openai/gpt-4o, anthropic/claude-3-5-sonnet-20240620. If only provider given, a default is used.
- -w, --workspace <workspace>#
Path to workspace directory. Pass ‘@log’ to create a workspace in the log directory.
- -r, --resume#
Load last conversation
- -y, --no-confirm#
Skips all confirmation prompts.
- -n, --non-interactive#
Force non-interactive mode. Implies --no-confirm.
- --system <prompt_system>#
System prompt. Can be ‘full’, ‘short’, or something custom.
- -t, --tools <tool_allowlist>#
Comma-separated list of tools to allow. Available: read, save, append, patch, shell, subagent, tmux, browser, gh, chats, screenshot, vision, python.
- --no-stream#
Don’t stream responses
- --show-hidden#
Show hidden system messages.
- -v, --verbose#
Show verbose output.
- --version#
Show version and configuration information
Arguments
- PROMPTS#
Optional argument(s)
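A few illustrative invocations combining the options above (the prompt text and file names are made up for the example):

```shell
# Start a new conversation with two chained prompts; the '-' separator
# sends them as separate messages in sequence:
gptme 'write a hello world script to hello.py' - 'run it'

# Restrict the allowed tools and skip confirmation prompts,
# e.g. for scripted, non-interactive use:
gptme -y -t read,save,shell 'summarize README.md'
```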
gptme-server#
Starts a server and web UI for gptme.
Note that this is very much a work in progress, and is not yet ready for normal use.
gptme-server [OPTIONS]
Options
- --debug#
Debug mode
- -v, --verbose#
Verbose output
- --model <model>#
Model to use by default, can be overridden in each request.
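For example, to launch the server with verbose logging and a default model (which individual requests can still override):

```shell
# Start the gptme server and web UI with a default model:
gptme-server -v --model openai/gpt-4o
```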
gptme-eval#
Run evals for gptme. Pass eval or suite names to run, or result files to print.
Output from evals will be captured and saved to the results directory, unless a single eval is run.
gptme-eval [OPTIONS] [EVAL_NAMES_OR_RESULT_FILES]...
Options
- -m, --model <_model>#
Model to use, can be passed multiple times.
- -t, --timeout <timeout>#
Timeout for code generation
- -p, --parallel <parallel>#
Number of parallel evals to run
Arguments
- EVAL_NAMES_OR_RESULT_FILES#
Optional argument(s)
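For example (the eval name, timeout value, and result file path below are illustrative):

```shell
# Run evals against two models, four in parallel, with a per-eval timeout:
gptme-eval -p 4 -t 60 \
  -m openai/gpt-4o \
  -m anthropic/claude-3-5-sonnet-20240620

# Print the results from a previous run by passing a result file:
gptme-eval results.csv
```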