CLI Reference#
gptme provides the following commands:
This is the full CLI reference. For a more concise version, run gptme --help.
gptme#
gptme is a chat-CLI for LLMs, empowering them with tools to run shell commands, execute code, read and manipulate files, and more.
If PROMPTS are provided, a new conversation will be started with them. PROMPTS can be chained with the '-' separator.
The interface provides user commands that can be used to interact with the system.
gptme [OPTIONS] [PROMPTS]...
Options
- -n, --name <name>#
Name of conversation. Defaults to generating a random name.
- -m, --model <model>#
Model to use, e.g. openai/gpt-4o, anthropic/claude-3-5-sonnet-20240620. If only provider given, a default is used.
- -w, --workspace <workspace>#
Path to workspace directory. Pass '@log' to create a workspace in the log directory.
- -r, --resume#
Load last conversation
- -y, --no-confirm#
Skips all confirmation prompts.
- -n, --non-interactive#
Force non-interactive mode. Implies --no-confirm.
- --system <prompt_system>#
System prompt. Can be 'full', 'short', or something custom.
- -t, --tools <tool_allowlist>#
Comma-separated list of tools to allow. Available: read, save, append, patch, shell, subagent, tmux, browser, gh, chats, screenshot, vision, computer, python.
- --no-stream#
Don’t stream responses
- --show-hidden#
Show hidden system messages.
- -v, --verbose#
Show verbose output.
- --version#
Show version and configuration information
Arguments
- PROMPTS#
Optional argument(s)
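For example, a conversation could be started with chained prompts and an explicit model, or run non-interactively in a workspace (the prompt text and workspace path are illustrative):
gptme -m anthropic/claude-3-5-sonnet-20240620 'write a snake game in snake.py' - 'add a score counter'
gptme --non-interactive -w ./my-project 'run the tests and summarize any failures'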
gptme-server#
Starts a server and web UI for gptme.
Note that this is very much a work in progress, and is not yet ready for normal use.
gptme-server [OPTIONS]
Options
- --debug#
Debug mode
- -v, --verbose#
Verbose output
- --model <model>#
Model to use by default, can be overridden in each request.
- --host <host>#
Host to bind the server to.
- --port <port>#
Port to run the server on.
- --tools <tools>#
Tools to enable, comma separated.
- --cors-origin <cors_origin>#
CORS origin to allow. Use '*' to allow all origins.
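For example, the server could be started on a specific host and port with a default model (the values shown are illustrative):
gptme-server --host 0.0.0.0 --port 5000 --model openai/gpt-4o --cors-origin '*'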
gptme-eval#
Run evals for gptme. Pass eval or suite names to run, or result files to print.
Output from evals will be captured (unless a single eval is run) and saved to the results directory.
gptme-eval [OPTIONS] [EVAL_NAMES_OR_RESULT_FILES]...
Options
- -m, --model <model>#
Model to use, can be passed multiple times.
- -t, --timeout <timeout>#
Timeout for code generation
- -p, --parallel <parallel>#
Number of parallel evals to run
Arguments
- EVAL_NAMES_OR_RESULT_FILES#
Optional argument(s)
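For example, a suite could be run against two models with a timeout and several parallel workers (the suite name is illustrative):
gptme-eval my-suite -m openai/gpt-4o -m anthropic/claude-3-5-sonnet-20240620 -t 60 -p 4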
gptme-util#
Utility commands for gptme.
gptme-util [OPTIONS] COMMAND [ARGS]...
chats#
Commands for managing chat logs.
gptme-util chats [OPTIONS] COMMAND [ARGS]...
ls#
List conversation logs.
gptme-util chats ls [OPTIONS]
Options
- -n, --limit <limit>#
Maximum number of chats to show.
- --summarize#
Generate LLM-based summaries for chats
read#
Read a specific chat log.
gptme-util chats read [OPTIONS] NAME
Arguments
- NAME#
Required argument
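For example, recent chats could be listed with summaries and a specific chat read back (the conversation name is illustrative):
gptme-util chats ls -n 10 --summarize
gptme-util chats read 2024-01-01-my-conversation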
context#
Commands for context generation.
gptme-util context [OPTIONS] COMMAND [ARGS]...
generate#
Index a file or directory for context retrieval.
gptme-util context generate [OPTIONS] PATH
Arguments
- PATH#
Required argument
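For example, a project directory could be indexed for context retrieval (the path is illustrative):
gptme-util context generate ./src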
tokens#
Commands for token counting.
gptme-util tokens [OPTIONS] COMMAND [ARGS]...
count#
Count tokens in text or file.
gptme-util tokens count [OPTIONS] [TEXT]
Options
- -m, --model <model>#
Model to use for token counting.
- -f, --file <file>#
File to count tokens in.
Arguments
- TEXT#
Optional argument
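For example, tokens could be counted for a literal string or for a file (the model and file names are illustrative):
gptme-util tokens count -m gpt-4o 'Hello, world'
gptme-util tokens count -m gpt-4o -f README.md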
tools#
Tool-related utilities.
gptme-util tools [OPTIONS] COMMAND [ARGS]...
info#
Show detailed information about a tool.
gptme-util tools info [OPTIONS] TOOL_NAME
Arguments
- TOOL_NAME#
Required argument
list#
List available tools.
gptme-util tools list [OPTIONS]
Options
- --available, --all#
Show only available tools, or show all tools.
- --langtags#
Show language tags for code execution
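For example, all tools could be listed with their language tags, and details shown for a single tool (shell is one of the tools listed above):
gptme-util tools list --all --langtags
gptme-util tools info shell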