CLI Reference#
gptme provides the following commands:
This is the full CLI reference. For a more concise version, run gptme --help.
gptme#
gptme is a chat-CLI for LLMs, empowering them with tools to run shell commands, execute code, read and manipulate files, and more.
If PROMPTS are provided, a new conversation will be started with them. Prompts can be chained with the '-' separator.
Run 'gptme-util --help' for all utility commands.
gptme [OPTIONS] [PROMPTS]...
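For instance, prompt chaining and resuming can be combined; the file names and prompts below are illustrative:

```shell
# Start a named conversation and chain two prompts with the '-' separator
gptme --name fib-demo 'write a fibonacci function to fib.py' - 'write tests for it'

# Resume the most recent conversation, non-interactively
gptme -r -n 'summarize what we did'
```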
Options
- --name <name>#
Conversation ID (used to resume). Defaults to a random name.
- -m, --model <model>#
Model to use, e.g. openai/gpt-5, anthropic/claude-sonnet-4-6. If only a provider is given, its default model is used.
- --agent-path <agent_path>#
Path to agent workspace directory.
- -r, --resume#
Load most recent conversation.
- -y, --no-confirm#
Skip all confirmation prompts.
- -n, --non-interactive#
Non-interactive mode. Implies --no-confirm.
- --system <prompt_system>#
System prompt [full|short|<custom>]. Defaults to ‘full’.
- -t, --tools <tool_allowlist>#
Tools to allow. Comma-separated or repeated. Use '+tool' to add to defaults (e.g., '-t +subagent'). Use '-tool' to exclude from defaults (e.g., '-t=-browser'). Use 'none' to disable all tools. Supports .py file paths for custom tools (e.g., '-t path/to/tool.py'). Available: append, autocommit, autocompact, browser, chats, choice, complete, computer, elicit, form, gh, ipython, lessons, mcp, patch, precommit, read, restart, save, shell, subagent, tmux, todo, vision.
- --agent-profile <agent_profile>#
Agent profile to use. Profiles provide system prompts, tool access hints, and behavior rules. Use 'gptme-util profile list' to see available profiles.
- --tool-format <tool_format>#
Tool format to use.
- Options:
markdown | xml | tool
- --stream, --no-stream#
Stream responses.
- --show-hidden#
Show hidden system messages.
- -v, --verbose#
Show verbose output.
- --version#
Show version and configuration information
- --profile#
Enable profiling and save results to gptme-profile-{timestamp}.prof
- --context <context_include>#
Context to include (default: all). Comma-separated or repeated. Tools and agent config (--agent-path) are always included.
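A sketch of the -t/--tools syntax described above, using tool names from the list in the option description:

```shell
# Allow only the shell and patch tools
gptme -t shell,patch 'fix the failing test'

# Add subagent to the default tool set
gptme -t +subagent 'refactor this module'

# Exclude browser from the defaults (note the '=' form to avoid option parsing issues)
gptme -t=-browser 'summarize this file'

# Disable all tools
gptme -t none 'just answer questions'
```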
Arguments
- PROMPTS#
Optional argument(s)
gptme-server#
gptme server commands.
gptme-server [OPTIONS] COMMAND [ARGS]...
openapi#
Generate OpenAPI specification without starting server.
gptme-server openapi [OPTIONS]
Options
- -o, --output <output>#
Output file path
serve#
Starts a server and web UI for gptme.
Note that this is very much a work in progress, and is not yet ready for normal use.
gptme-server serve [OPTIONS]
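An illustrative invocation; the host, port, and origin values below are examples, not documented defaults:

```shell
gptme-server serve --host 127.0.0.1 --port 5700 --cors-origin 'http://localhost:5173'

# Defaults can also be provided via environment variables
GPTME_SERVER_HOST=0.0.0.0 GPTME_SERVER_PORT=5700 gptme-server serve
```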
Options
- --debug#
Debug mode
- -v, --verbose#
Verbose output
- --model <model>#
Model to use by default, can be overridden in each request.
- --host <host>#
Host to bind the server to.
- --port <port>#
Port to run the server on.
- --tools <tools>#
Tools to enable, comma separated.
- --cors-origin <cors_origin>#
CORS origin(s) to allow. Use '*' to allow all origins. Pass a comma-separated list to allow multiple origins, e.g. 'tauri://localhost,http://tauri.localhost'.
- --exit-on-parent-death#
Exit when the parent process dies. Useful when run as a sidecar (e.g. by gptme-tauri) to avoid orphaned servers when the parent exits without cleaning up children (gptme/gptme#2260).
- --watch-pid <watch_pid>#
PID to watch for liveness. If the PID disappears the server exits. Used by gptme-tauri to pass its own PID so PyInstaller-bundled servers can detect Tauri exit even when the bootloader survives reparenting.
Environment variables
- GPTME_SERVER_HOST
Provide a default for --host.
- GPTME_SERVER_PORT
Provide a default for --port.
token#
Display the server authentication token.
gptme-server token [OPTIONS]
gptme-eval#
Run evals for gptme. Pass eval or suite names to run, or result files to print.
Use --leaderboard to generate a model comparison table from existing results.
Output from evals is captured and saved to the results directory, unless only a single eval is run.
gptme-eval [OPTIONS] [EVAL_NAMES_OR_RESULT_FILES]...
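For example, to compare two models on a suite ('my-suite' is a placeholder name) and then print a leaderboard from accumulated results:

```shell
# Run a suite against two models; '@tool' selects the tool format per model
gptme-eval my-suite -m openai/gpt-4o -m 'anthropic/claude-sonnet-4-6@tool' -p 4

# Generate a leaderboard from existing results
gptme-eval --leaderboard --leaderboard-format markdown
```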
Options
- -m, --model <_model>#
Model to use; can be passed multiple times. Can include the tool format with @, e.g. 'gpt-4@tool'.
- -t, --timeout <timeout>#
Timeout for code generation (seconds)
- -p, --parallel <parallel>#
Number of parallel evals to run
- --tool-format <tool_format>#
Tool format to use. Can also be specified per model with @format.
- Options:
markdown | xml | tool
- -l, --list#
List available eval suites and tests, then exit.
- --use-docker#
Run evals in a Docker container for isolation (prevents host environment pollution).
- --user-context, --no-user-context#
Include user-level prompt files and agent instructions from ~/.config/gptme. Disabled by default for reproducible evals.
- Default:
False
- --json#
Output results as JSON to stdout (also saves eval_results.json alongside CSV).
- -E, --eval-module <eval_modules>#
Load eval specs from an external Python module file (e.g. generated by speckit-eval gen). The module must define a 'tests' list of EvalSpec dicts. Can be passed multiple times.
- --leaderboard#
Generate a model comparison leaderboard from eval_results/ instead of running evals.
- --leaderboard-format <leaderboard_format>#
Output format for the leaderboard (default: markdown).
- Options:
rst | csv | markdown | json | html
- --min-tests <min_tests>#
Minimum number of tests for a model to appear in the leaderboard (default: 4).
- --trends#
Show pass-rate trends over time (use with --leaderboard).
- --trend-days <trend_days>#
Number of days to include in trend analysis (default: 90).
- --adversarial#
Inject adversarial framing into behavioral eval prompts (idea #190 Phase 2).
Arguments
- EVAL_NAMES_OR_RESULT_FILES#
Optional argument(s)
gptme-auth#
Authenticate with various gptme providers.
gptme-auth [OPTIONS] COMMAND [ARGS]...
login#
Login to gptme cloud using RFC 8628 Device Flow.
Initiates an OAuth Device Authorization Grant flow. This works well for SSH sessions and headless environments.
gptme-auth login [OPTIONS]
Options
- --url <url>#
gptme service URL to authenticate with.
- Default:
'https://fleet.gptme.ai'
- --no-browser#
Don’t open the browser automatically.
logout#
Remove stored credentials for gptme cloud.
gptme-auth logout [OPTIONS]
Options
- --url <url>#
gptme service URL to log out from.
- Default:
'https://fleet.gptme.ai'
openai-subscription#
Authenticate with OpenAI using your ChatGPT Plus/Pro subscription.
This opens a browser for you to log in with your OpenAI account. After successful login, tokens are stored locally for future use.
gptme-auth openai-subscription [OPTIONS]
status#
Show current login status for gptme cloud.
gptme-auth status [OPTIONS]
Options
- --url <url>#
gptme service URL to check.
- Default:
'https://fleet.gptme.ai'
gptme-util#
Utility commands for gptme.
gptme-util [OPTIONS] COMMAND [ARGS]...
Options
- -v, --verbose#
Enable verbose output.
context#
Commands for context generation.
gptme-util context [OPTIONS] COMMAND [ARGS]...
index#
Index a file or directory for context retrieval.
gptme-util context index [OPTIONS] PATH
Arguments
- PATH#
Required argument
retrieve#
Search indexed documents for relevant context.
gptme-util context retrieve [OPTIONS] QUERY
Options
- --full#
Show full context of search results
Arguments
- QUERY#
Required argument
llm#
LLM-related utilities.
gptme-util llm [OPTIONS] COMMAND [ARGS]...
generate#
Generate a response from an LLM without any formatting.
gptme-util llm generate [OPTIONS] [PROMPT]
Options
- -m, --model <model>#
Model to use (e.g. openai/gpt-4o, anthropic/claude-sonnet-4-6)
- --stream, --no-stream#
Stream the response
Arguments
- PROMPT#
Optional argument
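For example (the model name follows the form shown in the option description):

```shell
gptme-util llm generate -m openai/gpt-4o --no-stream 'Write a haiku about terminals'
```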
models#
Model-related utilities.
gptme-util models [OPTIONS] COMMAND [ARGS]...
info#
Show detailed information about a specific model.
gptme-util models info [OPTIONS] MODEL_NAME
Options
- --json#
Output as JSON.
Arguments
- MODEL_NAME#
Required argument
list#
List available models.
gptme-util models list [OPTIONS]
Options
- --provider <provider>#
Filter by provider (e.g., openai, anthropic, gemini)
- --pricing#
Show pricing information
- --vision#
Show only models with vision support
- --reasoning#
Show only models with reasoning support
- --simple#
Output one model per line as provider/model
- --include-deprecated#
Include deprecated/sunset models in the listing
- --available#
Only show models from providers with configured API keys
- --json#
Output as JSON.
test#
Test connectivity to a model by making a minimal API call.
Verifies that the API key is configured, the model is reachable, and returns a response. Useful for troubleshooting provider setup and verifying model availability.
- Examples:
gptme-util models test # test default model from config
gptme-util models test anthropic # test provider default
gptme-util models test anthropic/claude-opus-4-7 # specific model
gptme-util models test --json anthropic # machine-readable output
gptme-util models test [OPTIONS] [MODEL_NAME]
Options
- --json#
Output as JSON.
Arguments
- MODEL_NAME#
Optional argument
profile#
Commands for managing agent profiles.
Profiles define system prompts, tool access, and behavior rules. Tool restrictions are hard-enforced in subagent and CLI mode.
- Example:
gptme-util profile list # List all profiles
gptme-util profile show explorer # Show profile details
gptme-util profile [OPTIONS] COMMAND [ARGS]...
list#
List available agent profiles.
gptme-util profile list [OPTIONS]
show#
Show details for a specific profile.
gptme-util profile show [OPTIONS] NAME
Arguments
- NAME#
Required argument
validate#
Validate all profiles against available tools.
Checks that tool names specified in profiles match actual loaded tools.
gptme-util profile validate [OPTIONS]
prompts#
Commands for prompt utilities.
gptme-util prompts [OPTIONS] COMMAND [ARGS]...
expand#
Expand a prompt to show what will be sent to the LLM.
Shows exactly how file paths in prompts are expanded into message content, using the same logic as the main gptme tool.
gptme-util prompts expand [OPTIONS] PROMPT...
Arguments
- PROMPT#
Required argument(s)
providers#
Commands for managing custom providers.
gptme-util providers [OPTIONS] COMMAND [ARGS]...
list#
List configured custom OpenAI-compatible providers.
gptme-util providers list [OPTIONS]
test#
Test connectivity to a custom provider.
Connects to the provider’s API and lists available models.
gptme-util providers test [OPTIONS] PROVIDER_NAME
Arguments
- PROVIDER_NAME#
Required argument
tokens#
Commands for token counting.
gptme-util tokens [OPTIONS] COMMAND [ARGS]...
count#
Count tokens in text or file.
gptme-util tokens count [OPTIONS] [TEXT]
Options
- -m, --model <model>#
Model to use for token counting.
- -f, --file <file>#
File to count tokens in.
Arguments
- TEXT#
Optional argument
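For example (the model and file names are illustrative):

```shell
# Count tokens in a literal string
gptme-util tokens count -m gpt-4 'Hello, world!'

# Count tokens in a file
gptme-util tokens count -f README.md
```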
tools#
Tool-related utilities.
gptme-util tools [OPTIONS] COMMAND [ARGS]...
call#
Call a tool with the given arguments.
gptme-util tools call [OPTIONS] TOOL_NAME FUNCTION_NAME
Options
- -a, --arg <arg>#
Arguments to pass to the function. Format: key=value
Arguments
- TOOL_NAME#
Required argument
- FUNCTION_NAME#
Required argument
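A hypothetical invocation; the actual tool, function, and argument names depend on the loaded tools (see 'gptme-util tools list'):

```shell
gptme-util tools call mytool myfunction -a key=value -a count=3
```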
info#
Show detailed information about a tool.
Displays tool instructions, examples, and token usage estimates. Use this to understand how a tool works and how to use it.
Output is truncated by default. Use -v for full output.
gptme-util tools info [OPTIONS] TOOL_NAME
Options
- -v, --verbose#
Show full output (not truncated)
- --no-examples#
Hide examples section
- --no-tokens#
Hide token estimates
- --json#
Output as JSON.
Arguments
- TOOL_NAME#
Required argument
list#
List available tools.
By default shows only available tools (dependencies installed). Use --all to include unavailable tools as well.
gptme-util tools list [OPTIONS]
Options
- --available, --all#
Show only available tools (default) or all tools, including unavailable ones.
- --langtags#
Show language tags for code execution
- --compact#
Compact single-line format
- --json#
Output as JSON.