CLI Reference#

gptme provides the following commands:

This is the full CLI reference. For a more concise version, run gptme --help.

gptme#

gptme is a chat-CLI for LLMs, empowering them with tools to run shell commands, execute code, read and manipulate files, and more.

If PROMPTS are provided, a new conversation will be started with them. PROMPTS can be chained with the ‘-’ separator.

The interface provides user commands that can be used to interact with the system.

Available commands:
/undo Undo the last action
/log Show the conversation log
/tools Show available tools
/model List or switch models
/edit Edit the conversation in your editor
/rename Rename the conversation
/fork Copy the conversation using a new name
/summarize Summarize the conversation
/replay Replay tool operations
/impersonate Impersonate the assistant
/tokens Show the number of tokens used
/export Export conversation as HTML
/commit Ask assistant to git commit
/setup Setup gptme with completions and configuration
/help Show this help message
/exit Exit the program
Keyboard shortcuts:
Ctrl+X Ctrl+E Edit prompt in your editor
Ctrl+J Insert a new line without executing the prompt
gptme [OPTIONS] [PROMPTS]...

Options

--name <name>#

Name of conversation. Defaults to generating a random name.

-m, --model <model>#

Model to use, e.g. openai/gpt-5, anthropic/claude-sonnet-4-5. If only a provider is given, its default model is used.

-w, --workspace <workspace>#

Path to workspace directory. Pass ‘@log’ to create a workspace in the log directory.

--agent-path <agent_path>#

Path to agent workspace directory.

-r, --resume#

Load most recent conversation.

-y, --no-confirm#

Skip all confirmation prompts.

-n, --non-interactive#

Non-interactive mode. Implies --no-confirm.

--system <prompt_system>#

System prompt. Options: ‘full’, ‘short’, or something custom.

-t, --tools <tool_allowlist>#

Tools to allow as comma-separated list. Available: append, autocommit, autocompact, browser, chats, choice, complete, computer, gh, ipython, lessons, mcp, patch, precommit, read, save, screenshot, shell, subagent, time-awareness, tmux, todoread, todowrite, token-awareness, vision.

--tool-format <tool_format>#

Tool format to use. Options: markdown, xml, tool

--stream, --no-stream#

Stream responses

--show-hidden#

Show hidden system messages.

-v, --verbose#

Show verbose output.

--version#

Show version and configuration information

--profile#

Enable profiling and save results to gptme-profile-{timestamp}.prof

Arguments

PROMPTS#

Optional argument(s)
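
For example, a non-interactive run with two chained prompts might look like the following (the model, workspace path, and prompt text are illustrative):

gptme -n -m anthropic/claude-sonnet-4-5 -w . 'summarize README.md' - 'suggest improvements to the docs'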

gptme-server#

gptme server commands.

gptme-server [OPTIONS] COMMAND [ARGS]...

openapi#

Generate OpenAPI specification without starting server.

gptme-server openapi [OPTIONS]

Options

-o, --output <output>#

Output file path
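
For example, to write the specification to a file (the output path is illustrative):

gptme-server openapi -o openapi.json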

serve#

Starts a server and web UI for gptme.

Note that this is very much a work in progress, and is not yet ready for normal use.

gptme-server serve [OPTIONS]

Options

--debug#

Debug mode

-v, --verbose#

Verbose output

--model <model>#

Model to use by default; can be overridden in each request.

--host <host>#

Host to bind the server to.

--port <port>#

Port to run the server on.

--tools <tools>#

Tools to enable, comma separated.

--cors-origin <cors_origin>#

CORS origin to allow. Use ‘*’ to allow all origins.
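
For example, a local development invocation might look like this (the host, port, and origin values are illustrative):

gptme-server serve --host 127.0.0.1 --port 5700 --cors-origin 'http://localhost:8080'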

token#

Display the server authentication token.

gptme-server token [OPTIONS]

gptme-eval#

Run evals for gptme. Pass eval or suite names to run, or result files to print.

Output from evals will be captured (unless a single eval is run) and saved to the results directory.

gptme-eval [OPTIONS] [EVAL_NAMES_OR_RESULT_FILES]...

Options

-m, --model <_model>#

Model to use; can be passed multiple times. Can include tool format with @, e.g. ‘gpt-4@tool’.

-t, --timeout <timeout>#

Timeout for code generation

-p, --parallel <parallel>#

Number of parallel evals to run

--tool-format <tool_format>#

Tool format to use. Can also be specified per model with @format.

Options:

markdown | xml | tool

--use-docker#

Run evals in a Docker container for isolation (prevents host environment pollution)

Arguments

EVAL_NAMES_OR_RESULT_FILES#

Optional argument(s)
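
For example, the following runs one suite against two models, one with an explicit tool format (the suite name is a placeholder; the models, timeout, and parallelism are illustrative):

gptme-eval my-suite -m openai/gpt-4o -m 'anthropic/claude-sonnet-4-5@tool' --timeout 60 --parallel 2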

gptme-util#

Utility commands for gptme.

gptme-util [OPTIONS] COMMAND [ARGS]...

Options

-v, --verbose#

Enable verbose output.

chats#

Commands for managing chat logs.

gptme-util chats [OPTIONS] COMMAND [ARGS]...

list#

List conversation logs.

gptme-util chats list [OPTIONS]

Options

-n, --limit <limit>#

Maximum number of chats to show.

--summarize#

Generate LLM-based summaries for chats

read#

Read a specific chat log.

gptme-util chats read [OPTIONS] ID

Arguments

ID#

Required argument
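
For example, to list recent chats with summaries and then read one of them (the conversation ID is a placeholder):

gptme-util chats list -n 10 --summarize
gptme-util chats read 2024-01-01-example-chat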

context#

Commands for context generation.

gptme-util context [OPTIONS] COMMAND [ARGS]...

index#

Index a file or directory for context retrieval.

gptme-util context index [OPTIONS] PATH

Arguments

PATH#

Required argument

retrieve#

Search indexed documents for relevant context.

gptme-util context retrieve [OPTIONS] QUERY

Options

--full#

Show full context of search results

Arguments

QUERY#

Required argument
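
For example, to index a project directory and then search it (the path and query are illustrative):

gptme-util context index ./src
gptme-util context retrieve --full 'error handling'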

llm#

LLM-related utilities.

gptme-util llm [OPTIONS] COMMAND [ARGS]...

generate#

Generate a response from an LLM without any formatting.

gptme-util llm generate [OPTIONS] [PROMPT]

Options

-m, --model <model>#

Model to use (e.g. openai/gpt-4o, anthropic/claude-3-5-sonnet)

--stream, --no-stream#

Stream the response

Arguments

PROMPT#

Optional argument
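
For example (the model and prompt are illustrative):

gptme-util llm generate -m openai/gpt-4o --no-stream 'Write a haiku about terminals'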

mcp#

Commands for managing MCP servers.

gptme-util mcp [OPTIONS] COMMAND [ARGS]...

info#

Show detailed information about an MCP server.

Checks configured servers first, then searches registries if not found locally.

gptme-util mcp info [OPTIONS] SERVER_NAME

Arguments

SERVER_NAME#

Required argument

list#

List MCP servers and check their connection health.

gptme-util mcp list [OPTIONS]

test#

Test connection to a specific MCP server.

gptme-util mcp test [OPTIONS] SERVER_NAME

Arguments

SERVER_NAME#

Required argument
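
For example, to check configured servers and then inspect and test one of them (the server name is a placeholder):

gptme-util mcp list
gptme-util mcp info my-server
gptme-util mcp test my-server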

models#

Model-related utilities.

gptme-util models [OPTIONS] COMMAND [ARGS]...

info#

Show detailed information about a specific model.

gptme-util models info [OPTIONS] MODEL_NAME

Arguments

MODEL_NAME#

Required argument

list#

List available models.

gptme-util models list [OPTIONS]

Options

--provider <provider>#

Filter by provider (e.g., openai, anthropic, gemini)

--pricing#

Show pricing information

--vision#

Show only models with vision support

--reasoning#

Show only models with reasoning support

--simple#

Output one model per line as provider/model
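
For example, to list one provider's models with pricing and then show details for a specific model (the provider and model name are illustrative, in the provider/model form used above):

gptme-util models list --provider anthropic --pricing
gptme-util models info anthropic/claude-sonnet-4-5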

prompts#

Commands for prompt utilities.

gptme-util prompts [OPTIONS] COMMAND [ARGS]...

expand#

Expand a prompt to show what will be sent to the LLM.

Shows exactly how file paths in prompts are expanded into message content, using the same logic as the main gptme tool.

gptme-util prompts expand [OPTIONS] PROMPT...

Arguments

PROMPT#

Required argument(s)
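
For example (the prompt and referenced file path are illustrative):

gptme-util prompts expand 'Explain the error handling in README.md'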

providers#

Commands for managing custom providers.

gptme-util providers [OPTIONS] COMMAND [ARGS]...

list#

List configured custom OpenAI-compatible providers.

gptme-util providers list [OPTIONS]

tokens#

Commands for token counting.

gptme-util tokens [OPTIONS] COMMAND [ARGS]...

count#

Count tokens in text or file.

gptme-util tokens count [OPTIONS] [TEXT]

Options

-m, --model <model>#

Model to use for token counting.

-f, --file <file>#

File to count tokens in.

Arguments

TEXT#

Optional argument
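
For example, to count tokens in a file and in a literal string (the file path and model are illustrative):

gptme-util tokens count -f README.md -m openai/gpt-4o
gptme-util tokens count 'How many tokens is this?'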

tools#

Tool-related utilities.

gptme-util tools [OPTIONS] COMMAND [ARGS]...

call#

Call a tool with the given arguments.

gptme-util tools call [OPTIONS] TOOL_NAME FUNCTION_NAME

Options

-a, --arg <arg>#

Arguments to pass to the function. Format: key=value

Arguments

TOOL_NAME#

Required argument

FUNCTION_NAME#

Required argument
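
The general shape of a direct tool call is shown below; the tool, function, and argument names are placeholders, not a documented API, so check gptme-util tools info TOOL_NAME for what a tool actually exposes:

gptme-util tools call mytool myfunction -a key=value -a other=123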

info#

Show detailed information about a tool.

gptme-util tools info [OPTIONS] TOOL_NAME

Arguments

TOOL_NAME#

Required argument

list#

List available tools.

gptme-util tools list [OPTIONS]

Options

--available, --all#

Show only available tools or all tools

--langtags#

Show language tags for code execution
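
For example, to list all tools with their language tags and then show details for one of them (shell appears in the tool allowlist above):

gptme-util tools list --all --langtags
gptme-util tools info shell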