API Reference#
Here is the API reference for gptme.
core#
Some of the core classes and functions in gptme.
Message#
A message in the conversation.
- class gptme.message.Message#
A message in the assistant conversation.
- role#
The role of the message sender (system, user, or assistant).
- content#
The content of the message.
- timestamp#
The timestamp of the message.
- files#
Files attached to the message, e.g. images for vision.
- pinned#
Whether this message should be pinned to the top of the chat, and never context-trimmed.
- hide#
Whether this message should be hidden from the chat output (but still be sent to the assistant).
- quiet#
Whether this message should be printed on execution (will still print on resume, unlike hide). This is not persisted to the log file.
- __init__(role: ~typing.Literal['system', 'user', 'assistant'], content: str, timestamp: ~datetime.datetime = <factory>, files: list[~pathlib.Path] = <factory>, call_id: str | None = None, pinned: bool = False, hide: bool = False, quiet: bool = False) None #
- format(oneline: bool = False, highlight: bool = False, max_length: int | None = None) str #
Format the message for display.
- Parameters:
oneline – Whether to format the message as a single line
highlight – Whether to highlight code blocks
max_length – Maximum length of the message. If None, no truncation is applied. If set, will truncate at first newline or max_length, whichever comes first.
- classmethod from_toml(toml: str) Self #
Converts a TOML string to a message.
The string can be a single [[message]].
Codeblock#
A codeblock in a message, possibly executable by tools.
- class gptme.codeblock.Codeblock#
Codeblock(lang: str, content: str, path: str | None = None, start: int | None = None)
LogManager#
Holds the current conversation as a list of messages, saves and loads the conversation to and from files, supports branching, etc.
- class gptme.logmanager.ConversationMeta#
Metadata about a conversation.
- class gptme.logmanager.Log#
Log(messages: list[gptme.message.Message] = <factory>)
- class gptme.logmanager.LogManager#
Manages a conversation log.
- __init__(log: list[Message] | None = None, logdir: str | Path | None = None, branch: str | None = None, lock: bool = True)#
- gptme.logmanager.get_conversations() Generator[ConversationMeta, None, None] #
Returns all conversations, excluding ones used for testing, evals, etc.
- gptme.logmanager.get_user_conversations() Generator[ConversationMeta, None, None] #
Returns all user conversations, excluding ones used for testing, evals, etc.
- gptme.logmanager.list_conversations(limit: int = 20, include_test: bool = False) list[ConversationMeta] #
List conversations with a limit.
- Parameters:
limit – Maximum number of conversations to return
include_test – Whether to include test conversations
Config#
Configuration for gptme on user-level (Global config), project-level (Project config), and conversation-level.
- class gptme.config.AgentConfig#
Configuration for agent-specific settings.
- class gptme.config.ChatConfig#
Configuration for a chat session.
- __init__(_logdir: ~pathlib.Path | None = None, name: str | None = None, model: str | None = None, tools: list[str] | None = None, tool_format: ToolFormat | None = None, stream: bool = True, interactive: bool = True, workspace: ~pathlib.Path = <factory>, agent: ~pathlib.Path | None = None, env: dict = <factory>, mcp: ~gptme.config.MCPConfig | None = None) None #
- property agent_config: AgentConfig | None#
Get the agent configuration if available.
- classmethod from_dict(config_data: dict) Self #
Create a ChatConfig instance from a dictionary. Warns about unknown keys.
- class gptme.config.Config#
A complete configuration object, including user and project configurations.
It is meant to be used to resolve configuration values, not to be passed around everywhere. Care must be taken to avoid this becoming a “god object” passed around loosely, or frequently used as a global.
- __init__(user: ~gptme.config.UserConfig = <factory>, project: ~gptme.config.ProjectConfig | None = None, chat: ~gptme.config.ChatConfig | None = None) None #
- classmethod from_workspace(workspace: Path) Self #
Load the configuration from a workspace directory, clearing any cache.
- get_env(key: str, default: str | None = None) str | None #
Gets an environment variable, checks the config file if it’s not set in the environment.
- class gptme.config.MCPConfig#
Configuration for Model Context Protocol support, including which MCP servers to use.
- class gptme.config.MCPServerConfig#
Configuration for an MCP server.
- class gptme.config.ProjectConfig#
Project-level configuration, such as which files to include in the context by default.
This is loaded from a gptme.toml Project config file in the project directory or .github directory.
- __init__(_workspace: ~pathlib.Path | None = None, base_prompt: str | None = None, prompt: str | None = None, files: list[str] | None = None, context_cmd: str | None = None, rag: ~gptme.config.RagConfig = <factory>, agent: ~gptme.config.AgentConfig | None = None, env: dict[str, str] = <factory>, mcp: ~gptme.config.MCPConfig | None = None) None #
- class gptme.config.RagConfig#
Configuration for retrieval-augmented generation support.
- class gptme.config.UserConfig#
User-level configuration, such as user-specific prompts and environment variables.
- class gptme.config.UserPromptConfig#
User-level configuration for user-specific prompts and project descriptions.
- gptme.config.get_project_config(workspace: Path | None) ProjectConfig | None #
Get a cached copy of or load the project configuration from a gptme.toml file in the workspace or .github directory.
Run reload_config() or Config.from_workspace() to reset the cache and reload the project config.
- gptme.config.load_user_config(path: str | None = None) UserConfig #
Load the user configuration from the config file.
- gptme.config.set_config_from_workspace(workspace: Path)#
Set the configuration to use a specific workspace, possibly having a project config.
- gptme.config.setup_config_from_cli(workspace: Path, logdir: Path, model: str | None = None, tool_allowlist: str | None = None, tool_format: ToolFormat | None = None, stream: bool = True, interactive: bool = True, agent_path: Path | None = None) Config #
Initialize and return a complete config from CLI arguments and workspace.
Handles the precedence: CLI args -> saved conversation config -> env vars -> config files -> defaults
prompts#
See Prompts for more information.
tools#
Supporting classes and functions for creating and using tools.
- class gptme.tools.Parameter#
A wrapper for function parameters to convert them to JSON schema.
- class gptme.tools.ToolSpec#
Tool specification. Defines a tool that can be used by the agent.
- Parameters:
name – The name of the tool.
desc – A description of the tool.
instructions – Instructions on how to use the tool.
instructions_format – Per tool format instructions when needed.
examples – Example usage of the tool.
functions – Functions registered in the IPython REPL.
init – An optional function that is called when the tool is first loaded.
execute – An optional function that is called when the tool executes a block.
block_types – A list of block types that the tool will execute.
available – Whether the tool is available for use.
parameters – Descriptor of the parameters used by this tool.
load_priority – Influences the loading order of this tool; higher values load later.
disabled_by_default – Whether this tool should be disabled by default.
hooks – Hooks to register when this tool is loaded.
- __init__(name: str, desc: str, instructions: str = '', instructions_format: dict[str, str] = <factory>, examples: str | ~collections.abc.Callable[[str], str] = '', functions: list[~collections.abc.Callable] | None = None, init: ~collections.abc.Callable[[], ~gptme.tools.base.ToolSpec] | None = None, execute: ~gptme.tools.base.ExecuteFuncGen | ~gptme.tools.base.ExecuteFuncMsg | None = None, block_types: list[str] = <factory>, available: bool | ~collections.abc.Callable[[], bool] = True, parameters: list[~gptme.tools.base.Parameter] = <factory>, load_priority: int = 0, disabled_by_default: bool = False, is_mcp: bool = False, hooks: dict[str, tuple[str, ~collections.abc.Callable, int]] = <factory>, commands: dict[str, ~collections.abc.Callable] = <factory>) None #
- class gptme.tools.ToolUse#
ToolUse(tool: str, args: list[str] | None, content: str | None, kwargs: dict[str, str] | None = None, call_id: str | None = None, start: int | None = None)
- __init__(tool: str, args: list[str] | None, content: str | None, kwargs: dict[str, str] | None = None, call_id: str | None = None, start: int | None = None) None #
- execute(confirm: Callable[[str], bool]) Generator[Message, None, None] #
Executes a tool-use tag and returns the output.
- classmethod iter_from_content(content: str, tool_format_override: Literal['markdown', 'xml', 'tool'] | None = None, streaming: bool = False) Generator[ToolUse, None, None] #
Returns all ToolUse blocks in a message (markdown, XML, or tool format), in order.
- Parameters:
content – The message content to parse
tool_format_override – Optional tool format override
streaming – If True, requires blank line after code blocks for completion
server#
See Server for more information.
Server for gptme.