Security Considerations#

gptme is a powerful tool that can execute code and interact with your system. This document outlines security considerations and best practices for running gptme safely.

Warning

gptme is designed to execute arbitrary code on your system. Always review commands before confirming execution, and be especially careful with --non-interactive mode, which skips the confirmation step entirely.

Threat Model#

gptme operates with the same permissions as the user running it. This means it can:

  • Read and write files accessible to your user

  • Execute shell commands

  • Access network resources

  • Interact with external APIs using configured credentials

Key principle: gptme should be run in environments where the user trusts the LLM’s outputs, or where outputs are carefully reviewed before execution.

Tool-Specific Security Notes#

Shell Tool#

The shell tool executes commands directly in a bash shell. All commands are logged and, in interactive mode, require user confirmation.

Recommendations:

  • Review commands before execution

  • Use --non-interactive only in controlled environments

  • Consider running in a container or VM for untrusted workloads
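
For untrusted workloads, a throwaway container keeps shell commands away from the host filesystem. A minimal sketch, assuming an image named gptme (with gptme as its entrypoint) built from the repository's Dockerfile:

# Sketch only: the image name "gptme" and its entrypoint are
# assumptions; adapt to however you package gptme.
# Pass only the credentials needed and mount only the project directory.
docker run --rm -it \
  -e OPENAI_API_KEY \
  -v "$PWD:/workspace" \
  -w /workspace \
  gptme "run the test suite and summarize failures"

Anything the model writes or deletes is then confined to the mounted directory, and the container disappears on exit.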

Browser Tool#

The browser tool can access web resources. Security measures include:

  • URL scheme validation: Only http:// and https:// URLs are permitted in the lynx backend (the idea is sketched below)

  • Playwright backend: Uses browser sandboxing

Note: Be cautious of SSRF (server-side request forgery) risks when the LLM can control URLs: a crafted URL can point the browser at internal services, such as a cloud metadata endpoint, that are not reachable from outside your network.
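
As a rough illustration of the scheme allow-list idea (not gptme's actual implementation), a wrapper around the lynx backend might look like:

# Illustrative only -- gptme performs this check internally.
# Reject anything that is not plain http(s) before fetching.
url="$1"
case "$url" in
  http://*|https://*) lynx -dump "$url" ;;
  *) echo "blocked: unsupported URL scheme: $url" >&2; exit 1 ;;
esac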

Screenshot Tool#

The screenshot tool captures screen content and saves to files. Security measures include:

  • Path validation: Screenshots are restricted to the configured output directory

  • Path traversal protection: Attempts to write outside the output directory are blocked
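
The traversal check amounts to resolving the requested path and verifying that it still falls under the output directory. A conceptual sketch (gptme's real check is implemented in Python; ~/screenshots stands in for the configured output directory):

# Conceptual sketch of path-traversal protection. realpath -m
# resolves ".." segments even if the file does not exist yet.
outdir="$(realpath -m ~/screenshots)"
target="$(realpath -m "$outdir/$1")"
case "$target" in
  "$outdir"/*) echo "saving to $target" ;;
  *) echo "blocked: $1 escapes $outdir" >&2; exit 1 ;;
esac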

Python Tool#

The Python/IPython tool executes arbitrary Python code.

Important: This is intentionally powerful and can execute any code. Use with appropriate caution.
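
If a session does not need code execution, consider enabling only the tools it requires. The --tools option shown below is an assumption; confirm the exact flag with gptme --help for your version:

# Assumed invocation -- check `gptme --help` for the exact option name.
# Leaving out ipython means generated Python cannot be executed directly.
gptme --tools save,patch,shell "refactor the config loader"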

Save/Patch Tools#

These tools write files to disk. Current limitations:

  • Can write to any location accessible by the user

  • Path traversal (e.g. via ../ segments) is possible

Recommendation: Review file paths carefully before confirming file operations.

Best Practices#

For Interactive Use#

  1. Always review commands before confirming execution

  2. Check file paths when saving or modifying files

  3. Be cautious with URLs - verify domains before allowing browser access

  4. Use credential isolation - don’t expose sensitive credentials in prompts
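
For credential isolation, keep secrets in the environment (or a secret manager) rather than in prompt text, so they never enter the conversation log. A sketch using pass as one example of a secret store:

# Export the secret and reference it indirectly; the single quotes keep
# the literal string $SERVICE_TOKEN (not its value) in the prompt.
export SERVICE_TOKEN="$(pass show my-service/token)"
gptme 'write a curl request that reads the token from $SERVICE_TOKEN'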

For Automated/Non-Interactive Use#

  1. Run in isolation - use containers, VMs, or sandboxed environments

  2. Limit permissions - run as a restricted user when possible

  3. Monitor activity - log all tool executions for audit

  4. Use timeouts - cap execution time to stop runaway processes (a sketch combining points 2-4 follows this list)

  5. Validate inputs - sanitize any external inputs before passing to gptme
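
A minimal sketch combining points 2-4 above; the gptme-runner account, the 600-second ceiling, and the log file are examples, not gptme defaults:

# Restricted user + hard timeout + an audit trail of all output.
# "gptme-runner" is a hypothetical unprivileged account.
sudo -u gptme-runner \
  timeout 600 \
  gptme --non-interactive "update the changelog" \
  2>&1 | tee -a gptme-audit.log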

Docker Isolation#

For enhanced security, gptme-eval supports Docker isolation:

gptme-eval --use-docker

This runs evaluations in isolated containers with limited filesystem access.

Reporting Security Issues#

If you discover a security vulnerability in gptme, please report it responsibly:

  1. Do not open a public issue for security vulnerabilities

  2. Contact the maintainers directly via email or private disclosure

  3. Allow reasonable time for the issue to be addressed before public disclosure

See SECURITY.md in the repository for detailed reporting instructions.