# Security Considerations
gptme is a powerful tool that can execute code and interact with your system. This document outlines security considerations and best practices for running gptme safely.
> **Warning:** gptme is designed to execute arbitrary code on your system. Always review commands before confirming execution, especially when using `--non-interactive` mode.
## Threat Model
gptme operates with the same permissions as the user running it. This means it can:
- Read and write files accessible to your user
- Execute shell commands
- Access network resources
- Interact with external APIs using configured credentials
Key principle: gptme should be run in environments where the user trusts the LLM’s outputs, or where outputs are carefully reviewed before execution.
## Tool-Specific Security Notes

### Shell Tool
The shell tool executes commands directly in a bash shell. All commands are logged and, in interactive mode, require user confirmation.
Recommendations:
- Review commands before execution
- Use `--non-interactive` only in controlled environments
- Consider running in a container or VM for untrusted workloads
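For the container route, a minimal sketch using Docker is shown below. The image name is a placeholder (build or pull your own), and only the working directory is mounted, so a misbehaving command cannot reach the rest of the host:

```bash
# Illustrative only: "my-gptme-image" is a placeholder, not an official image.
# --rm discards the container afterwards; only $PWD is mounted, so the
# assistant cannot touch the rest of the host filesystem.
docker run --rm -it \
    -v "$PWD:/workspace" -w /workspace \
    -e OPENAI_API_KEY \
    my-gptme-image gptme "your prompt here"
```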
### Browser Tool
The browser tool can access web resources. Security measures include:
- URL scheme validation: Only `http://` and `https://` URLs are permitted in the lynx backend (see the sketch below)
- Playwright backend: Uses browser sandboxing
Note: Be cautious about server-side request forgery (SSRF) risks when the LLM can control which URLs are fetched.
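As an illustration of the scheme check (a sketch of the idea, not the lynx backend's actual code):

```bash
# Sketch of URL scheme validation: accept only http(s), reject file://,
# ftp:// and anything else the model might produce.
url="$1"
case "$url" in
  http://*|https://*)
    echo "OK to fetch: $url"
    ;;
  *)
    echo "Refusing non-http(s) URL: $url" >&2
    exit 1
    ;;
esac
```

A scheme check alone does not stop SSRF, since an `http://` URL can still point at internal services; network-level isolation of the browser is the stronger control.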
### Screenshot Tool
The screenshot tool captures screen content and saves to files. Security measures include:
- Path validation: Screenshots are restricted to the configured output directory
- Path traversal protection: Attempts to write outside the output directory are blocked
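The traversal check boils down to resolving the requested path and confirming it stays under the output directory. A minimal sketch of that idea in bash (using GNU coreutils `realpath`; the directory name is a placeholder, and this is not gptme's actual implementation):

```bash
# Sketch: resolve the requested filename and require that the result
# stays inside the output directory after symlinks and ".." expand.
# OUTPUT_DIR is a placeholder, not gptme's actual default.
OUTPUT_DIR="$(realpath -m "$HOME/screenshots")"
target="$(realpath -m "$OUTPUT_DIR/$1")"
case "$target" in
  "$OUTPUT_DIR"/*) echo "Saving to $target" ;;
  *) echo "Blocked: $target escapes $OUTPUT_DIR" >&2; exit 1 ;;
esac
```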
### Python Tool
The Python/IPython tool executes arbitrary Python code.
Important: This is intentionally powerful and can execute any code. Use with appropriate caution.
### Save/Patch Tools
These tools write files to disk. Current limitations:
- Can write to any location accessible by the user
- Path traversal is possible
Recommendation: Review file paths carefully before confirming file operations.
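A quick manual review step is to resolve whatever path the assistant proposes before confirming, so `..` components and symlinks become visible (GNU coreutils `realpath -m` handles paths that do not exist yet):

```bash
# The argument is a made-up example of a suspicious proposal; the command
# prints the fully resolved absolute path where the write would land.
realpath -m "src/../../outside-the-project/notes.txt"
```

If the resolved path falls outside the directory you expect, reject the operation.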
## Best Practices

### For Interactive Use
- Always review commands before confirming execution
- Check file paths when saving or modifying files
- Be cautious with URLs - verify domains before allowing browser access
- Use credential isolation - don’t expose sensitive credentials in prompts
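For credential isolation, keep keys in environment variables or a secret manager and never paste them into prompt text, since prompts are sent to the model provider and stored in conversation logs. A sketch (the variable name depends on your provider, and `pass` is just one example of a secret manager):

```bash
# Keep the key in the environment, out of the prompt and the log.
export OPENAI_API_KEY="$(pass show openai/api-key)"  # or your secret manager
gptme "summarize ./report.md"                        # prompt contains no secrets
```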
### For Automated/Non-Interactive Use
- Run in isolation - use containers, VMs, or sandboxed environments (see the sketch after this list)
- Limit permissions - run as a restricted user when possible
- Monitor activity - log all tool executions for audit
- Use timeouts - prevent runaway processes with appropriate timeouts
- Validate inputs - sanitize any external inputs before passing them to gptme
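A sketch tying several of these together: a dedicated low-privilege user, a hard timeout, and an appended audit log. The user name, timeout value, and log path are all illustrative:

```bash
# Run non-interactively as a restricted user, kill the process after
# 10 minutes, and keep a timestamped transcript for audit.
sudo -u gptme-runner timeout 600 \
    gptme --non-interactive "run the nightly report" \
    2>&1 | tee -a "/var/log/gptme/$(date +%F).log"
```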
## Docker Isolation
For enhanced security, `gptme-eval` supports Docker isolation:

```bash
gptme-eval --use-docker
```
This runs evaluations in isolated containers with limited filesystem access.
## Reporting Security Issues
If you discover a security vulnerability in gptme, please report it responsibly:
- Do not open a public issue for security vulnerabilities
- Contact the maintainers directly via email or private disclosure
- Allow reasonable time for the issue to be addressed before public disclosure
See `SECURITY.md` in the repository for detailed reporting instructions.