Config File Reference

Squeezr uses TOML for configuration. This page is the complete reference for every key in the config file, organized by section.

File locations

# Global config — next to the installed binary (in npm global prefix)
squeezr.toml

# Project config — deep-merged over the global config; applies per-repo overrides
.squeezr.toml   (in your project root)

Use squeezr config to print the resolved path and current values.
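Because the project config is deep-merged over the global one, a repo's .squeezr.toml only needs the keys it changes; everything else falls through to the global file. A minimal sketch (the override value is illustrative):

```toml
# .squeezr.toml — per-repo overrides only
[compression]
threshold = 1200   # this repo compresses less aggressively
# all other [compression] keys are inherited from the global squeezr.toml
```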

[proxy]

Controls the proxy server ports.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| port | integer | 8080 | HTTP proxy port (Claude Code, Aider, Gemini CLI). |
| mitm_port | integer | 8081 | MITM proxy port (Codex). Defaults to port + 1. |
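If 8080 is already taken on your machine, you only need to set port — mitm_port follows it. A sketch (the port values here are illustrative):

```toml
[proxy]
port = 9090
# mitm_port omitted — defaults to port + 1, i.e. 9091
```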

[compression]

Controls how and when content is compressed.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| threshold | integer | 800 | Minimum content size (chars) to trigger compression. |
| keep_recent | integer | 3 | Last N tool results to leave uncompressed. |
| compress_system_prompt | boolean | true | Compress and cache the system prompt. |
| compress_conversation | boolean | false | Also compress assistant messages (aggressive mode). |
| skip_tools | array | [] | Tool names to never compress (e.g. ["Read"]). |
| only_tools | array | [] | Only compress these tools, skip all others (e.g. ["Bash"]). |
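As a sketch, a project that only wants Bash output compressed — and only when it is reasonably large — could set (threshold value here is illustrative):

```toml
[compression]
threshold = 1000
only_tools = ["Bash"]   # results from every other tool pass through untouched
```

skip_tools is the inverse filter; listing a tool in both would be contradictory, so pick one style per config.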

[cache]

Controls in-process caching of compressed results.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | boolean | true | Enable the cache. |
| max_entries | integer | 1000 | Maximum number of cached compressed results. |

[adaptive]

Adaptive pressure automatically increases compression aggressiveness as the context window fills up.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | boolean | true | Enable adaptive compression. |
| low_threshold | integer | 1500 | Min chars to compress when context is below 50%. |
| mid_threshold | integer | 800 | Min chars to compress when context is 50–75%. |
| high_threshold | integer | 400 | Min chars to compress when context is 75–90%. |
| critical_threshold | integer | 150 | Min chars to compress when context exceeds 90%. Git diff context is set to 0. |
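With the defaults above, a 600-char tool result is left alone while the context is under half full, but gets compressed once it passes 75%. To start compressing earlier across the board, lower the bands — a sketch with illustrative values:

```toml
[adaptive]
enabled = true
# Tighter bands than the defaults: compression kicks in sooner at every fill level
low_threshold = 1000
mid_threshold = 500
high_threshold = 250
critical_threshold = 100
```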

[local]

Configuration for local model servers (Ollama) used as the compression backend.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | boolean | true | Enable local model support. |
| upstream_url | string | "http://localhost:11434" | URL of the local model server. |
| compression_model | string | "qwen2.5-coder:1.5b" | Local model to use for AI compression. |
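To point Squeezr at an Ollama server on another machine, or at a different model, override both keys — a sketch, where the host address and model tag are stand-in examples (any model you have pulled into that Ollama instance should be usable here):

```toml
[local]
enabled = true
upstream_url = "http://192.168.1.50:11434"   # illustrative remote Ollama host
compression_model = "qwen2.5-coder:3b"       # illustrative alternative model tag
```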

Full example

# squeezr.toml

[proxy]
port = 8080
mitm_port = 8081

[compression]
threshold = 800
keep_recent = 3
compress_system_prompt = true
compress_conversation = false
# skip_tools = ["Read"]
# only_tools = ["Bash"]

[cache]
enabled = true
max_entries = 1000

[adaptive]
enabled = true
low_threshold = 1500
mid_threshold = 800
high_threshold = 400
critical_threshold = 150

[local]
enabled = true
upstream_url = "http://localhost:11434"
compression_model = "qwen2.5-coder:1.5b"