# Setup
Run `squeezr setup` once after installation. It configures everything automatically — env vars, shell wrapper, auto-start, and TLS certificates. You don't need to edit shell profiles or set environment variables manually.
```shell
squeezr setup
squeezr start
```

That's all that is required for Claude Code, Aider, Gemini CLI, and Ollama. Codex requires one additional per-session step, described below.
## What setup does
### Environment variables
Sets the following variables in your user environment (Windows registry, or `~/.bashrc` / `~/.zshrc` on macOS/Linux):
```shell
ANTHROPIC_BASE_URL=http://localhost:8080
GEMINI_API_BASE_URL=http://localhost:8080
```

Your existing API keys are not touched. Squeezr forwards them to the upstream API automatically.
### Shell wrapper
Because child processes cannot modify the parent shell's environment, setup installs a persistent wrapper function so env vars refresh in the current terminal without restarting it:
- Windows: function added to your PowerShell `$PROFILE`
- Linux / macOS / WSL: function added to `~/.bashrc` or `~/.zshrc`
After setup runs, open a new terminal once (or source your profile) and the wrapper will be active in all future sessions.
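For the Unix shells, the installed wrapper might look roughly like the sketch below. The function name matches the CLI, but the `~/.squeezr/env.sh` path is an assumption for illustration — inspect your profile to see what setup actually wrote.

```shell
# Hypothetical sketch of the wrapper added to ~/.bashrc or ~/.zshrc.
# The env file path is illustrative, not Squeezr's actual layout.
squeezr() {
  # Re-source the env file written by `squeezr setup` so this shell
  # picks up refreshed variables without being restarted.
  if [ -f "$HOME/.squeezr/env.sh" ]; then
    . "$HOME/.squeezr/env.sh"
  fi
  command squeezr "$@"  # invoke the real binary, bypassing this function
}
```

Because the function runs in the current shell rather than a child process, any `export` lines in the env file take effect immediately.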
### Auto-start
Registers Squeezr to start automatically after a reboot:
- Windows: Task Scheduler or NSSM
- Linux: systemd user service
- macOS: launchd agent
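On Linux, the registered unit is a standard systemd user service. A sketch of what such a unit might contain (file name and binary path are assumptions; check `~/.config/systemd/user/` to see what setup actually installed):

```ini
# Hypothetical sketch of a Squeezr user service; the unit that
# squeezr setup installs may differ.
# Assumed path: ~/.config/systemd/user/squeezr.service
[Unit]
Description=Squeezr compression proxy

[Service]
ExecStart=%h/.local/bin/squeezr start
Restart=on-failure

[Install]
WantedBy=default.target
```

A user service of this shape is managed with `systemctl --user status squeezr` and starts at login without root privileges.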
### TLS certificates (for Codex MITM)
- Windows: imports the MITM CA into the Windows Certificate Store at user level (no admin required)
- macOS/Linux/WSL: generates a CA bundle at `~/.squeezr/mitm-ca/bundle.crt` and sets `NODE_EXTRA_CA_CERTS`
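On macOS/Linux/WSL you can sanity-check the result. The path below comes from the list above; the check itself is just a convenience, not part of Squeezr:

```shell
# Verify the CA bundle exists and NODE_EXTRA_CA_CERTS points at it
bundle="$HOME/.squeezr/mitm-ca/bundle.crt"
if [ -f "$bundle" ] && [ "$NODE_EXTRA_CA_CERTS" = "$bundle" ]; then
  echo "MITM CA configured"
else
  echo "MITM CA not configured; try re-running squeezr setup"
fi
```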
## Tool-specific notes
### Claude Code
Works automatically after setup. Claude Code reads `ANTHROPIC_BASE_URL` and routes all API calls through the proxy. No further configuration is needed.
### Aider
Set `ANTHROPIC_BASE_URL` (already done by setup) for Anthropic models, or configure `openai_base_url` in your `.aider.conf.yml` for OpenAI models:
```yaml
# .aider.conf.yml
openai_base_url: http://localhost:8080
```

### Gemini CLI
Works automatically after setup. Gemini CLI reads `GEMINI_API_BASE_URL`.
### Ollama
Squeezr detects Ollama automatically when it sees a dummy API key (e.g. `ollama`) or a local upstream URL. No extra configuration is needed if you are using Ollama with a tool that already targets the proxy.

To use a local model for the compression itself, configure it in `squeezr.toml`:
```toml
[local]
enabled = true
upstream_url = "http://localhost:11434"
compression_model = "qwen2.5-coder:1.5b"
```

### Codex
Codex uses WebSocket over TLS to chatgpt.com and cannot be proxied via a simple base URL override. Squeezr runs a TLS-terminating MITM proxy on port 8081.
Set `HTTPS_PROXY` only in the terminal session where you run Codex; do not set it globally, as it will break other tools:
```shell
# Run this in the terminal where you launch Codex — not globally
HTTPS_PROXY=http://localhost:8081 codex
```

See the Codex guide for the full technical breakdown.
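To avoid typing the prefix each time, you can wrap it in a small function of your own. The name `codexp` is made up here; the point is that `HTTPS_PROXY` stays scoped to the single Codex process and never leaks into the rest of the shell:

```shell
# Hypothetical convenience wrapper (not part of Squeezr): runs Codex
# with HTTPS_PROXY set only for that one process.
codexp() {
  HTTPS_PROXY=http://localhost:8081 codex "$@"
}
```

The per-command assignment form (`VAR=value cmd`) exports the variable to the child process only, which is exactly the scoping the warning above calls for.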
## Changing ports
To change the HTTP proxy port (default 8080) or the MITM proxy port (default 8081):
```shell
squeezr ports
```

Or edit `squeezr.toml` directly:
```toml
[proxy]
port = 9090
mitm_port = 9091
```

## Verify the connection
```shell
squeezr status
```