# Continue Extension

The Continue extension for VS Code and JetBrains calls the AI API directly from the editor process, which runs on your machine. No tunnel is needed: http://localhost:8080 works directly.
## Setup
Make sure Squeezr is running first:

```shell
squeezr start
squeezr status   # must say "running"
```

Then edit `~/.continue/config.json` and add Squeezr as a model.
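Before wiring up Continue, it can help to confirm that something is actually listening on the proxy port. A minimal Python sketch, assuming Squeezr exposes the standard OpenAI-compatible `/v1/models` route implied by the `apiBase` used in the configs on this page:

```python
import urllib.request
import urllib.error


def squeezr_reachable(base_url: str = "http://localhost:8080",
                      timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers on the Squeezr port."""
    # The response body does not matter here; we only care whether
    # the port answers at all.
    try:
        urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server answered, even if with an error status (e.g. 401).
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: Squeezr is not running.
        return False


if __name__ == "__main__":
    if squeezr_reachable():
        print("Squeezr is listening on localhost:8080")
    else:
        print("Nothing on localhost:8080; run `squeezr start`")
```

This is only a liveness check; it does not verify that your upstream key or model name is valid.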
### With an Anthropic key (Claude models)
```json
{
  "models": [
    {
      "title": "Claude via Squeezr",
      "provider": "openai",
      "model": "claude-sonnet-4-5",
      "apiKey": "sk-ant-YOUR_ANTHROPIC_KEY",
      "apiBase": "http://localhost:8080/v1"
    }
  ]
}
```

### With an OpenAI key
```json
{
  "models": [
    {
      "title": "GPT-4o via Squeezr",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "sk-YOUR_OPENAI_KEY",
      "apiBase": "http://localhost:8080/v1"
    }
  ]
}
```

### With a local model (Ollama)
```json
{
  "models": [
    {
      "title": "Llama via Squeezr",
      "provider": "openai",
      "model": "llama3.2",
      "apiKey": "local",
      "apiBase": "http://localhost:8080/v1"
    }
  ]
}
```

Make sure your `squeezr.toml` has `upstream_url` pointing to Ollama:

```toml
upstream_url = "http://localhost:11434"
compression_model = "llama3.2"
```

## Restart VS Code
After editing config.json, reload or restart VS Code. The Continue panel will show your new model in the model selector.
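If you manage dotfiles with scripts, the config edit can be automated. A minimal sketch, with the field names taken from the examples on this page; the duplicate-title check and the `install` helper are my own additions, and this assumes the JSON `config.json` format shown above:

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".continue" / "config.json"


def add_squeezr_model(config: dict, title: str, model: str, api_key: str) -> dict:
    """Append a Squeezr-backed model entry; skip it if the title already exists."""
    entry = {
        "title": title,
        "provider": "openai",  # Squeezr is addressed via the OpenAI-style provider
        "model": model,
        "apiKey": api_key,
        "apiBase": "http://localhost:8080/v1",
    }
    models = config.setdefault("models", [])
    if not any(m.get("title") == title for m in models):
        models.append(entry)
    return config


def install(title: str, model: str, api_key: str) -> None:
    """Read, patch, and rewrite ~/.continue/config.json."""
    config = json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    new_config = add_squeezr_model(config, title, model, api_key)
    CONFIG_PATH.write_text(json.dumps(new_config, indent=2))
```

For example, `install("Claude via Squeezr", "claude-sonnet-4-5", "sk-ant-YOUR_ANTHROPIC_KEY")` adds the Anthropic entry shown earlier. You still need to reload the editor afterwards.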
## Verifying compression is working
Open a long conversation in Continue, then check your savings:

```shell
squeezr gain
```

## JetBrains
The Continue plugin for JetBrains reads the same `~/.continue/config.json` file, so the config above works there without additional changes.
## Troubleshooting
### Connection refused on localhost:8080

Squeezr is not running. Start it with `squeezr start`.
### 401 Unauthorized

The `apiKey` in config.json is forwarded to the API as-is. Make sure it is a valid key for the model you selected.
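A 401 often just means the wrong kind of key is paired with the model. As a rough heuristic, current Anthropic keys begin with `sk-ant-` and OpenAI keys with `sk-`; a sketch of a sanity check built on that assumption (it cannot actually validate a key, only catch obvious mismatches):

```python
def key_matches_model(model: str, api_key: str) -> bool:
    """Heuristic: does the key's prefix match the model's provider?"""
    if model.startswith("claude"):
        # Anthropic API keys are prefixed sk-ant-
        return api_key.startswith("sk-ant-")
    if model.startswith("gpt"):
        # OpenAI keys are prefixed sk- but not sk-ant-
        return api_key.startswith("sk-") and not api_key.startswith("sk-ant-")
    # Local models (e.g. via Ollama) accept any placeholder key.
    return True
```

For instance, `key_matches_model("gpt-4o", "sk-ant-...")` returns `False`, pointing at an Anthropic key pasted into an OpenAI model entry.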
### Model not found / 404

The `model` field must match a model name that your API key has access to. For Anthropic, use the exact API model ID (e.g. `claude-sonnet-4-5`, not a display name like "Claude Sonnet").