# v0.5.3: MiniMax provider, performance optimizations, and stability improvements
## New Features
- Add MiniMax provider support (MiniMax-M2.5/highspeed, 204K context)
- Add `expand_context_calls` telemetry logging for compression analytics
- Token-based compression thresholds for smarter `min_bytes` decisions

## Performance
- Memory/CPU optimization with pre-allocated slices
- Circuit breaker extraction into shared package for reliability
- Unified phantom tool operations through adapter interface

## Bug Fixes
- Fix dashboard savings display and registry race conditions
- Fix deferred tool loop edge cases in MCP scenarios
- Fix `expand_context` validation edge cases

## Improvements
- Consolidated auth capture across providers
- Enhanced dashboard UX with better session tracking
- Pipeline improvements for tool discovery and output compression

# Compresr

Instant history compaction and context optimization for AI agents

Website · Docs · Discord


## Context Gateway

Compresr is a YC-backed company building LLM prompt compression and context optimization.

Context Gateway sits between your AI agent (Claude Code, Cursor, etc.) and the LLM API. When your conversation gets too long, it compresses history in the background so you never wait for compaction.

## Quick Start

```sh
# Install gateway binary
curl -fsSL https://compresr.ai/api/install | sh

# Then select an agent (opens interactive TUI wizard)
context-gateway
```
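Before launching the wizard, you can confirm the binary is reachable. This is a minimal sketch, assuming the installer placed `context-gateway` in a directory on your `PATH`:

```shell
# Check that the gateway binary is on PATH before launching the wizard.
if command -v context-gateway >/dev/null 2>&1; then
  echo "context-gateway found: $(command -v context-gateway)"
else
  echo "context-gateway not found - re-run the install script"
fi
```

If the binary is not found, check that the installer's target directory is on your `PATH` for the current shell.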

The TUI wizard will help you:

- Choose an agent (claude_code, cursor, openclaw, or custom)
- Create or edit the configuration:
  - Summarizer model and API key
  - Slack notifications (optional)
  - Trigger threshold for compression (default: 75%)
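For reference, the wizard's choices map onto a small set of settings. The sketch below is purely illustrative: the real file format, location, and key names are not documented here and are assumptions, not the actual Context Gateway schema.

```yaml
# Hypothetical configuration sketch - key names are illustrative only,
# not the actual Context Gateway schema.
agent: claude_code
summarizer:
  model: your-summarizer-model
  api_key: ${SUMMARIZER_API_KEY}
notifications:
  slack: false            # enable if you want Slack alerts
trigger_threshold: 0.75   # compress once context is 75% full (default)
```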

Supported agents:

- claude_code: Claude Code IDE integration
- cursor: Cursor IDE integration
- openclaw: Open-source Claude Code alternative
- custom: Bring your own agent configuration

## What you'll notice

- No more waiting when the conversation hits its context limit
- Compaction happens instantly (the summary was pre-computed in the background)
- Check `logs/history_compaction.jsonl` to see what's happening
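The `.jsonl` extension suggests the log is newline-delimited JSON, so standard shell tools are enough to inspect it. A minimal sketch; only the log path comes from above, while the sample entry and its field names are invented for illustration:

```shell
# Append a fabricated sample entry (field names are illustrative only,
# not the real log schema), then show the most recent events.
mkdir -p logs
echo '{"event":"compaction","trigger":"threshold"}' >> logs/history_compaction.jsonl
tail -n 5 logs/history_compaction.jsonl
```

On a live gateway you would skip the fabricated entry and simply `tail -f` the file to watch compactions as they happen.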

## Contributing

We welcome contributions! Join our Discord to get involved.

Mirror of Compresr-ai/Context-Gateway (GitHub). License: Apache-2.0.

Languages: Go 90.5%, TypeScript 6.7%, Shell 2.4%, Makefile 0.3%