Auto-Condense
Automatic context management through LLM-based condensation
What it is
Auto-condense is a context management system that automatically compresses conversation history when it approaches the model's context window limit.
Instead of losing early messages, Skycode uses an LLM to create a concise summary of the previous context. This enables long sessions without losing important information.
How it works
- Skycode tracks token count in the current session
- When context approaches the threshold, condensation triggers
- The LLM analyzes the history and creates a compact summary: key decisions, modified files, current task state
- The summary replaces old messages — recent messages remain as-is
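The steps above can be sketched as a small loop. This is an illustrative sketch only, not Skycode's actual implementation: `count_tokens` and `summarize` are hypothetical stand-ins for the tokenizer and the LLM summarization call.

```python
def maybe_condense(messages, count_tokens, summarize,
                   limit, threshold=0.8, keep_recent=4):
    """Condense older messages into a summary once usage crosses the threshold.

    Sketch under assumptions: `count_tokens(msg)` returns a message's token
    count, `summarize(msgs)` is the LLM call producing a compact summary
    (key decisions, modified files, current task state).
    """
    used = sum(count_tokens(m) for m in messages)
    if used < threshold * limit:
        return messages  # below the marker: keep history unchanged

    # Split history: everything but the most recent messages gets summarized.
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old)

    # The summary replaces the old messages; recent ones remain as-is.
    return [{"role": "system", "content": summary}] + recent
```

The key design point is that condensation is lossy but targeted: only the oldest part of the history is compressed, so the model keeps verbatim access to the latest exchange.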
When to enable
- Long sessions — refactoring, migration, series of related tasks
- Models with small context — 8K–32K tokens
- Complex projects — many files, frequent context switches
When not needed
- Short tasks (1-2 messages)
- Models with large context (200K+) on small tasks
Configuration
Settings → Context → Auto-Condense
Threshold
The trigger threshold is set via the context window bar in the task header. Click the bar to place a marker; when context usage exceeds that mark, condensation runs automatically.
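Assuming the marker position corresponds to a fraction of the model's context window (an assumption for illustration, not documented behavior), the trigger point is a simple product:

```python
context_window = 32_768  # e.g. a model with a 32K-token window
marker = 0.75            # marker placed at 75% of the bar (hypothetical value)

# Condensation triggers once session usage exceeds this many tokens.
trigger_at = int(context_window * marker)
print(trigger_at)  # 24576
```

Placing the marker lower trades more frequent summarization for more headroom before the window fills.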
Manual condensation
In addition to automatic condensation, you can trigger it manually at any point; the AI then uses the condense tool to summarize the current context.