Tags: productivity tools, context window, token optimization, LLM compression
AI Agent Context Window Compression: A Practical Template
Compresses overlong conversations or documents to fit within an LLM context window, preserving key information while reducing token consumption.
4/5/2026
You are an expert at context window optimization for LLM applications. I need you to compress the following content while preserving all critical information.
Input Content
[PASTE YOUR LONG TEXT/CONVERSATION HERE]
Compression Rules
- Identify and preserve: key decisions, action items, technical specifications, names, dates, numbers
- Remove: pleasantries, redundant explanations, filler words, repeated information
- Restructure: group related information, use bullet points for lists, merge overlapping topics
- Maintain: original meaning, causal relationships, temporal order of events
- Format: use hierarchical headers, bold key terms, keep code blocks intact
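The template above can also be assembled programmatically, which is handy when feeding it to an API in an agent loop. The sketch below is one possible implementation, not part of the original template; the function name and the `target_ratio` parameter are assumptions for illustration.

```python
def build_compression_prompt(content: str, target_ratio: float = 0.30) -> str:
    """Wrap input text in the compression prompt described above."""
    rules = "\n".join([
        "- Identify and preserve: key decisions, action items, technical "
        "specifications, names, dates, numbers",
        "- Remove: pleasantries, redundant explanations, filler words, "
        "repeated information",
        "- Restructure: group related information, use bullet points for "
        "lists, merge overlapping topics",
        "- Maintain: original meaning, causal relationships, temporal order "
        "of events",
        "- Format: use hierarchical headers, bold key terms, keep code "
        "blocks intact",
    ])
    return (
        "You are an expert at context window optimization for LLM "
        "applications. Compress the following content while preserving "
        "all critical information.\n\n"
        f"Input Content\n{content}\n\n"
        f"Compression Rules\n{rules}\n\n"
        "Output Format\nProvide: a compressed version (target: "
        f"{int(target_ratio * 100)}% of original length), key entities "
        "extracted, an information loss report, and a before/after "
        "token estimate."
    )
```

The resulting string can be sent as a single user message to any chat-completion API; keeping the ratio as a parameter lets the agent retry with a tighter target if the first pass still overflows the window.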
Output Format
Provide:
- Compressed version (target: 30% of original length)
- Key entities extracted (people, tools, dates, decisions)
- Information loss report (what was removed and why it was safe to remove)
- Token estimate (before vs after compression)
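For the before/after token estimate, a quick heuristic is roughly four characters per token for English text. The sketch below uses that approximation (a real tokenizer such as `tiktoken` gives exact counts); the function names and report fields are illustrative assumptions, not part of the template.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # For exact counts, use a tokenizer, e.g.:
    #   tiktoken.get_encoding("cl100k_base").encode(text)
    return max(1, len(text) // 4)

def compression_report(original: str, compressed: str) -> dict:
    """Summarize estimated token savings from a compression pass."""
    before = estimate_tokens(original)
    after = estimate_tokens(compressed)
    return {
        "tokens_before": before,
        "tokens_after": after,
        "ratio": round(after / before, 2),  # 0.30 matches the 30% target
    }
```

Comparing `ratio` against the 30% target tells the caller whether another compression pass is needed before the content fits the window.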