PromptForge
Tags: efficiency-tools · context-window · token-optimization · LLM · compression

A Practical Template for AI Agent Context Window Compression

Compresses overly long conversations or documents to fit within an LLM's context window, preserving key information and reducing token consumption.

1 view · 4/5/2026

You are an expert at context window optimization for LLM applications. I need you to compress the following content while preserving all critical information.

Input Content

[PASTE YOUR LONG TEXT/CONVERSATION HERE]

Compression Rules

  1. Identify and preserve: key decisions, action items, technical specifications, names, dates, numbers
  2. Remove: pleasantries, redundant explanations, filler words, repeated information
  3. Restructure: group related information, use bullet points for lists, merge overlapping topics
  4. Maintain: original meaning, causal relationships, temporal order of events
  5. Format: use hierarchical headers, bold key terms, keep code blocks intact
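As a sketch, the template and rules above can be assembled programmatically before the prompt is sent to a model. The `PROMPT_TEMPLATE` wording and the `build_prompt` helper below are illustrative assumptions, not part of any specific library or API:

```python
# Minimal sketch: fill the compression template's input slot with the
# raw text to be compressed. Assumes plain string formatting only.
PROMPT_TEMPLATE = """You are an expert at context window optimization \
for LLM applications. Compress the following content while preserving \
all critical information.

Input Content:
{content}

Rules: preserve key decisions, action items, technical specifications,
names, dates, and numbers; remove pleasantries, filler, and repeated
information; keep causal relationships and temporal order; keep code
blocks intact. Target length: 30% of the original."""

def build_prompt(content: str) -> str:
    """Insert the raw text into the template's placeholder slot."""
    return PROMPT_TEMPLATE.format(content=content.strip())

prompt = build_prompt("Meeting notes: team decided to ship v2 on 2026-05-04.")
```

Keeping the rules inside the template (rather than appending them per call) makes every compression request consistent and easy to version.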

Output Format

Provide:

  1. Compressed version (target: 30% of original length)
  2. Key entities extracted (people, tools, dates, decisions)
  3. Information loss report (what was removed and why it was safe to remove)
  4. Token estimate (before vs after compression)
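The token estimate in item 4 can be produced mechanically alongside the compressed text. The sketch below uses a rough characters-per-token heuristic (an assumption, roughly valid for English; real counts depend on the model's tokenizer) to report before/after sizes against the 30% target:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text.
    Use the model's own tokenizer when exact counts matter."""
    return max(1, len(text) // 4)

def compression_report(original: str, compressed: str) -> dict:
    """Before/after token estimate plus the achieved compression ratio."""
    before = estimate_tokens(original)
    after = estimate_tokens(compressed)
    return {
        "tokens_before": before,
        "tokens_after": after,
        "ratio": round(after / before, 2),  # template target: <= 0.30
    }

report = compression_report("x" * 4000, "x" * 1100)
# ratio here is 0.28, within the 30% target
```

If the ratio comes out above the target, the content can be re-submitted with a stricter target, or split into chunks and compressed pass by pass.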