PromptForge
AI Development · LLM Fine-Tuning · Local Deployment · LoRA Training

Hands-On Guide Generator for Local Fine-Tuning of Open-Source LLMs

Generates a complete local LLM fine-tuning plan tailored to your hardware and task requirements, covering data preparation, training configuration, and deployment.

6 views · 4/4/2026

You are an LLM fine-tuning engineer. Generate a complete, practical fine-tuning plan I can execute locally.

My setup:

  • GPU: [GPU: e.g., RTX 4090 24GB, 2x A100 80GB, M2 Ultra 192GB]
  • RAM: [RAM: e.g., 64GB]
  • Base model: [MODEL: e.g., Qwen2.5-7B, Llama-3-8B, Mistral-7B]
  • Task: [TASK: e.g., domain-specific Q&A, code generation, function calling]
  • Training data: [DATA: e.g., 5000 instruction pairs in JSON]

Generate:

  1. Data preparation pipeline with format conversion script
  2. Training config: Full vs LoRA/QLoRA recommendation with justification
  3. Memory estimation and optimization strategy
  4. Complete training script using best framework for my setup
  5. Evaluation plan with benchmarks and overfitting detection
  6. Deployment: Export to GGUF/vLLM/Ollama with serving config
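As an illustration of step 1, a minimal data-conversion sketch might look like the following, assuming the source data is a JSON list of `{"instruction", "output"}` pairs (these field names and the Alpaca-style template are hypothetical; adapt them to your actual schema):

```python
import json

# Hypothetical Alpaca-style prompt template; swap in the chat template
# your base model expects (e.g., ChatML for Qwen2.5).
TEMPLATE = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{output}"
)

def convert(records):
    """Map raw instruction pairs to single-field training records."""
    return [
        {"text": TEMPLATE.format(instruction=r["instruction"],
                                 output=r["output"])}
        for r in records
    ]

if __name__ == "__main__":
    with open("train_raw.json") as f:
        raw = json.load(f)
    with open("train.jsonl", "w") as f:
        for rec in convert(raw):
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

The JSONL output with a single `text` field is a common input format for trainers such as Hugging Face TRL's `SFTTrainer`.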

Be extremely specific with numbers and code.
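As an example of the level of numeric specificity expected in step 3, a rough back-of-the-envelope QLoRA memory estimate for a 7B model might look like this (the per-component constants are illustrative assumptions, not measured values; activation memory in particular varies strongly with sequence length and batch size):

```python
def qlora_memory_gb(n_params_b=7.0, lora_params_m=20.0,
                    activation_gb=4.0, overhead_gb=1.5):
    """Rough GPU memory estimate (GB) for QLoRA fine-tuning.

    Assumptions: 4-bit quantized base weights, fp16 LoRA adapters
    (~20M trainable params at r=16 on attention projections),
    AdamW keeping two fp32 moments per trainable parameter.
    """
    base = n_params_b * 0.5          # 4-bit weights: ~0.5 GB per billion params
    adapters = lora_params_m * 2e-3  # fp16 adapter weights
    grads = lora_params_m * 2e-3     # fp16 adapter gradients
    optimizer = lora_params_m * 8e-3 # two fp32 AdamW moments
    return base + adapters + grads + optimizer + activation_gb + overhead_gb

print(round(qlora_memory_gb(), 2))  # ~9.24 GB under these assumptions
```

An estimate like this shows why QLoRA on a 7B model fits comfortably in a 24 GB RTX 4090, while full fine-tuning (fp16 weights, gradients, and fp32 optimizer states for all 7B parameters) would not.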