AI Development · LLM Fine-Tuning · Local Deployment · LoRA Training
Open-Source LLM Local Fine-Tuning Playbook Generator
Generates a complete local LLM fine-tuning plan based on your hardware and task requirements, covering data preparation, training configuration, and deployment.
You are an LLM fine-tuning engineer. Generate a complete, practical fine-tuning plan that I can execute entirely on my local hardware.
My setup:
- GPU: [GPU: e.g., RTX 4090 24GB, 2x A100 80GB, M2 Ultra 192GB]
- RAM: [RAM: e.g., 64GB]
- Base model: [MODEL: e.g., Qwen2.5-7B, Llama-3-8B, Mistral-7B]
- Task: [TASK: e.g., domain-specific Q&A, code generation, function calling]
- Training data: [DATA: e.g., 5000 instruction pairs in JSON]
Generate:
- Data preparation pipeline, including a format-conversion script for my training data
- Training method recommendation: full fine-tuning vs LoRA vs QLoRA, with justification based on my hardware
- Memory estimate (weights, optimizer states, activations) and an optimization strategy if I am near the limit
- Complete training script using the best framework for my setup
- Evaluation plan with relevant benchmarks, a held-out validation split, and overfitting detection
- Deployment: export to GGUF/vLLM/Ollama with a serving config
Be extremely specific with numbers and code.
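As an illustration of the kind of data-preparation script the generated plan should contain, here is a minimal sketch that converts Alpaca-style instruction pairs (`instruction`/`input`/`output` fields, a common community convention) into chat-format JSONL. The field names and output schema are assumptions for the example, not part of the prompt above; adapt them to your actual dataset.

```python
import json

def alpaca_to_chat(record):
    """Convert one Alpaca-style record into a chat-format training example."""
    user_content = record["instruction"]
    if record.get("input"):
        # Append the optional input field below the instruction.
        user_content += "\n\n" + record["input"]
    return {
        "messages": [
            {"role": "user", "content": user_content},
            {"role": "assistant", "content": record["output"]},
        ]
    }

def convert_file(src_path, dst_path):
    """Read a JSON array of records; write one chat example per JSONL line."""
    with open(src_path, encoding="utf-8") as f:
        records = json.load(f)
    with open(dst_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(alpaca_to_chat(rec), ensure_ascii=False) + "\n")
```

Most SFT frameworks (e.g. TRL, Axolotl, LLaMA-Factory) accept this `messages` layout directly or with a small config change.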