Fine-Tuning
Fine-tuning is training a pre-trained model (e.g., an LLM) on additional, task-specific data so it performs better on that task or domain. It typically requires labeled data and compute. DocLD’s extraction and RAG are designed to work well with zero-shot and few-shot prompting rather than requiring fine-tuning.
When Fine-Tuning Is Used
- Domain adaptation — Improve accuracy for a narrow domain (e.g., legal, medical).
- Structured output — Train the model to follow a specific output format.
- Latency/cost — Smaller fine-tuned models can sometimes match larger base models for a fixed task.
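As a sketch of what fine-tuning data for structured output can look like, assuming an OpenAI-style chat-format JSONL file (the field names and invoice example below are illustrative, not a DocLD format), each record pairs an input with the exact output the model should learn to produce:

```python
import json

# Hypothetical training examples for structured-output fine-tuning:
# each record pairs a document snippet with the exact JSON the model
# should emit. Schema and wording are illustrative only.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Extract invoice fields as JSON."},
            {"role": "user", "content": "Invoice #1042, due 2024-03-01, total $950.00"},
            {"role": "assistant", "content": json.dumps(
                {"invoice_number": "1042", "due_date": "2024-03-01", "total": 950.00})},
        ]
    },
]

# Fine-tuning APIs commonly expect one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(e) for e in examples)
```

In practice, hundreds to thousands of such labeled records are needed before fine-tuning reliably beats a well-prompted base model, which is part of why prompting is often tried first.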
DocLD focuses on prompt engineering, schemas, and ground truth to improve extraction and chat without fine-tuning the underlying LLM.
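By contrast, the few-shot approach puts labeled examples into the prompt at inference time instead of into training data. A minimal sketch of assembling such a prompt, with a hypothetical helper and example wording:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task instructions, then worked
    input/output pairs, then the new input to extract from."""
    parts = [task]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    task="Extract the total amount as JSON.",
    examples=[
        ("Invoice #1042, total $950.00", '{"total": 950.00}'),
        ("Receipt: coffee $4.50", '{"total": 4.50}'),
    ],
    query="Invoice #2210, total $1,200.00",
)
```

Because the examples live in the prompt, they can be changed per request without any retraining, which is the trade-off this page contrasts with fine-tuning.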
Related Concepts
Fine-tuning is an alternative to few-shot and zero-shot extraction. Inference uses the trained (or base) model; prompt engineering shapes behavior without retraining.