15/11, 11:40 HS @ Auditorio Nationale Nederlander · Track: Data Science / AI
Training a brain: getting the RAG ready for an AI that knows what it's saying?
Speaker: Daniele Mario Areddu
Over the past few months, I've been working on a project where I needed a generative model to respond accurately within a very specific domain. The goal wasn't to generate generic text, but to obtain coherent, relevant responses grounded in reliable content. In this talk, I'll share what I've learned by comparing two approaches: fine-tuning, where the model is retrained on a custom dataset, and RAG (Retrieval-Augmented Generation), which enriches responses by retrieving information from an external knowledge base.

We'll discuss:
• When to fine-tune and when it's better to opt for a RAG approach instead.
• An overview of the workflow for both approaches: from dataset preparation to cloud infrastructure and Python scripts.
• The costs, complexities, and advantages of each option.
• How the two approaches differ in control over content, updateability, and interpretability of responses.
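To give a flavor of the RAG side of the comparison, the core retrieve-then-augment step can be sketched in a few lines of Python. This is a toy illustration, not the speaker's implementation: the documents, the bag-of-words scoring, and the prompt template are all illustrative stand-ins, and a production system would use vector embeddings and a vector store instead of word-overlap similarity.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# Toy example: real systems use embeddings + a vector database.
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector as a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the user question with retrieved context before calling the model."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge base for illustration.
docs = [
    "Fine-tuning retrains the model weights on a custom dataset.",
    "RAG retrieves passages from an external knowledge base at query time.",
    "Cloud infrastructure costs vary between training and inference.",
]

print(build_prompt("How does RAG find relevant information?", docs))
```

The grounding in "reliable content" happens in `build_prompt`: the model only ever sees the retrieved passages, which is also what makes the knowledge base updateable without retraining.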