Pinned · Published in Decoding ML: An End-to-End Framework for Production-Ready LLM Systems by Building Your LLM Twin. From data gathering to productionizing LLMs using LLMOps good practices. (Mar 16)
Pinned · Published in Decoding ML: The LLMs kit: Build a production-ready real-time financial advisor system using streaming… Lesson 1: LLM architecture system design using the 3-pipeline pattern. (Jan 5)
Pinned · Published in Towards Data Science: A Framework for Building a Production-Ready Feature Engineering Pipeline. Lesson 1: Batch Serving. Feature Stores. Feature Engineering Pipelines. (Apr 28, 2023)
Published in Decoding ML: The Ultimate Prompt Monitoring Pipeline. Master the monitoring of complex traces and evaluation in production. (1d ago)
Published in Decoding ML: Beyond Proof of Concept: Building RAG Systems That Scale. A hands-on guide to architecting production LLM inference pipelines with AWS SageMaker. (Nov 18)
Published in Decoding ML: The Engineer’s Framework for LLM & RAG Evaluation. Stop guessing if your LLM works: a hands-on guide to measuring what matters. (Nov 18)
Published in Decoding ML: 8B Parameters, 1 GPU, No Problems: The Ultimate LLM Fine-tuning Pipeline. Master production-ready fine-tuning with AWS SageMaker, Unsloth, and MLOps best practices. (Nov 18)
Published in Decoding ML: Turning Raw Data Into Fine-Tuning Datasets. How to automatically generate instruction datasets for fine-tuning LLMs on custom data. (Nov 18)
Published in Decoding ML: I Replaced 1000 Lines of Polling Code with 50 Lines of CDC Magic. The MongoDB + RabbitMQ stack that's revolutionizing LLM data pipelines. (Nov 18)
Published in Decoding ML: Your Content is Gold: I Turned 3 Years of Blog Posts into an LLM Training. A practical guide to building custom instruction datasets for fine-tuning LLMs. (Nov 18)