Pinned. Published in Decoding ML
An End-to-End Framework for Production-Ready LLM Systems by Building Your LLM Twin
From data gathering to productionizing LLMs using LLMOps good practices.
Mar 16, 2024

Pinned. Published in Decoding ML
The LLMs kit: Build a production-ready real-time financial advisor system using streaming…
Lesson 1: LLM architecture system design using the 3-pipeline pattern
Jan 5, 2024

Pinned. Published in Towards Data Science
A Framework for Building a Production-Ready Feature Engineering Pipeline
Lesson 1: Batch Serving. Feature Stores. Feature Engineering Pipelines.
Apr 28, 2023

Published in Decoding ML
The Ultimate Prompt Monitoring Pipeline
Master monitoring complex traces and evaluation while in production
Nov 30, 2024

Published in Decoding ML
Beyond Proof of Concept: Building RAG Systems That Scale
A hands-on guide to architecting production LLM inference pipelines with AWS SageMaker
Nov 18, 2024

Published in Decoding ML
The Engineer's Framework for LLM & RAG Evaluation
Stop guessing if your LLM works: A hands-on guide to measuring what matters
Nov 18, 2024

Published in Decoding ML
8B Parameters, 1 GPU, No Problems: The Ultimate LLM Fine-tuning Pipeline
Master production-ready fine-tuning with AWS SageMaker, Unsloth, and MLOps best practices
Nov 18, 2024

Published in Decoding ML
Turning Raw Data Into Fine-Tuning Datasets
How to automatically generate instruction datasets for fine-tuning LLMs on custom data
Nov 18, 2024

Published in Decoding ML
I Replaced 1000 Lines of Polling Code with 50 Lines of CDC Magic
The MongoDB + RabbitMQ stack that's revolutionizing LLM data pipelines
Nov 18, 2024

Published in Decoding ML
Your Content is Gold: I Turned 3 Years of Blog Posts into an LLM Training
A practical guide to building custom instruction datasets for fine-tuning LLMs
Nov 18, 2024