Pinned · Published in Decoding ML · An End-to-End Framework for Production-Ready LLM Systems by Building Your LLM Twin · From data gathering to productionizing LLMs using LLMOps good practices. · Mar 16
Pinned · Published in Decoding ML · The LLMs kit: Build a production-ready real-time financial advisor system using streaming… · Lesson 1: LLM architecture system design using the 3-pipeline pattern · Jan 5
Pinned · Published in Towards Data Science · A Framework for Building a Production-Ready Feature Engineering Pipeline · Lesson 1: Batch Serving. Feature Stores. Feature Engineering Pipelines. · Apr 28, 2023
Published in Decoding ML · Beyond Proof of Concept: Building RAG Systems That Scale · A hands-on guide to architecting production LLM inference pipelines with AWS SageMaker · 2d ago
Published in Decoding ML · The Engineer’s Framework for LLM & RAG Evaluation · Stop guessing if your LLM works: A hands-on guide to measuring what matters · 2d ago
Published in Decoding ML · 8B Parameters, 1 GPU, No Problems: The Ultimate LLM Fine-tuning Pipeline · Master production-ready fine-tuning with AWS SageMaker, Unsloth, and MLOps best practices · 2d ago
Published in Decoding ML · Turning Raw Data Into Fine-Tuning Datasets · How to automatically generate instruction datasets for fine-tuning LLMs on custom data · 2d ago
Published in Decoding ML · I Replaced 1000 Lines of Polling Code with 50 Lines of CDC Magic · The MongoDB + RabbitMQ stack that's revolutionizing LLM data pipelines · 2d ago
Published in Decoding ML · Your Content is Gold: I Turned 3 Years of Blog Posts into an LLM Training · A practical guide to building custom instruction datasets for fine-tuning LLMs · 2d ago
Published in Decoding ML · Connecting the dots in data and AI systems · Simplifying MLE & MLOps with the FTI Architecture · Oct 31