LLM Routing

In this blog, we will learn about LLM Routing, why it matters, and how to send each user query to the right LLM based on cost, latency, and quality.

Context Engineering

In this blog, we will learn about Context Engineering - what it is, why it has become the most important skill for building reliable AI applications, how it differs from Prompt Engineering, the components that make up the context, common patterns like RAG, few-shot examples, tools, and memory, and the best practices and common mistakes to keep in mind.

Reflection Agent

In this blog, we will learn about the Reflection Agent - what it is, how it is built, its anatomy, how it generates, critiques, and revises its own work, and how to handle its common failure modes.

Speculative Decoding

In this blog, we will learn about Speculative Decoding - what it is, why LLM generation is slow without it, how a small draft model and a big target model work together to produce tokens faster, the rejection sampling math that guarantees no quality loss, real numbers showing the 2x to 3x speedup, where it is used in production, and the trade-offs to watch out for.
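The rejection-sampling rule mentioned above is compact enough to preview here. A minimal Python sketch of the acceptance step (function and variable names are illustrative, not from the blog): a draft token is kept with probability min(1, p_target / q_draft).

```python
import random

def accept_draft_token(p_target: float, q_draft: float, rng: random.Random) -> bool:
    """Speculative decoding's acceptance rule: keep the draft model's token
    with probability min(1, p_target / q_draft), where p_target and q_draft
    are the probabilities the target and draft models assign to that token."""
    return rng.random() < min(1.0, p_target / q_draft)

# When the target model is at least as confident as the draft,
# the token is always kept - agreement costs nothing.
always_kept = accept_draft_token(p_target=0.4, q_draft=0.2, rng=random.Random(0))
```

This is what makes the speedup "free" in quality terms: accepted tokens follow the target model's distribution exactly.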

GraphRAG

In this blog, we will learn about GraphRAG and how it improves retrieval by using a knowledge graph along with vector search.

Plan-and-Execute Agent

In this blog, we will learn about the Plan-and-Execute Agent - what it is, its anatomy, how it plans and runs the steps, how it differs from a ReAct Agent, and how to handle its common failure modes.

Agentic RAG

In this blog, we will learn about Agentic RAG - what it is, why standard RAG falls short, the agentic RAG loop, the three building blocks, the common patterns, when to use it, and the limitations to keep in mind.

ReAct Agent

In this blog, we will learn about the ReAct Agent - what it is, how it is built, its anatomy, how it thinks and acts, and how to handle its common failure modes.

Multi-Agent Systems

In this blog, we will learn about Multi-Agent Systems - what they are, the three pillars that hold them together, the common agent roles, how agents communicate and coordinate, the trade-offs, and when to use them.

AI Agent Memory

In this blog, we will learn about AI Agent Memory - why agents need it, the memory stack, the four core operations (write, read, update, forget), how memory flows at runtime, and the common mistakes.

AI Agent Explained

In this blog, we will learn about the AI Agent - what it is, how it is different from a plain LLM, its five core parts, how it works end to end, the main types, and when to use one.

RMSNorm (Root Mean Square Layer Normalization)

In this blog, we will learn about RMSNorm, a faster and simpler alternative to Layer Normalization that powers most modern Large Language Models like Llama, Mistral, Gemma, Qwen, PaLM, and DeepSeek.

Decoding DeepSeek-V4

In this blog, we will learn about DeepSeek-V4, the new family of open Mixture-of-Experts language models that natively supports a one-million-token context with dramatically lower inference cost.

LoRA - Low-Rank Adaptation of LLMs

In this blog, we will learn about LoRA - Low-Rank Adaptation of Large Language Models, a parameter-efficient fine-tuning technique that freezes the pretrained weights and trains small low-rank matrices instead.

Math Behind RoPE (Rotary Position Embedding)

In this blog, we will learn about the math behind Rotary Position Embedding (RoPE) and why it is used in modern Large Language Models.

Grouped Query Attention

In this blog, we will learn about Grouped-Query Attention (GQA) and how it differs from Multi-Head Attention (MHA).

Math Behind Cross-Entropy Loss

In this blog, we will learn about the math behind Cross-Entropy Loss with a step-by-step numeric example.
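As a tiny preview of the numeric-example style (illustrative numbers, not the blog's): for a single example, cross-entropy is just the negative log of the probability the model gave the true class.

```python
import math

def cross_entropy(probs, target_index):
    """Cross-entropy loss for one example: -log of the probability
    the model assigned to the true class."""
    return -math.log(probs[target_index])

# A confident, correct prediction gives a small loss ...
low_loss = cross_entropy([0.7, 0.2, 0.1], target_index=0)
# ... while the same prediction with the wrong true class is punished hard.
high_loss = cross_entropy([0.7, 0.2, 0.1], target_index=2)
```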

Math Behind Gradient Descent

In this blog, we will learn about the math behind gradient descent with a step-by-step numeric example.
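The update rule itself fits in a few lines. A minimal sketch (toy example, not the blog's code): repeatedly step against the gradient, x ← x − lr · ∇f(x).

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient: x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = x^2, whose gradient is 2x; the minimum sits at x = 0.
x_min = gradient_descent(lambda x: 2 * x, x0=5.0)
```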

Feed-Forward Networks in LLMs

In this blog, we will learn about Feed-Forward Networks in LLMs - understanding what they are, how they work inside the Transformer architecture, why every Transformer layer needs one, and what role they play in making Large Language Models so powerful.

Decoding Flash Attention in LLMs

In this blog, we will learn about Flash Attention by decoding it piece by piece - understanding why standard attention is slow, what makes Flash Attention fast, how it uses GPU memory cleverly, and why it is used in almost every modern Large Language Model (LLM).

Mixture of Experts Explained

In this blog, we will learn about the Mixture of Experts (MoE) architecture - understanding what experts are, how the router picks them, why MoE makes large models faster and cheaper, and why it powers many of today's most powerful Large Language Models (LLMs).

Decoding Transformer Architecture

In this blog, we will learn about the Transformer architecture by decoding it piece by piece - understanding what each component does, how they work together, and why this architecture powers every modern Large Language Model (LLM).

Math Behind Backpropagation

In this blog, we will learn about the math behind backpropagation in neural networks.

Math behind √dₖ Scaling Factor in Attention

In this blog, we will learn about why we scale the dot product attention by √dₖ in the Transformer architecture with a step-by-step numeric example.
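A quick empirical taste of the effect the blog derives (toy Monte Carlo, not the blog's code): with unit-variance q and k, raw dot products have variance ≈ dₖ, and dividing by √dₖ brings it back to ≈ 1, keeping the softmax out of its saturated region.

```python
import random
import statistics

def dot_product_variance(d_k, scale, n_samples=5000, seed=0):
    """Empirical variance of q . k for random unit-variance vectors,
    optionally divided by sqrt(d_k) as in scaled dot-product attention."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        dot = sum(rng.gauss(0, 1) * rng.gauss(0, 1) for _ in range(d_k))
        samples.append(dot / d_k ** 0.5 if scale else dot)
    return statistics.variance(samples)

raw = dot_product_variance(64, scale=False)      # ~64
scaled = dot_product_variance(64, scale=True)    # ~1
```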

Math behind Attention - Q, K, and V

In this blog, we will learn about the math behind Attention - Query(Q), Key(K), and Value(V) with a step-by-step numeric example.
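For flavor, the whole computation for a single query fits in plain Python (a tiny 2-dimensional example with made-up numbers, not the blog's worked example):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query: score every key,
    softmax the scores, then blend the values with those weights."""
    d_k = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query matches the first key more closely, so the output
# leans toward the first value vector.
out = attention([1.0, 0.0], keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[1.0, 0.0], [0.0, 1.0]])
```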

Harness Engineering in AI

In this blog, we will learn about Harness Engineering in AI.

Byte Pair Encoding in LLMs

In this blog, we will learn about BPE (Byte Pair Encoding) - the tokenization algorithm used by most modern Large Language Models (LLMs) to break text into smaller pieces before processing it.
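The core merge step is small enough to sketch here (a toy implementation for illustration, not a production tokenizer): find the most frequent adjacent pair, then fuse every occurrence into one token, and repeat.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Find the adjacent pair occurring most often - the pair BPE merges next."""
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# One merge step on character-level tokens:
tokens = list("aaabaa")
best = most_frequent_pair(tokens)    # ('a', 'a')
tokens = merge_pair(tokens, best)    # ['aa', 'a', 'b', 'aa']
```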

Paged Attention in LLMs

In this blog, we will learn about Paged Attention, a technique that solves the memory waste problem of KV Cache, allowing LLMs to serve many more users at the same time.

KV Cache in LLMs

In this blog, we will learn about KV Cache - where K stands for Key and V stands for Value - and why it is used in Large Language Models (LLMs) to speed up text generation.
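The saving is easy to quantify with back-of-the-envelope accounting (a toy sketch, not the blog's numbers): without a cache, each generation step re-projects the entire prefix into K and V; with one, only the newest token is projected.

```python
def kv_projections(n_tokens, use_cache):
    """K/V projections computed while generating n_tokens tokens."""
    if use_cache:
        # The cache stores every past token's K and V, so each step
        # projects only the one new token.
        return n_tokens
    # Without a cache, step t recomputes K and V for all t tokens so far.
    return n_tokens * (n_tokens + 1) // 2

saved = kv_projections(100, use_cache=False) - kv_projections(100, use_cache=True)
```

The quadratic term is why generation without a cache slows down as the sequence grows.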

Causal Masking in Attention

In this blog, we will learn about Causal Masking in Attention - how it prevents each token from attending to future tokens during training and generation.