<?xml version="1.0" encoding="UTF-8"?>
  <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      <title>Outcome School | Get High Paying Tech Job</title>
      <link>https://outcomeschool.com/blog</link>
      <description>Software engineers like you join Outcome School to achieve the outcome: a high-paying tech job.</description>
      <language>en-us</language>
      <managingEditor>teamoutcomeschool@gmail.com (Outcome School)</managingEditor>
      <webMaster>teamoutcomeschool@gmail.com (Outcome School)</webMaster>
      <lastBuildDate>Mon, 20 Apr 2026 00:00:00 GMT</lastBuildDate>
      <atom:link href="https://outcomeschool.com/tags/math/feed.xml" rel="self" type="application/rss+xml"/>
      
      <item>
        <guid>https://outcomeschool.com/blog/math-behind-cross-entropy-loss</guid>
        <title>Math Behind Cross-Entropy Loss</title>
        <link>https://outcomeschool.com/blog/math-behind-cross-entropy-loss</link>
        <description>In this blog, we will learn about the math behind Cross-Entropy Loss with a step-by-step numeric example.</description>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <author>teamoutcomeschool@gmail.com (Outcome School)</author>
        <category>math</category><category>llm</category><category>ai</category><category>machine-learning</category>
      </item>

      <item>
        <guid>https://outcomeschool.com/blog/math-behind-gradient-descent</guid>
        <title>Math Behind Gradient Descent</title>
        <link>https://outcomeschool.com/blog/math-behind-gradient-descent</link>
        <description>In this blog, we will learn about the math behind gradient descent with a step-by-step numeric example.</description>
        <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
        <author>teamoutcomeschool@gmail.com (Outcome School)</author>
        <category>math</category><category>llm</category><category>ai</category><category>machine-learning</category>
      </item>

      <item>
        <guid>https://outcomeschool.com/blog/math-behind-backpropagation</guid>
        <title>Math Behind Backpropagation</title>
        <link>https://outcomeschool.com/blog/math-behind-backpropagation</link>
        <description>In this blog, we will learn about the math behind backpropagation in neural networks.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate>
        <author>teamoutcomeschool@gmail.com (Outcome School)</author>
        <category>math</category><category>llm</category><category>ai</category><category>machine-learning</category>
      </item>

      <item>
        <guid>https://outcomeschool.com/blog/scaling-dot-product-attention</guid>
        <title>Math Behind the √dₖ Scaling Factor in Attention</title>
        <link>https://outcomeschool.com/blog/scaling-dot-product-attention</link>
        <description>In this blog, we will learn why we scale dot-product attention by √dₖ in the Transformer architecture, with a step-by-step numeric example.</description>
        <pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate>
        <author>teamoutcomeschool@gmail.com (Outcome School)</author>
        <category>math</category><category>llm</category><category>ai</category><category>machine-learning</category>
      </item>

      <item>
        <guid>https://outcomeschool.com/blog/math-behind-attention-qkv</guid>
        <title>Math Behind Attention - Q, K, and V</title>
        <link>https://outcomeschool.com/blog/math-behind-attention-qkv</link>
        <description>In this blog, we will learn about the math behind Attention - Query (Q), Key (K), and Value (V) - with a step-by-step numeric example.</description>
        <pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate>
        <author>teamoutcomeschool@gmail.com (Outcome School)</author>
        <category>math</category><category>llm</category><category>ai</category><category>machine-learning</category>
      </item>

    </channel>
  </rss>
