Recent Posts
- Exploring “Linear” in Linear Regression – Linear regression is one of those things you learn early, use forever, and never quite slow down to inspect. So here’s a slow inspection…
- The curious case of R-Squared: Keep Guessing – Most explanations of R-squared start with a formula, say something like “the proportion of variance explained by the model,” and move on. And…
- [C1] What Machines Actually Do (And What They Don’t) – Every time you use Google Maps at 5:30 PM, something remarkable happens — and it has nothing to do with intelligence. The app doesn’t “know”…
- [ML x] Machine Decision: From One Tree to a Forest – Every time a bank approves or denies a loan in milliseconds, every time Netflix decides what to recommend next, every time a fraud detection system…
- [MI 3] Seq2Seq Models: Basics behind LLMs – When you use Google Translate to turn a complex English sentence into Spanish, or when you ask Gemini to summarize a long email, the computer…
- [MU 1] Advertising in the Age of AI – When you search for a product today, ads quietly shape what you notice. When you scroll Instagram, ads compete for slices of your attention. For…
- [MI 1] An Intuitive Guide to CNNs and RNNs – When your phone recognizes “Hey Siri,” a CNN is probably listening. When Google Translate converts your sentence into French, an RNN (or its descendants) is…
- How Smart Vector Search Works
- [PET 1.c] Privacy Enhancing Technologies (PETs) — Part 3 – Privacy-Preserving Computation and Measurement. In Part 1, we covered how organizations protect data internally — minimization, anonymization, query controls, and differential privacy. In Part 2,…
- [PET 1.b] Privacy Enhancing Technologies (PETs) — Part 2 – Secure Collaboration Without Sharing Raw Data. In Part 1, we covered how individual organizations protect data internally — minimization, anonymization, query controls, and differential privacy.…
- [PET 1.a] Privacy Enhancing Technologies (PETs) — Part 1 – How Your Data Gets Protected. Every time you browse a website, click an ad, or make a purchase, data flows through dozens of systems. Companies…
- [PET 1] Privacy Enhancing Technologies – Introduction – Every time you browse a website, click an ad, make a purchase, or train an ML model, data flows through systems. Companies need this data…
- [EN 1.b] Breaking the “Unbreakable” Encryption – 2 – In Part 1, we covered the “Safe” (Symmetric) and the “Mailbox” (Asymmetric). The TL;DR: we use high-speed symmetric safes to store our data, but we…
- [EN 1.a] Breaking the “Unbreakable” Encryption – 1 – If you’ve spent any time in tech, you’ve heard of AES, RSA, and Diffie-Hellman. We treat them like digital duct tape — they just work, they keep…
- [ML 2.c] Needle in the Haystack: Embedding Training and Context Rot – You’ve probably experienced this: you paste a 50-page document into ChatGPT or Claude, ask a specific question about something buried on page 37, and the…
- [ML 2.b] Measuring Meaning: Cosine Similarity – In the previous posts, we established that embeddings turn everything into points in space and that Word2Vec showed how to learn those points from context.…
- [ML 2.a] Word2Vec: Start of Dense Embeddings – When you type a search query into Google or ask Spotify to find “chill acoustic covers,” the system doesn’t just look for those exact letters.…
- [ML 2] Making Sense Of Embeddings – When you search on Amazon for “running shoes,” the system doesn’t just look for those exact words – it also shows you “jogging sneakers,” “athletic…
- [ML 1.b] Teaching AI Models: Gradient Descent – In the last post, we established the big idea: machine learning is about finding patterns from data instead of writing rules by hand. But we…
- [MI 2] How CNNs Actually Work
- [ML 1.a] ML Foundations – Linear Combinations to Logistic Regression – Every machine learning model — from simple house price predictors to neural networks with billions of parameters — starts with the same fundamental building block:…
- [ML 1] AI Paradigm Shift: From Rules to Patterns – Every piece of software you’ve ever shipped works the same way. A developer thinks through the logic and writes explicit rules — if the user…