Welcome to your weekly Brew & AI

Each week, I’ll share how to make sense of AI - no jargon, no hype, just simple insights you can actually use.

P.S. - To make sure you keep receiving this newsletter, please move it to your Primary tab so future emails land there, or just add this email address as a known contact. I wish AI could solve this for me.

Grab your coffee - let’s dive in. 👇

☕ AI in the news

There’s never a week without some “hot” AI news. Here we go again.

MSFT 🔽, GOOG 🔼: Wall Street made it clear this week how it feels about Big Tech’s AI progress. Microsoft stock took a tumble after reports that it lowered growth targets for its enterprise AI products once many sales teams missed their numbers. Google, on the other hand, has surged since releasing Gemini 3 and is now the market’s one and only favorite.

“Code Red” for OpenAI - OpenAI has been the undisputed leader in the AI space since the beginning. After close to three years, it seems like someone’s on the verge of changing that.
This week, OpenAI declared a “Code Red” as Google’s Gemini catches up, and sharply. Within a month of Gemini 3’s release, it has nearly matched ChatGPT’s monthly app downloads, likely thanks to the massive ecosystem Google already has at its disposal.
What this means - OpenAI will stop focusing on everything outside ChatGPT - browsers, ads, shopping and the like - and concentrate purely on making the chat experience seamless for users. As consumers, we win either way.

☕ Guides

This week, I’m trying out something new. I’ve finally had some time to redo parts of the newsletter and work on material I’ve been planning for a while.
I’ll be sharing one-page guides/cheat sheets with AI content. Save these and refer to them whenever you come across something you don’t quite understand.

For now, these are newsletter exclusives. They’ll be up on the website soon.

The first one is called the AI Vocabulary card: 20 terms everyone should know, each paired with a simple coffee analogy.

☕ This week’s blog

This week’s blog covers an aspect of LLMs that becomes critical the more you work with them - “context windows”.
Think of a context window as the short-term memory a model has to work with. It doesn’t remember every single thing you tell it, and it might not even remember what you said 15 minutes ago. Here’s why that happens and how to work around it.
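If you’re curious what that “short-term memory” looks like under the hood, here’s a rough Python sketch of one common workaround: trimming older messages so the conversation fits inside a fixed token budget. The helper names, the 4-characters-per-token estimate, and the budget number are all illustrative assumptions, not real model limits; actual apps use the model’s own tokenizer and documented context size.

```python
# A minimal sketch of keeping a conversation inside a context window.
# Assumptions for illustration only: ~4 characters per token, 8,000-token budget.

def estimate_tokens(text: str) -> int:
    # Very rough heuristic; real systems use the model's tokenizer.
    return max(1, len(text) // 4)

def trim_to_context_window(messages: list[str], budget_tokens: int = 8000) -> list[str]:
    # Walk backwards from the newest message, keeping as much recent history
    # as fits. Older messages fall out of "memory" first.
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# Example: with a tiny budget, only the most recent messages survive.
history = ["old message " * 50, "newer message " * 50, "latest question?"]
print(trim_to_context_window(history, budget_tokens=60))
```

That’s essentially why a chat can “forget” what you said 15 minutes ago: once the conversation outgrows the window, the oldest parts are simply no longer passed to the model.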

Hope you like this one - do leave a like once you’re done reading it. It helps me understand which concepts stick and how to plan ahead.

💛 P.S.

That’s it for this week’s brew.

I’d love to hear what you think - what you liked, what could be better, or what you’d love to see next.

Just hit reply - I read every message over my morning coffee ☕.

Brew & AI
Making AI simple, one sip at a time
