Finally, a reliable Claude Code.
Many people moved from Claude Code to Codex last month because Claude's services were frequently going down and users were hitting rate limits before completing any meaningful work. Finally, that might change.
Anthropic has signed a deal with SpaceX. Within a month, Anthropic will be able to use all of the computing capacity of the Colossus 1 data center. In case you’re wondering, it’s called Colossus because it’s a cluster of 220,000 NVIDIA GPUs.
Thanks to the SpaceX deal and other recent deals with compute providers, Anthropic has announced that it will double Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.

The most basic, free version of ChatGPT has gotten smarter and more reliable.
OpenAI announced GPT-5.5 Instant – its latest foundation model, which hallucinates less than previous fast models. The company has decided to make GPT-5.5 Instant the default ChatGPT model, and that’s good news for two reasons:
1) AI companies offer baseline models (like Gemini 3 fast, Claude Sonnet 4.6, and now GPT-5.5) for free only when the inference cost is low. The more effort companies put into baseline models, the better it is for developers who want intelligent models at a lower cost. Big companies can afford high prices, but for smaller companies and independent developers, SOTA models are still unaffordable.
Interestingly, the per-token API price of GPT-5.5 Instant is the same as that of GPT-5.5. The actual savings come from GPT-5.5 Instant doing little to no reasoning, and hence using fewer tokens per task.
2) A large part of the world cannot pay a $20/month AI subscription fee. And not having access to ‘intelligence’ creates disparity. But smarter and lighter models can level the playing field. GPT-5.5 Instant is another step towards leveling the playing field.
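To see why fewer tokens matter more than the per-token price (point 1 above), here is a toy cost calculation. All numbers – the per-token price and the token counts – are made up purely for illustration; they are not OpenAI's actual prices.

```python
# Toy illustration: same per-token price, very different per-task cost.
# The price and token counts below are hypothetical, not OpenAI's numbers.

PRICE_PER_MILLION_TOKENS = 10.00  # assumed identical for both models


def task_cost(output_tokens: int) -> float:
    """Cost of one task, given how many output tokens the model generates."""
    return output_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS


# A heavy-reasoning model emits a long chain of thought plus the answer;
# an "instant" model emits (almost) only the answer.
reasoning_cost = task_cost(4_000 + 500)  # hidden reasoning + visible answer
instant_cost = task_cost(500)            # answer only

print(f"reasoning: ${reasoning_cost:.4f}, instant: ${instant_cost:.4f}")
```

With these made-up numbers, the instant-style model is 9× cheaper per task even though both models charge exactly the same per token.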
Speaking of smart and light models, Google’s Gemma was in the news last week.
Google’s Gemma 4 AI models got ~3X faster.
Google calls its Gemma 4 series “byte for byte, the most capable open models.” So far, there is no reason to doubt Google’s claim. Last week, the series might also have become the fastest for its size.
LLMs normally produce one token at a time. Google is trying to change that with a technique called Multi-Token Prediction (MTP) drafters: a small draft model proposes several tokens at once, and the main model validates them. This improves inference speed without affecting the quality of the final output.
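To make the draft-and-verify idea concrete, here is a toy sketch in Python. The two "models" are hand-written lookup functions standing in for real networks, and the acceptance rule – keep the longest draft prefix the main model agrees with, then take one token from the main model itself – is the generic speculative-decoding recipe, not Google's exact MTP implementation.

```python
# Toy sketch of draft-and-verify decoding (the idea behind MTP drafters).
# Both "models" are deterministic next-token lookups over a tiny vocabulary,
# standing in for a cheap drafter and an expensive main model.

def draft_next(context):
    # Fast, cheap drafter: guesses the next token from the last one.
    guesses = {"the": "cat", "cat": "sat", "sat": "on", "on": "a"}
    return guesses.get(context[-1], "<eos>")


def target_next(context):
    # Slow, accurate main model: the output quality we must preserve.
    truth = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    return truth.get(context[-1], "<eos>")


def speculative_step(context, k=4):
    """Draft k tokens, keep the longest prefix the main model agrees with,
    then append one token from the main model so we always make progress."""
    draft, ctx = [], list(context)
    for _ in range(k):
        tok = draft_next(ctx)
        draft.append(tok)
        ctx.append(tok)

    accepted, ctx = [], list(context)
    for tok in draft:
        if target_next(ctx) == tok:   # in practice: one batched forward pass
            accepted.append(tok)
            ctx.append(tok)
        else:
            break
    accepted.append(target_next(ctx))  # main model's token after divergence
    return accepted


print(speculative_step(["the"]))  # → ['cat', 'sat', 'on', 'the']
```

Note that every emitted token is one the main model would have produced greedily on its own – the drafter only changes how many tokens we can confirm per expensive verification pass, not what gets generated.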
DeepSeek is raising funds for the first time.
It’s only been around two weeks since DeepSeek released the DeepSeek-V4 series, and the models have already become a favourite among programmers. I’ve seen dozens of posts praising the intelligence per dollar of the DeepSeek-V4 series, and none criticizing the models.
DeepSeek might soon raise its first venture capital round at a valuation of about $45 billion. There are two interesting things to notice here:
1) DeepSeek has never raised money. Right now, around 90% of the company is owned by Liang Wenfeng, the CEO and founder of a Chinese quantitative hedge fund.
2) Apart from the big 4-5 American AI labs, DeepSeek has impacted the world of LLMs the most. And yet, its valuation is only ~$45 billion?
Elon Musk vs. Sam Altman: A window into the psyche of AI leaders.
You probably already know about the ongoing lawsuit between Elon Musk and Sam Altman. I don’t want to delve into it much here, as the facts coming out of the proceedings are more gossip than tech. Had Chai With AGI been a tech gossip newsletter, I would have happily given you more details.
However, there is one thing I want to mention: do read a few articles about the proceedings, and the chats and emails between tech leaders, and you will see how fallible many of these ‘leaders’ are. Excerpts shared from the diary of Greg Brockman, the president of OpenAI, provide a particularly clear window into the psyche of these hotshot executives.
Meta is using bone structure analysis to identify kids.
Instagram and Facebook are now using AI to analyze facial bone structure in photos. If the AI determines that a user is under 13, it removes the account.
How interesting.
Children’s use of internet services ought to be regulated by parents and guardians. But for some reason, Meta is doing it instead. I don’t want to bash Meta here; there must be so many accounts used by children under 13 that Meta had to develop a technology for it. I just find it interesting that a large number of parents are okay with their under-13 children having Facebook and Instagram accounts.
