Microsoft favors Anthropic over OpenAI for Visual Studio Code
Microsoft favors Anthropic over OpenAI for Visual Studio Code by Tom Warren
It’s a tacit admission from Microsoft that the software maker is favoring Anthropic’s AI models over OpenAI’s latest GPT-5 models for coding and development. Sources familiar with Microsoft’s developer plans tell me that the company has been instructing its own developers to use Claude Sonnet 4 in recent months.
I have been in this boat myself: I have been using Claude (Pro) this past month.
I am mostly happy with it. I was about to buy a yearly subscription for it; there are some savings to it.
But then OpenAI updated Codex with a new release optimised for coding. So I thought, before pulling the trigger and subscribing for a year, let me give ChatGPT a go for a month as well.
Also, I miss Cursor's agents, and Codex does have agents. That might prove useful.
Emotional Agents
I read "Should AI flatter us, fix us, or just inform us?", the crux of which was that agents like ChatGPT should behave like machines, and it should be clear to us, the humans, that they are machines.
Emotional Agents by Kevin Kelly says it's a matter of when rather than if: emotional agents will be a selling point of these products. And the emotions won't be uni-directional; the machines will learn to read our emotions too, and act accordingly. It will be a relationship, after all.
Emotions in machines will not arrive overnight. The emotions will gradually accumulate, so we have time to steer them. They begin with politeness, civility, niceness. They praise and flatter us, easily, maybe too easily. The central concern is not whether our connection with machines will be close and intimate (they will), nor whether these relationships are real (they are), nor whether they will preclude human relationships (they won’t), but rather who does your emotional agent work for? Who owns it? What is it being optimized for? Can you trust it to not manipulate you? These are the questions that will dominate the next decade.
Should AI flatter us, fix us, or just inform us
Should AI flatter us, fix us, or just inform us? by James O'Donnell
Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged?
OpenAI launches cheaper ChatGPT Go subscription in India for ₹399 a month
What is ChatGPT Go? | OpenAI Help Center
ChatGPT Go is a new, low-cost subscription plan that provides expanded access to ChatGPT’s most popular features at an affordable price.
OpenAI launches GPT-5
OpenAI launched its new GPT-5 series models yesterday.
The main thing (as Sam Altman had foreshadowed some time back) is that there is no model picker. GPT decides which model to use based on a bunch of factors.
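Just to illustrate what that routing could mean, here is a toy sketch. This is purely my own guess at the shape of the idea; OpenAI has not published how their router actually works, and the heuristics and model names below are placeholders of mine:

```python
# A toy sketch of automatic model routing. The heuristics and the
# model names ("gpt-5-main", "gpt-5-thinking") are my placeholders,
# not OpenAI's actual routing logic.

def route(prompt: str) -> str:
    """Pick a model tier based on crude signals in the prompt."""
    reasoning_hints = ("prove", "step by step", "plan", "debug")
    wants_reasoning = any(kw in prompt.lower() for kw in reasoning_hints)
    if wants_reasoning or len(prompt) > 2000:
        return "gpt-5-thinking"  # slower, more deliberate tier
    return "gpt-5-main"          # fast default tier

print(route("What's the capital of France?"))      # -> gpt-5-main
print(route("Prove that sqrt(2) is irrational."))  # -> gpt-5-thinking
```

The real router presumably uses far richer signals, but the point stands: the user no longer chooses, the system does.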
Simon Willison has a nice write-up about the model here. I have just started using it myself. I think I prefer Claude, personally, but your mileage may vary.
And now for a little story. Copilot was one of the first products I used, mainly because it had generous free tier limits. But I got frustrated with it soon enough. It just did not give me good enough answers, and I had no way to select, or even know, which model was giving the answer.
So now you know how I feel about them removing the model picker.
There have been two sets of reviews I have read about GPT-5.
The first set really likes it, like this review by Ethan Mollick:
I asked GPT-5 Thinking (I trust the less powerful GPT-5 models much less) “generate 10 startup ideas for a former business school entrepreneurship professor to launch, pick the best according to some rubric, figure out what I need to do to win, do it.” I got the business idea I asked for. I also got a whole bunch of things I did not: drafts of landing pages and LinkedIn copy and simple financials and a lot more. I am a professor who has taught entrepreneurship (and been an entrepreneur) and I can say confidently that, while not perfect, this was a high-quality start that would have taken a team of MBAs a couple hours to work through. From one prompt.
The other set says that this begins the enshittification of consumer AI chat products.
The noise on Reddit and elsewhere was so loud that OpenAI had to bring back GPT-4o as an option, because people missed it.
For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.
Study mode in ChatGPT
Today we’re introducing study mode in ChatGPT—a learning experience that helps you work through problems step by step instead of just getting an answer. Starting today, it’s available to logged in users on Free, Plus, Pro, Team, with availability in ChatGPT Edu coming in the next few weeks.
I tried it, asking it to teach me about typography.
System prompts are important, and this whole feature is built just with prompts!
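To make that concrete, here is a minimal sketch of the idea using the OpenAI Python client. The system prompt and the model name below are my own placeholders, not OpenAI's actual study-mode prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A made-up stand-in for OpenAI's (unpublished) study mode prompt.
STUDY_MODE = (
    "You are a patient tutor. Never give the full answer outright. "
    "Break the topic into small steps, ask the student a question at "
    "each step, and wait for their reply before moving on."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model would work here
    messages=[
        {"role": "system", "content": STUDY_MODE},
        {"role": "user", "content": "Teach me the basics of typography."},
    ],
)
print(response.choices[0].message.content)
```

Swap the system prompt and the same model behaves like a different product. That is the whole trick.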
The real demon inside ChatGPT
Reporters from The Atlantic recently got ChatGPT to tell them about blood rituals. People continue to misidentify what these tools are. But that's not their fault. These tools are just so good at sounding authoritative.
This post in particular talks about the contexts of the data these models were trained on, and how, removed from those contexts, the same words can appear more or less horrific than they did in the original context.
It was a refreshing, new perspective.
But ChatGPT and similar programs weren’t just trained on the internet—they were trained on specific pieces of information presented in specific contexts. AI companies have been accused of trying to downplay this reality to avoid copyright lawsuits and promote the utility of their products, but traces of the original sources are often still lurking just beneath the surface. When the setting and backdrop are removed, however, the same language can appear more sinister than originally intended.