A week of AI news

Last week was another big week for AI. It feels like every week is a big week for AI news.

OpenAI announced Operator, its agentic system that can do things on the web, which spurred a bunch of existential dread and many, many articles.

DeepSeek’s models led to a drop in the valuations of a bunch of tech companies.

I read a bunch of these posts and was going to write about them, but then I wrote about messy households instead.


My thoughts on LLMs have not changed much in the past few months. There is no moat. Hence this week’s loss in valuations makes sense. These technologies will become more efficient, more commonplace, and more useful. Companies that already have the data, or control the devices people use, will have an advantage. But in the end, for the common person, these will be commodities.

There was a post from Seth about Trusting AI this week that talked about this a bit: the things you can trust an AI with and the things you can't. I believe we will get used to it as the technology reaches its zenith.


For developers and people like me who work in IT, the situation will be slightly different.

Depending on who you ask, there are two visions of how things will end up.

The doomsday scenario is this: AI will eat everyone’s lunch. Nobody needs developers. Lay everyone off. Ask ChatGPT to build you a startup.

The best-case scenario is this: AI will make us super productive, and we will build things we could not build earlier because we will have more time on our hands. I have seen many posts comparing AI to a junior dev: it gets you 70% of the way there, and then you need to do the remaining 30%.

From Ignore the Grifters - AI Isn't Going to Kill the Software Industry

Some developers think AI isn’t going to change much of anything and we should just sit tight and wait for it all to blow over. That view is just as short-sighted as the doomer side of the equation. Software development has always been a career where you are either learning new things or stagnating. AI doesn’t change the need to keep learning and evolving.

I don't know where this cycle will end. LLMs are certainly good at some things: summarising things, coding things, providing the first draft of certain things, or serving as a springboard of sorts.

But it is not, as yet, good at a whole lot of other things. The CEOs of the world want that not to be true. But alas, it is.


In all this, policy will play a big role too. Governments will need to agree on what is OK and what is not.

If the end goal is AI everywhere, who will the AIs work for? Will the world be a bunch of AIs talking to each other? Maybe that's what the corporations dream of.

But given the amount of money these corporations have invested in this, and the amounts they have committed to spend on it, there has to be someone willing to pay for it. There has to be some use for it. Otherwise, as mentioned in The AI Bubble Is Bursting, they will start shoving AI down everyone's throat, whether they want it or not.