The net good or bad of AI
Balancing the perceived benefits against the harms
What is the net result of AI? Is it good or bad?
I am thinking about it in the short term, of course. Nobody knows what we will be able to do with it in the long term.
There are three ideas here:
- We can't skip the intermediate steps. If we want to get to the end state of AGI (or anything else), we have to pass through the intermediate steps, and the intermediate steps are not great. LLMs consume a lot of compute, which causes even more pollution. But without these worse steps we can't get to a better state.
- What are LLMs good for? LLMs in their current state are built on stolen work, and for what? A fancy auto-complete/grammar-check/code-complete? Now, because the hype cycle is in full swing, everything needs AI. Everything. Whether it actually makes the product or the experience better does not really matter. Universities are struggling with how to deal with AI-generated papers. Of course, they are also using AI systems to deal with this.
- Creating problems and then solving them. The use of these LLMs and AI systems is creating problems that were not there earlier. So of course, we need new AI systems to deal with the problems created by these AI systems. It is a self-perpetuating cycle. YouTube is coming out with a system to detect and remove AI-generated likenesses of creators.
Of course, OpenAI and their ilk want all of us to think AI is a platform shift; they stand to make a bunch of money out of it. But it needs to be weighed against the cost. Are the perceived benefits enough? Is humanity so far gone that only an artificial god can save us?