We’re in another cycle of AI moral panic. We’re about to be flooded with AI content. AI-generated images will saturate our feeds. AI will confuse us en masse. The social fabric, the social order, and the truth itself are about to be leveled by a savage AI bulldozer.
Moral entrepreneurs are rushing in with calls to pause technology development, or at least to curtail it through laws. They know the stop-the-world call is unrealistic. But in a panic, cold realism doesn’t get you to the podium. What characterizes AI doomers is a limitless imagination for catastrophe and a complete lack of vision for ways to avoid it. A mysterious river will sweep us all away like powerless ducklings.
The AI moral panickers conveniently take the most sci-fi scenarios of runaway AI progress for granted. There’s no reason to believe things will play out like that. The last time we had an AI moral panic, it was not about knowledge workers and content but about blue-collar workers and manual work.
The year was 2016. Self-driving cars and the robotic factories of the future were just around the corner, on a straight course to put tens of millions of people out of work. Where did that notion come from? A technique called Reinforcement Learning, a subfield of AI, had achieved impressive feats. It was when AlphaGo beat Lee Sedol, one of the world’s top-ranked players, at the “unsolvable” game of Go. The same breed of AI was mastering arcade games from the 80s at a great clip.
It seemed like a simple leap from game agents mastering sophisticated tasks to AI controlling robots. Gather data, cook it with the same AI magic, and serve superhuman robots. Self-driving cars were the most hyped instance of this belief.
It’s 2023, seven years later, and except for a few select zones in a few cities (e.g., San Francisco), during limited nighttime hours, there are no self-driving cars in sight. Tens of millions of jobs are still here.
What happened? The prophecy had flaws. The leap wasn’t simple. Collecting data in real-life environments turned out to be far more complex, expensive, and slow. The astounding progress made in virtual environments, where you can play the same game on thousands of computers in parallel at almost zero cost, didn’t translate to the real world. You can’t run the world in parallel: you can’t have 1,000 cars trying different strategies for taking a left turn on the same street at the same time.
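To make the contrast concrete, here is a minimal sketch of the kind of parallel simulation that powered the game-playing results. The Gymnasium library and the CartPole environment are my choices for illustration, not something the original results depended on. Spinning up a thousand copies of a simulated world is a few lines of code and costs almost nothing; spinning up a thousand real cars is not an option.

```python
import gymnasium as gym
from gymnasium.vector import SyncVectorEnv

# Illustrative only: 1,000 copies of the same simulated environment, stepped in lockstep.
# In simulation this is nearly free; in the physical world it is impossible.
envs = SyncVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(1000)])

obs, info = envs.reset(seed=0)
for _ in range(100):
    # A random policy stands in for the learning agent here.
    actions = envs.action_space.sample()
    obs, rewards, terminations, truncations, infos = envs.step(actions)

envs.close()
```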
OpenAI, one of the best AI research labs, quietly scrapped its robotics research when it realized that prosaic issues, like robotic arms jamming themselves, made the progress-to-cost ratio look bleak.
Generative AI differs from robotics. The progress is not a speculative bet; it’s already here. Could we be wrong about AI again, just in some other important way? Sure, we could. Popular discussions of generative AI miss the pervasive cherry-picking required to land those breathtaking pictures or pieces of text. Generative AI is like a slot machine: most pulls give you mediocre rewards, and occasionally one hits the jackpot.
In reality, it’s difficult to use generative AI in an area where you are a beginner. AI can rewrite text for you in a certain style, but you need to know that style first. You can get lucky with a generic image prompt, but to consistently get stunning results you need to understand the principles of lighting and composition, be familiar with artists’ styles, and so on.
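As a hedged illustration, here is roughly what that difference looks like in practice. The prompt wording and the use of the pre-1.0 `openai` Python package (which exposed `openai.Image.create`) are my assumptions for the sketch, not something taken from the original text.

```python
import openai  # assumption: pre-1.0 SDK, which exposed openai.Image.create

openai.api_key = "sk-..."  # placeholder key

# A beginner's prompt: you'll get something, occasionally something great.
generic = "a portrait of an old fisherman"

# A prompt from someone who knows lighting, composition, and styles (illustrative wording).
detailed = (
    "close-up portrait of a weathered old fisherman, Rembrandt lighting, "
    "shallow depth of field, rule-of-thirds composition, "
    "in the style of a documentary photograph"
)

for prompt in (generic, detailed):
    response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
    print(prompt, "->", response["data"][0]["url"])
```

The model runs the same either way; the knowledge that separates the two prompts lives in the person typing them.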
Generative AI is not a tool for the ignorant. Generative AI is just another driver’s seat. You are still the one holding the wheel.