Generative AI Facing Familiar Problems: Echoes from Web 2.0

Think of 2022's generative AI craze as a wild, bumpy ride. Now it's 2023, and anxiety has set in. Even as the technology keeps growing quickly, the problems companies are running into sound a lot like the ones social platforms faced before: content moderation, shady business practices, and the spread of false information. It's as if generative AI never got the memo about the hard lessons of the Web 2.0 era.

Generative AI's Rocky Ride Brought to Light

The generative AI scene has hit a speed bump in the little over a year since OpenAI released ChatGPT, which set records as the fastest-growing consumer product. The same technology once hailed as a game-changer is now under close scrutiny from top government officials. The US Federal Election Commission is looking into deceptive campaign ads, Congress wants to know more about how AI companies handle training data, and the European Union fast-tracked the AI Act, making last-minute changes to address problems specific to generative AI.

Once Again: Old Issues, New Technology

Despite being the newcomer, generative AI is struggling with issues that social platforms have wrestled with for almost twenty years: fighting fake news, relying on questionable labor practices, and stopping the spread of unwanted content. It is striking how closely the problems facing OpenAI and other players mirror the ones that tripped up companies like Meta (formerly Facebook).

Going Back in Time: From Meta to OpenAI

Remember how hard the Web 2.0 era was for Meta (then Facebook)? Those problems appear to be coming back around. Generative AI leaders are rushing to release new models, only to run into problems that social platforms never fully solved. It feels like going back in time, with AI added to the mix.

Problems That We Can See Coming: Hany Farid’s View

Hany Farid, who teaches at the School of Information at UC Berkeley, doesn't mince words. He calls the issues with generative AI "completely predictable problems" that could have been avoided. He saw them coming, from misleading campaign ads to the complicated realities of AI development, and he argues they could have been stopped from the start.

Where Generative AI Went Wrong: Its Journey Through Déjà Vu

Generative AI seems to be retracing the same steps Web 2.0 took without having learned much from them. Problems with content moderation, fair labor practices, and fighting false information didn't appear out of thin air. They are echoes of a digital age that could have taught important lessons, but somewhere along the way the message got lost.

How to Avoid Pitfalls: Can Generative AI Change the Rules?

While generative AI works through its own issues, some are calling for action now. The hope is that the AI industry, having learned from Web 2.0's mistakes, will step up and take decisive measures to head off the problems that lie ahead. There is a chance to break the cycle of old problems and usher in an era of responsible, thoughtful AI development.