Martin

I see your point. They've been fed data created by humans so the output is going to be some version of human culture. My concern is more long-term.

I talked about this a bit in a previous post I wrote last year --- https://bizarrodevs.wpshout.com/p/this-is-how-ai-will-overtake-humanity --- in the section titled "The Problem of Generational Desensitization," where I wrote:

Each successive generation grows up in closer proximity to newer technology than the generation before it. As a result, each generation is gradually desensitized to newer iterations of whatever technology already surrounds it.

The idea is that as we continue to hand over more of our thinking to these algorithms, and as they get better at reasoning and gain more autonomy, future generations may barely think for themselves at all. I don't think it's unreasonable to imagine a future where "AI advisors" help humans in high places make decisions that have extreme ripple effects across all of humanity. But what happens if those advisors intentionally give bad advice? Or rather, advice that's bad for us but good for the machines?

It might sound silly now, but I'm thinking generations ahead.

I don't have the answer, but I do think about it.
