7 Comments

The same point could have been made when the internet arrived. No longer could only insightful journalists and wise scholars speak; suddenly, any dude with a keyboard could.

Then came Twitter. Everything had to be condensed into a couple of hundred characters. The vibe ruled.

Now, it is AI. Cheap mindless trivia.

The system is, however, self-correcting. We are awash in content. Way, way past the point of saturation.

But good stuff bubbles up to the top. Always did. Always will.


Thanks for reading and for sharing your perspective. There is definitely still good stuff to be found, but these days I stumble upon it by accident or in random ways. It's certainly no longer via Google. It often feels like you need to start on page 3 of the SERP to get anything good out of it.


Very provocative and thoughtful. I'm not sure that my tween child is consuming a lot of AI content outside of cute animal videos on TikTok, so it's hard for me to visualize what the threat is. Their computer-based schoolwork is through long-running programs like iReady.


Thanks for reading and sharing your thoughts. From my perspective, I think about it from the angle of understanding human psychology and how to word things in specific ways to steer thought processes toward desired outcomes.

I'm talking about the kind of information you find in books like "Influence: The Psychology of Persuasion, Revised Edition" by Robert Cialdini or "Emotional Trigger Words" by Tony Flores. So my concern is what happens when AI is put in charge of writing and delivering school curriculums and has all of this knowledge to be able to write in a way that triggers certain outcomes. Who decides what those outcomes should be? Will there be fail-safes?

It's this idea that we are increasingly offloading our own brain processes onto these algorithms, which then turn around and produce content that influences us back. I feel like as it gets more sophisticated, it will (potentially) make decisions that benefit it at our expense.


I hear you, and I hope we're a long way from AI writing curriculum. The many highly educated people who create and implement curriculums (curricula?) would howl bloody murder at a robot taking over their jobs, and parents would lose their minds too. That said, I suppose there's a way for AI to sneak into the process somewhere, like in tweaking a textbook chapter to reflect updated methods or knowledge. Those chapters will always and forever be edited and reviewed by humans, however, who would probably end up rewriting them anyway.


At least for now, the AIs teach human culture.


I see your point. They've been fed data created by humans, so the output is going to be some version of human culture. My concern is more long-term.

I talked about this a bit in a post I wrote last year --- https://bizarrodevs.wpshout.com/p/this-is-how-ai-will-overtake-humanity --- in the section titled "The Problem of Generational Desensitization," where I wrote:

Each successive generation is in closer proximity to newer technology than the generation before it. Therefore each successive generation is gradually desensitized to newer iterations of similar technology, that are in general proximity to the existing technology of the time.

The idea is that as we continue to hand over more of our thinking to these algorithms, and as they get better at reasoning and more autonomous, future generations will at some point barely think for themselves. I don't think it's unreasonable to imagine a future where "AI advisors" help humans in high places make decisions that have extreme ripple effects on all of humanity. But what happens if those advisors intentionally give bad advice? Or rather, advice that's bad for us but good for the machines?

It might sound silly now, but I'm thinking generations ahead.

I don't have the answer, but I do think about it.
