Scholars have begun to demonstrate that technology is having generation-shaping effects, not merely in the way it influences cultural outlook, behavior and privacy, but also in the way it can shape personality among those brought up on social media.1
– Nora McDonald, Assistant Professor of Information Technology, George Mason University
I’ve often referred to my generation, the Xennials (those born between 1977 and 1983), as the last to really know what it’s like to grow up without being immersed in tech. Most scholars who study generational trends would agree, describing Xennials as having an “analog childhood but a digital young adulthood.” This perspective – as the last generation to know what life was like before smartphones, social media, and most definitely AI – has a huge influence on my opinion of AI-generated content.
I’m certainly not against the use of AI overall; I use it almost daily myself. However, I have developed a strong aversion to AI writing. By AI writing, I don’t mean using ChatGPT to produce a generic one-line description of something that requires no original thought or input.
What I’m talking about is the kind of shenanigans we’ve seen where companies fire their writers and replace their output with entire articles written solely by AI.2 3 My issue isn’t only ethical, in the sense of supporting workers’ rights; the problem is that the articles are, for the most part, terrible.
They’re not bad because AI wrote them. They’re bad because they’re bad.4
Most of us recognize this already, which is why a Twitter trend emerged in 2023 where people across various overlapping industries joked that “everybody wants to produce AI content, but nobody wants to read AI content.”
But for me it goes beyond quality. At the rate we’re going, the quality will likely keep improving over the next few years, so in the long term it (probably) won’t even be a valid problem.
So then what is the problem?
There are several. I don’t claim to have the “right” answers to these issues, but I do have a lot of thoughts and questions that I think we should all be asking ourselves. Below is my contribution to this conversation.
AI is the new gwai lo
There’s a scene in the 1988 martial-arts action classic Bloodsport where Frank Dux, played by Jean-Claude Van Damme, is talking to Senzo Tanaka, played by Roy Chiao. Dux is trying to convince Tanaka to continue training him after the death of Tanaka’s son. Tanaka is hesitant, initially explaining to Dux:
For 2,000 years, knowledge passed from father to son, father to son…when [my son] died, it stopped.
Dux passionately responds by telling Tanaka to teach him and that he can do it, to which Roy Chiao’s character firmly says:
You are not Japanese. You are not a Tanaka.
The message is clear: we don’t pass our knowledge on to those who are not in our group.
There’s a similar theme of not trusting outsiders with insider knowledge in the Bruce Lee biopic, Dragon: The Bruce Lee Story.
In the movie, which is loosely based on the legendary martial artist’s life, a young Bruce starts teaching martial arts to non-Chinese in California. The local Chinese elders have a problem with it, and at one point in the film one of them instructs Bruce to stop teaching the “gwai lo” (foreigners).
Of course Bruce ignores them, as he did in real life, but were the elders wrong? Or was Bruce wrong?
I’ve asked myself this question in the past – well before the AI era that we’re in now – and ultimately I came to the conclusion that neither side was right or wrong. Both positions had their own pros and cons, and each had their own consequences – though, of course, we only saw the consequences of the road that was taken.
The AI connection
What’s the analogy here?
The analogy is that everyone currently working with generative AI, large language models, small language models, and so on is basically in the same position that Bruce Lee was in:
They are the knowledge gatekeepers.
And similar to Mr. Lee, they’ve decided to trust “the outsiders,” except in this situation, the outsiders aren’t human. They are insatiable algorithms consuming every morsel of human knowledge being fed to them.
We’ve already witnessed the initial effects of this firsthand:
Tons of new AI apps
Existing tools fusing AI into their workflows and features
A general AI gold rush with new startups popping up left and right
An internet flooded with AI generated trash that no humans are actually reading
All this and we’re not even two full years in.
To be clear, I’m not suggesting all of it is bad. As I said in the beginning, I use AI tools myself almost daily. But it’s hard to not be bothered by the AI-polluted search results I see on a daily basis, and to not wonder what the long game is here.
Your worldview, shaped and formed by AI
There’s another layer to this that we can pull from Tanaka’s “father to son, father to son” line.
The line is simple enough, but it underscores an important point about the transfer of knowledge and the lineage of our own.
I’ve read countless books over my lifetime, and I have no doubt that my worldview, and the way I generally think about a lot of things, has been shaped by the collective sum of the human knowledge in those books.
This has been the human experience for thousands of years, going all the way back to the Epic of Gilgamesh and likely even earlier. Even societies that never developed writing systems shared their knowledge through oral tradition, or through dance, combat systems, and other forms of kinesthetic knowledge.
But all of that has suddenly changed. Literally in less than two years.
We now have – as I’m typing these words – young kids whose minds are being shaped and formed by AI. Writer Erik Hoel wrote about it a few months ago on his Substack, and he was as disturbed and concerned about it as I am:
We’re conducting this experiment live. For the first time in history developing brains are being fed choppy low-grade and cheaply-produced synthetic data created en masse by generative AI, instead of being fed with real human culture. No one knows the effects, and no one appears to care.
For two thousand and twenty-three years, knowledge passed from father/mother to son/daughter, father/mother to son/daughter…and now, as of last year, from AI to son/daughter.
But what happens when AI starts writing more complex text that’s not just teaching toddlers their ABCs, but is teaching college students how to understand the world and make decisions?
What happens when worldviews are being shaped and formed by algorithms rather than by our own species? My gut feeling says there’s no way this will benefit humanity in the long run. Regardless, good or bad, it will ultimately alter our trajectory as a species.
What do you think? Are you worried at all about the long-term impact of AI’s influence on the future of humanity and our planet? Share your thoughts in the comments.
References
1. https://theconversation.com/teens-see-social-media-algorithms-as-accurate-reflections-of-themselves-study-finds-226302
2. https://www.theverge.com/2023/1/25/23571082/cnet-ai-written-stories-errors-corrections-red-ventures
3. https://futurism.com/sports-illustrated-ai-generated-writers
4. https://www.cnn.com/interactive/2023/07/business/detect-ai-text-human-writing/