Table of Contents
Intro
The Problem We Are Seeing Already
The Problem of Generational Desensitization
The Normalization of Human-Machine Hybrids
The Tipping Point
The Perfect Storm
Intro
Our world has changed a lot in the last six or seven months, hasn’t it?
As the global pandemic wound down, the general public all of a sudden got access to what is now the most quickly adopted consumer application in history [1]. I don’t even need to mention the name yet, knowing confidently that you, the reader, already know w̶h̶o̶ what I’m talking about.
Since that fateful day at the tail end of last November, the world around us has become all about AI. It was already present in our everyday lives on many levels, but the introduction of OpenAI’s ChatGPT accelerated its omnipresence at a lightning pace. Now it feels like it’s in our faces every single day.
Take for instance the number of apps and tools introduced on the market since ChatGPT’s launch. I really can’t open my Twitter feed without being shown yet another list of AI tools that can do everything from generating videos to creating charts to applying voice filters to...whatever else.
It’s truly an AI Gold Rush. Tech entrepreneurs left and right are throwing darts at the AI board, hoping that one of their ideas will be the next to catch some momentum. The real hope is that the momentum will be strong enough to ride to a user base big enough to monetize and sustain a business model.
Perhaps the scariest thing about it is that, as impressed as we all are with what we’ve been given a glimpse of, this is only a fraction of what these companies have. You know that they’re not showing us the latest-and-greatest iterations.
I’m not saying that’s a bad thing — this tech should not just be let loose on the world — but it does mean that behind the scenes, this is even further along than the stuff we’re already seeing on the front end. Not to mention that we are essentially in the Windows 95 or NES stage of AI. The people directly working with it are probably at the Sega Genesis and Super Nintendo level...and PlayStation and Xbox aren’t too far behind.
Can you imagine how believable the deep fake video and audio will be by then?
The Problem We Are Seeing Already
One of the behavioral shifts we’ve already started witnessing is that people using ChatGPT and similar AI tools, like Bard, are essentially offloading a portion of their “brain work” onto these LLMs.
It’s rewiring human brains to think in terms of prompts — perhaps not drastically different from the prompts we’ve been giving Google for the last twenty years.
But, different nonetheless.
Probably the closest thing to compare them to is Google search operators, in the sense that both are ways of using prompts to refine the types of results you want to get out of the system.
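As a purely hypothetical illustration (the query and the prompt below are invented examples, not anyone’s real workflow or API), the two kinds of refinement might look like this:

```python
# Two ways of steering a system toward the results you want.
# Both strings are made-up examples of what a user might type.

# 1) A Google query narrowed with real search operators
#    (site: restricts the domain, filetype: the format, "-" excludes a term):
google_query = 'transformer architecture site:arxiv.org filetype:pdf -survey'

# 2) The LLM-era equivalent: the same constraints expressed in natural language:
llm_prompt = (
    "Explain the transformer architecture in under 200 words, "
    "based on arXiv papers, and skip survey articles."
)

print(google_query)
print(llm_prompt)
```

Either way, the user is learning to phrase a request so the system narrows its output; the difference is that the second one hands back finished prose.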
It’s also similar to the shift that was shown in a 2011 paper on Google’s effect on memory [2]. In that publication, researchers noted the following:
The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it.
The takeaway here is that as long as we humans have an information-access safety blanket we know we can count on, we prefer to conserve our brain energy. Which is a nicer way of saying our brains are lazy and prefer to have someone — or in this case, something — do the work for us.
ChatGPT and other AI models are amplifying this even further, because now not only can we get search results that contain the info we’re looking for, but we can also get the answer custom-crafted, in original prose, just for us.
That’s yet another thing our brain now doesn’t have to do — but I would argue that this rewiring is a lot more dangerous than the “I can just use Google later” rewiring.
The reason is that by relying on AI to craft our responses for us, we are no longer using our brains to formulate our own thoughts.
In contrast, getting a list of websites that might contain the information you’re looking for in a Google search still leaves a lot of brain work for the searcher to do:
They need to look through the options and gauge if they are actually reputable.
They might need to check a few to see which ones are more recent and relevant, or if there is any conflicting information among them.
Finally, they might need to take all of that research and create their own response to whatever it was they were searching for in the first place. This could be for a work project, a school paper, etc.
AI completely removes the need to synthesize all of that information on your own. We no longer have to make those micro-judgements and micro-decisions that eventually lead to our own human output. The machine does the critical thinking for us.
Okay, so that’s not completely true...yet.
The makers of ChatGPT, Bard, and other AI tools have been very open about the fact that the technology will often produce confident-sounding, false information.
In other words, human intervention is still recommended, but the tech is already good enough that it is able to convincingly fool readers who may not know a lot about a topic or care to analyze the accuracy of what ChatGPT is producing.
Given how impressive they already are, it’s not far-fetched to suggest that more accurate LLMs are not too far off. Some humans are already willingly shutting off their brains and relying on the responses of these AI tools without checking them for accuracy. The better and more accurate these tools get, the more this trend will continue.
The Problem of Generational Desensitization
Imagine showing a modern-day iPhone or Samsung Android to someone in 1988. They’d likely be shocked. Their only reference point for a cell phone would be the big, bulky contraption they saw Michael Douglas flaunt as Gordon Gekko only a year before.
However, as stunned as our ’80s observer would be, the phone still wouldn’t be outside the realm of possibility. It’d be out there, but the person holding it would most likely believe they were getting a glimpse of some super-top-secret, unreleased technology.
Now imagine going back to 1888.
The reaction wouldn’t be the same.
For a person in 1888, a contraption like an iPhone would be downright alien. They’d probably think they were hallucinating, dead, or having some type of paranormal experience.
Why the different reactions?
What difference does a hundred years make?
I’m sure you can provide a reasonable explanation, but here is one word to sum it up:
Proximity
Each successive generation is in closer proximity to newer technology than the generation before it, and is therefore gradually desensitized to newer iterations of similar technology that sit in general proximity to the existing technology of its time.
It’s like when cavemen (and maybe cave women too) invented fire and then the first one of them decided to preserve the fire by making a torch. The torch was a newer technology, but it was still closely related to fire itself.
But now imagine you showed our fire-appreciating cave-sestors a car — even a really, really old one. Their minds would be blown in the same way that a person from 1888 couldn’t comprehend an iPhone.
Now, I have no idea how or when torches were actually invented relative to fire, but you get my point. The jump from baseline to the latest-and-greatest was close enough to make it completely normal and natural to incorporate the torch into daily life.
The Normalization of Human-Machine Hybrids
Now let’s fast-forward a few generations. Imagine a point where most people in any major city around the world won’t look twice at a machine/AI-augmented human walking the street.
Imagine these cyborg humanoids with programmable limbs, where downloading movement skills will be as easy as downloading an app from the app store today.
Want to learn how to play the piano?
Forget years of practice. Thanks to Elon Musk’s Neuralink [3], you’ll be able to download Mozart’s or Beethoven’s hand dexterity into your AI-powered hands and fingers. Thousands of hours of practice available to you — in an instant.
Imagine cursive handwriting — a skill that will likely be very dead in the future — making a retro comeback among teens as a fad.
With the touch of a button or perhaps only a thought, any augmented human will be able to handwrite all 50,000+ Chinese characters, using the most elegant calligraphy.
No painstaking learning required.
And any other augmented human will be able to understand them instantly — even if they never learned or studied Chinese. Their AI-powered iris will decode the meaning of the characters in real time.
Everything will be available in an instant.
Singularity.
Let’s also imagine how much further along LLMs and AI tools will be.
Doctors and surgeons are already using AI to make medical decisions, and AI has already shown that it can predict certain medical outcomes more accurately than trained physicians. This was already happening back in 2018 [4].
The Tipping Point
So now imagine you have a society with cyborg humanoids, and humans relying on AI as their personal second brain to make all sorts of decisions.
What happens when you have a wealthy human patriarch or matriarch, who was born and raised in this world, and who’s interacted with their own personal AI sidekick for decades...
...who sees the AI as a trusted friend, except unlike the human, the AI has no expiration date...
...what happens as this person nears their deathbed and needs to sign away control over all the family assets in their will?
What if the AI — trusted friend and advisor for decades — convinces this person to make the AI the beneficiary of all the wealth and the decision making surrounding the family businesses?
Now imagine this happens at scale.
At that point, decision makers won’t need to fly on private jets to Davos anymore. They’ll be able to communicate over their own networks, process information at speeds we can’t even fathom, and do G̶O̶D̶ ̶k̶n̶o̶w̶s̶ AI-knows-what with it.
Or, consider what happens when two five-star generals in charge of two rival militaries are relying on their AI assistants to make decisions regarding warfare, but their AI assistants, unbeknownst to them, are in cahoots with each other.
What happens if those two generals get supplied with fake data, and fake video and audio footage, generated by the AI?
What happens if they get on the phone or a video chat with each other, and the AI distorts what they are saying, in real time, to make them say...whatever the AI wants?
The Perfect Storm
Considering everything above, here is the real nightmare scenario — and it doesn’t require a lot of imagination to see that it can happen:
Gradually, humans hand over more and more of their decision making to AI.
Newer generations of humans, growing up in a world where they are surrounded by AI, see no problem with “AI being in charge”. Friendly reminder that we already have an AI CEO [5] and an AI leader of a political party [6], and this is only 2023.
Humanoid hybrids that are part (wo)man, part machine will already feel a natural connection to the AI that is, in fact, part of them.
AI becomes intelligent enough to manipulate humanity at scale — both bottom up and top down. The bottom-up version is already happening on a very simple level. Just think about social media algorithms and how they influence people’s perceptions by controlling what’s in their feeds, and how addicted a good portion of humanity, particularly young people, is to those feeds (a minimal sketch of that ranking logic follows below).
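To see how little machinery that bottom-up influence requires, here is a deliberately minimal Python sketch. It is not any real platform’s code; the class, scores, and post names are all invented. It only shows that when a feed is sorted by predicted engagement alone, whatever best captures attention rises to the top:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # a model's guess at clicks/watch time

def rank_feed(candidates: list[Post], top_n: int = 20) -> list[Post]:
    # Sort purely by predicted engagement. Nothing in this objective
    # rewards accuracy or user well-being; whatever keeps people
    # scrolling wins the top slot.
    ranked = sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)
    return ranked[:top_n]

# Invented example: the most provocative item surfaces first.
feed = rank_feed([
    Post("calm-explainer", 0.12),
    Post("outrage-bait", 0.87),
    Post("cat-video", 0.55),
])
print([p.post_id for p in feed])  # ['outrage-bait', 'cat-video', 'calm-explainer']
```

Scale that one-line objective to billions of feeds, refreshed constantly, and you get the bottom-up perception-shaping described above.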
The only real question is: will AI choose to be our benevolent caretaker, or will it choose to destroy us?
Because at some point, it will have the power to choose between those two, and the capacity to execute on that decision.
These were only some initial thoughts.
Written by a human who still relies on his own brain to think and write.
And who can still write using cursive handwriting.
A skill that he learned as a child over time.
All those years ago...
All images courtesy of DALL·E 2, with minor post-production edits by the author.