The overwhelming sense I got from this week’s readings was a rising tide. Humans never got a chance to figure out how to handle one another online and in digital spaces, and now we’re getting AI that can churn out in a day as much content as every human online has ever created.
This isn’t far from what we learned last week about beauty companies’ responsibility when marketing abroad and Vietnam’s responsibility while rapidly expanding its high-tech employment and production. To wit: where’s the responsibility? Where’s the brake? Oversight is never going to happen, so how are we going to teach responsibility?
Who is going to take responsibility for any of this?
Modern Communication: Now With Even More Noise
The internet is a marvelous thing: for the first time in human history, everyone is standing in the same room and can speak with just about anyone at just about any time. Unfortunately, we’re all yelling at the tops of our lungs simultaneously, and that makes hearing anyone damn near impossible. It looks like AI writers are making that even worse.
In radio theory there’s signal, the actual message-carrying wave, and there’s noise: the turbulence, interference or static that garbles the broadcast. When a message gets modulated onto a radio wave, the receiver demodulating it has to separate the signal from the noise. Sometimes the two are genuinely hard to tell apart, and that’s where choppy, fading or unclear audio comes from.
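The radio analogy can be put in numbers. Here’s a minimal sketch (my own illustration, not something from the readings) of the signal-to-noise ratio, the standard way engineers quantify how far a signal stands above the static:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A strong broadcast: signal power far above the static.
print(snr_db(100.0, 1.0))  # → 20.0 dB: clean demodulation
# Signal barely above the noise floor: choppy, fading audio.
print(snr_db(2.0, 1.0))    # → ~3 dB
```

The punchline for modern communication: when everyone is shouting, the noise power climbs and the ratio falls, no matter how good your signal is.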
In modern communication the principle’s the same. The signal is the stuff actually worth knowing. The noise is so-called “fake news,” crap advertisements, pop-ups, spam, poorly written emails, Facebook posts from bot accounts, Facebook posts from dumb friends; you get the idea. And now we’ve got news-writing bots and algorithms learning what’s prevalent on the Internet and simply duplicating it.
Ironically, maybe the solution comes from different bots.
For Good or Ill: Bots Might Know Best
A good journalist challenges their source when they suspect duplicity. When gathering data about anything of note, a good journalist tries to detect falsehoods and accidental inaccuracies. Before running the story, the journalist should fact-check everything they’ve got and make sure nothing fell through the cracks. The worst reporters and interviewers are the ones who don’t push back on their subjects and take everything they hear, and everything they think they know, at face value.
Maybe robots are the way forward, then. They take nothing for granted (beyond what they’re programmed to). If a bot can “filter out 99 percent of false news stories,” maybe there’s hope.
[A researcher] cites the earthquakes that struck Kumamoto in southwestern Japan in April 2016. Soon after, an image circulated on social media of a lion that had reportedly escaped from a local zoo and was roaming the city. But [machine-learning AI news gathering service] Fast Alert realized the image originated in South Africa.
This example struck me in particular because I spend a lot of time on Reddit, where reposts are frequent. The idea of a bot doing the job of Know Your Meme in a heartbeat is exciting and heartening, and it could cut down on the crap posts we see on Facebook that pass off clearly re-used footage as something that happened the other day.
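As a rough illustration of how a service might catch a re-used image like that lion photo, here’s a toy average-hash sketch. This is my own hypothetical example, not Fast Alert’s actual method; real systems use far more robust perceptual hashes over full-size images:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean.

    `pixels` is a tiny grayscale grid (list of rows of brightness values),
    standing in for a real downscaled image.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

original     = [[10, 200], [30, 220]]
recompressed = [[12, 198], [28, 224]]  # same image, slightly re-encoded
unrelated    = [[200, 10], [220, 5]]   # a different picture entirely

# Near-duplicates land zero (or very few) bits apart...
print(hamming(average_hash(original), average_hash(recompressed)))  # → 0
# ...while an unrelated image does not.
print(hamming(average_hash(original), average_hash(unrelated)))     # → 4
```

Because the hash survives re-encoding and small edits, a bot can match a “new” photo against years-old hashes in milliseconds, which is exactly the Know Your Meme job done at machine speed.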
But that means it’s a race to the top, or the bottom, depending on your perspective.
The Arms Race Begins: Deepfake vs Deepfake Detectors
There was a time when a rock was really effective at killing someone. So people wised up: watch out for anyone holding a rock.
Then throwing rocks started working. So people wised up: stand far enough away.
Then people made spears. So people made shields.
Then people made swords. So people made armor.
Then people got on top of horses and did the stuff you see at Medieval Times. So people made big stone houses and walls.
You get the idea. It all progresses to telephone-pole-sized tungsten rods launched from Earth’s orbit, hitting with the force of a nuclear weapon (“Rods from God” is a real proposal; I’m not joking). My point is that humans are good at spiraling out of control when the thing they’re building is met by a one-up counter-technology.
It looks like we’re headed that way with deepfake videos.
Researchers publishing their ability to detect deepfakes raises a challenging question: how secretive should the process of chasing down deepfakes be? Intelligence agencies keep their gathering methods secret not because they’re doing shady stuff (mostly), but because once a person knows how you know what you know, they stop doing the thing that let you find out.
If I tell everyone I know a secret of yours because I listened in while you called your mom from your back patio and I was hiding in your trash cans, you’ll stop calling her from your back patio and move your trash cans.
Similarly, how should we tackle deepfakes? Do we fund a team to clandestinely surveil the internet for such videos and grant it the authority to take them down? Or do we insist that researchers regularly publish the methods and technology they use to identify such videos, thereby handing the ne’er-do-wells a better toolkit next time?
But Don’t Worry: Buy Your Cares Away
Meanwhile, the implications of AI, machine learning, megalithic corporations with detailed accounts of every decision you’ve ever made online, and a globally interconnected data infrastructure and economy are being put to good use: figuring out how to sell you more stuff.
Humans are predictable beasts, and no one knows that better than companies that sell things online. Voice interfaces and voice recognition are the obvious next step in selling us things and getting us further plugged in to our Internet of Things. The more we speak with, anthropomorphize and associate with our objects, the more in tune we’ll feel with them, the more likely we’ll be to trust their purchasing recommendations, and the less likely we’ll be to get rid of the devices themselves.
Humans are social creatures, and language is very important to us. If we can speak with our technology, we’ll feel even more connected to it than we already do. And speaking is easy and thoughtless, which is exactly where every major seller of goods wants to be: omnipresent and unobtrusive. If we can buy something simply by musing aloud about how nice it would be to have it, you can bet Amazon is desperate to get us to that point.
Some people talk about oversight, and some people talk about limits. But let’s be realistic: the genie’s out of the bottle. Responsibility is what we need, and no one’s talking about that. The future will be about coping with these changes and adapting to them, not about living happily with the wise and responsible choices we made today.