
AI is still kind of pointless – and dangerous. Stop using it. | Opinion


AI is the problem. Continuing to find ways to use it can't be the solution.


This column has been updated with new information.

Late last year, I started writing a column on generative artificial intelligence. I interviewed a few folks and got down a few hundred words before I was overtaken by the events of a hectic election year, during which "gotta finish that AI column" was always at the back of my mind, even if I never got around to it.

Here's the punch line: I had been worried about what the future of AI might hold. But while I wasn't paying attention, the future arrived.

And AI won.

I suppose I had assumed that most reasonable adults would view AI as skeptically as I did. Its shortcomings are well-documented, and haven't we all seen "The Terminator"?

But when a friend posted to social media asking other professionals how they use AI, the answers were effusive: to organize notes, to generate ideas, to write press releases.

When a group of college students visited the Free Press, I was stunned to hear that a particular class is required to use AI – their prestigious university has its own "safe" AI that protects students by not feeding their data into a larger learning model.

A nauseating commercial for a big tech firm's phones shows lonely people chattering away to the company's shiny new AI as though it were a long-lost friend.

When an acquaintance reported working alongside students at a coffee shop who appeared to be pasting material from generative AI directly into assignments, the ensuing online conversation revolved around the ethical use of AI – not how to shut down behavior most folks would consider inherently dishonest.

The search bars on Facebook and Instagram have been replaced by AI-assisted search, which confuses the crap out of me. Maybe I'm old, but it all just seems unnecessary.

I may have misjudged my fellow humans. But I'm not wrong about AI. We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems. It's a bad idea. And – because I enjoy shouting into the void – we really ought to stop.

ChatGPT, Gemini are basically text prediction machines

The field of artificial intelligence research started after World War II as a reaction to the philosophical and psychological school of behaviorism that reduced humans to, well, a set of behaviors, explained Robin Zebrowski, an old college friend, a professor of cognitive science at Beloit College and an internationally recognized expert in the cognition of artificial intelligence, who publishes papers with titles like "Carving Up Participation: Sense-Making and Sociomorphing for Artificial Minds" and "The AI Wars, 1950–2000, and Their Consequences."

In the 1950 paper "Computing Machinery and Intelligence," the British mathematician Alan Turing (the man who broke Germany's World War II Enigma code, and was later criminally prosecuted by his own government for being gay) posed a question: Can a machine think in a human enough way to pass as a human?

Scientists of that era became fascinated by the why and how of consciousness, and whether it was possible to build a machine that could pass what had become known as the Turing Test.

"I’m very sympathetic to the mad science of it, but part of it is really like, 'Why can’t we do this? Let’s try it!' which morally is a little dicey," Zebrowski said. "But there was no danger in playing with these systems, because they weren't very good at what they did, and what they did was not what we do."  

("Consciousness is so weird," she added. "We have no idea how it emerges. We're not putting that into a machine anytime soon.")

AI still doesn't do what we do, but these days, it looks a lot more like it.

The new generative AI models like ChatGPT or Gemini don't exist because of a big leap in technology – they're essentially text prediction machines, and the neural net technology that powers them has been around since the 1980s, Zebrowski said. But data – our data – has become a commodity.

"We have the ability to suck up everyone’s data and run it into the grinder," Zebrowski said. "What changed is the availability of data at scale."

'AI has no concept of truth'

With more data, text prediction machines like Gemini or ChatGPT have a vastly larger set of responses to offer – even if some of that data is bad. The output of an AI system will always reflect the misinformation and bias in its training data, Zebrowski said, and "the current AI models were trained on the dregs of the internet."

In 2022, the news outlet CNET quietly started using AI to write some stories. Within months, it had to end the practice, issuing corrections on more than half the stories produced by AI, according to The Verge.

Just this month, a handful of media outlets issued corrections or retractions after a slew of stories, apparently researched using generative AI, falsely claimed that President George H.W. Bush had pardoned his son Neil.

And those are just AI's most high-profile, publicly embarrassing flops. Now imagine that at scale, across all of those meeting notes and panel questions and college assignments.

These outcomes, Zebrowski said, should surprise no one.

"That’s the nature of how the system works," she said. "It will always be filled with inaccuracies, because AI has no concept of truth."

Is it ever ethical to use AI?

As far as "The Terminator" goes, Zebrowski assured me, that's still pretty unlikely. But that doesn't mean AI, and the way we use it, isn't dangerous.

"Right now, AI is mostly hype. But it’s doing a lot of damage – and it's not doing a lot of good yet," she said.

But this is how humans do tech: Damn the torpedoes, full speed ahead. And that's how we're doing AI – using it for panel questions and meeting notes and research papers, for medical diagnosis, for homework assignments, to crunch data and replace workers – absent any reasonable regulatory or universally recognized ethical guidelines.

The general consensus seems to be that the horse has left the barn, and that any effort to coax it back into its stall is futile. Instead, the argument goes, we've got to learn to ride the horse or we'll get trampled.

The professor who visited the Free Press with those college students said she's teaching them ethical use of AI. But that's part of the problem: Can AI be used ethically?

It's a question the big tech companies might not be interested in answering.

Ethicists had rung a lot of alarm bells about AI – "had," in some cases, being the operative word.

Google forced out its lead AI ethicist in 2020. Microsoft laid off its AI ethics and society team back in 2023. OpenAI's ethics team was disbanded just last year.

The U.S. government, unlike the European Union, has failed to adopt any meaningful regulation or guidelines for AI development or use, and at least one person (that'd be Elon Musk) who is heavily invested in the proliferation of AI wields outsize influence in the incoming presidential administration.

AI ethicists have laid out very clear guidelines, Zebrowski said, calling for things like government regulation of these systems.

"But that's unlikely to happen, in part because if you look at who the government is calling in to give them advice about this, it's the CEOs and the people who benefit from AI, so that's going to go poorly," she said. "There's lots of people in particularly marginalized communities who have pointed out specific dangers and specific harms, and they're not getting the audience with the president. So the regulation that we need is very unlikely to show up, at least in the U.S., and that's where most of the companies are headquartered.

"My guess is, it's going to get a lot worse before it gets better, if it gets better."

Even OpenAI's CEO, Sam Altman, is worried

Here's the other thing about AI: It's a product that Big Tech companies have spent big money to develop – even if it doesn't actually do what they're saying, or at least implying, it does.

"You hear Sam Altman, the CEO of OpenAI, say openly at a government hearing that he's genuinely worried about what these systems will do, like he didn't release it into the world," Zebrowski said. "It kills me that the public hears this hype that makes it sound like AI is actually going to be super intelligence, and it's going to be smarter and quicker and more powerful than humans, when the companies benefitting from the hype are making the models.

"There's a reason for the hype, and it’s mostly money. And that means the public doesn't actually know how these systems work, and what we should or should not use them for."

Hence that helpful commercial depicting AI making thoughtful gift suggestions, or regaling its owner with stories about nature. Tech companies want us to believe that AI – like smartphones and social media and Bluetooth clothes dryers and refrigerators that tell us when we're out of milk – is essential to our lives.

And tech companies have a great track record at this kind of market bombardment.

The New York Times Magazine's Willy Staley detailed the tech marketing model in a recent piece about Netflix, explaining how the company charged through years of debt to emerge an essential household service: "If you applied the logic of the media business to Netflix, it looked uncertain, but Netflix was operating by tech-sector rules – spending boatloads of cash to acquire customers, changing their habits and overwhelming competitors until, at the other end, an entire industry was transformed."

What are the main risks with AI?

Before Google parted ways with its ethicist, Timnit Gebru, she and other researchers produced a draft paper outlining the chief harms AI poses. In addition to the inscrutability of AI learning models and the certainty that they will incorporate misinformation, AI systems are trained on data sets generated by the people with the most access to technology, creating data biased toward more affluent, tech-friendly people and nations. (Gebru is the researcher who identified problems with the way facial recognition software identifies women and people of color.)

Gebru and her team also flagged the environmental consequences of AI – training a learning model sucks down an enormous amount of energy, resulting in an explosion of emissions – costs likely to be borne by marginalized communities for the benefit of more prosperous organizations or individuals.

A data center just outside Memphis that will provide computing power for Musk's xAI houses 18 or more portable methane gas generators, installed without permits, which residents say emit a constant stream of hazy but visible smoke.

The historically Black community is no stranger to environmental injustice, but AI presents a new front in this fight: One researcher estimates that a single query to a chatbot uses enough energy to power a lightbulb for 20 minutes.

ChatGPT is learning how to be me – and probably you, too

Last year, I asked ChatGPT to write a column "in the style of Michigan journalist Nancy Kaffer," and it kind of delivered.

"In the heart of the Great Lakes State, where opportunity should flow freely like our pristine waters, there exists a deep chasm of inequality that threatens the very existence of our society. The battleground for this struggle is none other than our education system – a system that is supposed to uplift, empower and prepare our future leaders."

It goes on in similar fashion – "as we stand at the crossroads of progress, it is imperative that we address this issue head-on" – "together, let us forge a path towards a truly equitable education system."

I do write about equity in education. But this is laughably bad: perky and superficial, laden with clichés and grandiose diction, offering little substance in the tone of a well-informed high school sophomore.

When I made the same request last month, the result was strikingly different.

"Michigan's infrastructure is in a quiet crisis," ChatGPT wrote. "It's not a flashy issue that makes the front pages of newspapers or trends on Twitter, but it impacts the daily lives of millions. The cracks in our roads, the flaking paint on bridges, the overworked water systems – they're all symptoms of a larger more systemic problem that requires urgent attention, not just for the sake of convenience, but for the state's future economic stability. ... There’s a growing call among policymakers to make bold investments in infrastructure, to stop kicking the can down the road. Yet we’re still caught in the tug-of-war between short-term political wins and long-term, necessary spending. Fixing Michigan’s infrastructure is not just a matter of putting a few more dollars into road repair – it’s about ensuring the state can compete, thrive, and support its citizens well into the future. For those of us living here, the frustration is palpable."

I can still tell it's not me. But it's a bundle of phrases I might use and ideas I might endorse, even if it's pasted together without much art or depth.

ChatGPT is learning how to be me, and it's getting better at it – and I'd be lying if I didn't admit that I'm a little worried about what it might deliver a year from now.

What is the future of AI and us?

I'm joking about "The Terminator." Mostly.

I love technology: I was an early adopter of the iPhone, and I like to remind my Gen Z son that modern Xboxes and PlayStations exist because in 1987, I used my babysitting money to buy the first Nintendo.

But when it comes to AI and machine learning, there's also "The Matrix," likewise set in a machine-dominated dystopian future, along with "Blade Runner," "Rossum's Universal Robots," "The Veldt" and Isaac Asimov's Three Laws of Robotics. And I can tell you that on the point of this not ending well, the literature is unanimous. It's fiction, sure, but look at the past century: We can only achieve what we can imagine, and science fiction helps us imagine what sort of future is possible.

Or the sort of future we really do not want.

That thing earlier, about the horse and the barn, is a convincing metaphor. Let's consider a knife, instead. You can spend a lot of effort rendering the knife harmless, wrapping it in tinfoil or duct tape or dulling the blade – or you could just put it down.

Rarely in history have humans voluntarily relinquished technology. One notable exception: the atomic bomb.

"We did it, we deployed it, most people regretted it, and that spurred decades of us trying to manage and mitigate the fallout from that technology," Zebrowski said. "Everyone was like, 'OK, let's not do this,' but of course, secretly, everyone's still doing it. There's not really a way to tell the whole world we can't pursue this technology. But I really think nuclear technology is the closest analogy I can think of. Because we did all finally agree, or at least mostly, most countries finally agreed, this was not something we should be doing."

There are a lot of ethicists out there who have suggested policy frameworks for the use of AI. For lawmakers who'd like to bring some rationality to this question, such guidelines are easy to find. (If you're a university, you probably have an ethicist or two on staff.)

Even so, when I consider the inherent bias and propensity for error embedded in AI learning sets, the potential for mass shutouts from the workforce and a generation that never got to learn critical thinking skills, because AI did it all for them, I keep coming back to that thing about the knife – and one thing more: You can't be part of the solution if you're part of the problem.

You don't have to use AI. And if you do, maybe you should stop.

Nancy Kaffer is the editorial page editor of the Detroit Free Press, where this column originally appeared.