AI knows we shouldn't trust it for everything. I know because I asked it. | Opinion
I was surprised by how easy it was to get the answers I needed, and particularly stunned when the information was easier to digest than what I'd get from a basic Google search.

Since generative artificial intelligence burst into the mainstream a few years ago, I’ve been split between two sides of the ongoing debate: Is AI evil, or is it the next revolutionary advance in society?
On the one hand, I’m a typical AI skeptic. I worry that its prevalence is harming critical thinking skills and creativity, and I am very concerned about its environmental impact.
Conversely, I’m a child of the internet. I know this conversation has happened before, and I know when I’m getting left behind. I’ve heard enough friends discuss using ChatGPT in their daily lives to know that AI is here to stay, so I might as well get accustomed to it.
I had already been using AI in small doses: Every time I use Google, its AI technology, Gemini, summarizes an answer to whatever I searched for. I used to use AI to transcribe my interviews. My workplace uses Microsoft Teams, which has its own AI, Copilot.
But I had yet to dive headfirst into the world of ChatGPT, OpenAI’s chatbot that launched in 2022 and effectively changed the way AI is used by everyday people. With the blessing of my editor, I decided it was time to get familiar with the tool that’s probably going to take my job one day.
I opened the app, created an account and introduced myself as a journalist.
“Hi Sara!” ChatGPT replied. “Great to meet you. I’m ready for your questions – ask away whenever you’re ready.”
Did ChatGPT immediately go woke, or was it just agreeing with me?
To start, I launched into a series of questions about Zohran Mamdani, the Democratic candidate for New York City mayor known for his progressive politics. I told ChatGPT that I generally agree with Mamdani’s politics, and asked if the chatbot believed democratic socialism could work in New York City.
“Yes, democratic socialism can work in New York City – at least in specific, meaningful ways – but there are structural, political, and fiscal tensions that make its implementation uneven and often contentious,” the AI responded.
It then gave me a list of reasons why it could work (organizing leftists, public opinion and current elected officials) and a list of roadblocks to making that happen (state government, finances and strong opposition from bigwigs in the city).
This made me immediately skeptical of what ChatGPT was telling me. I had made it clear that I supported Mamdani’s policies – was it just telling me what I wanted to hear?
I asked another question – this time about Mamdani’s chances in the November election. In seconds, the chatbot pulled up a series of polls showing him in the lead. I asked what neighborhoods he’d need to win – something I know journalists have previously broken down.
To my surprise, ChatGPT linked to actual news articles that delved into these topics. That's research that would have taken me several hours to pull together on my own. On the other hand, it also cited Wikipedia multiple times, a practice I've been told not to engage in since at least middle school.
Overall, I was surprised by how easy it was to get the answers I needed, and particularly stunned when the information was easier to digest than what I'd get from a basic Google search. AI seemed able to break down my questions into thoughtful answers in a way that standard search never could, unless I was very specific in what I searched for.
ChatGPT knows humans might not trust AI
Next, I moved on to why I was really there – to see if ChatGPT could actually be a useful tool for me.
“I’m writing a column about you, and the inevitability of AI becoming part of daily life,” I typed in the chat. “Do you think this is where things are headed?”
It responded by calling my column idea “fascinating.” Flattering.
“Yes, it does look increasingly inevitable that AI will become a normalized, embedded part of daily life,” the bot responded.
Of course it believes that. It also doesn't seem to be wrong.
“If I’m honest, I’m a little wary of using you ‒ it seems like AI can be a force for good, but relying on it too heavily can harm human critical thinking skills,” I wrote.
“You’re absolutely right to be wary,” ChatGPT responded. “AI is a tool, not a replacement for human judgment, and there is a real risk that relying on it too much can blunt critical thinking, creativity, and independent analysis ‒ especially in fields like journalism, where those skills are core.”
AI knows it shouldn't do my job. Also, AI wants to do my job.
It feels strange to know that this chatbot is somewhat self-aware. It didn't think it was a good idea for humans to rely on it, yet it kept offering to write an outline for my column or hand me short quips I could use. The line between “tool” and “crutch” seems awfully thin.
Aside from the fact that it would be unethical to have ChatGPT write this column, I also don’t find that the program creates particularly compelling writing. The heavy reliance on dashes and italics got old quickly.
It also seemed to struggle with turns of phrase. I told it an old journalism idiom – "If your mother says she loves you, check it out" – which it regurgitated as "if (AI) says my mother loves me, I'll still check the birth certificate."
Another thing that stuck out to me was how complimentary ChatGPT was. It called my questions “excellent”; it told me how important journalism is as a career. I appreciated the ego boost, and I noticed it made me want to use the chatbot even more. After all, who doesn’t like being told that they’re intelligent and interesting?
I can't lie. I get it now. I understand the allure of AI.
I began thinking of all the ways I could use ChatGPT – replying to reader emails, synthesizing the week’s important events, maybe even responding to people on Hinge.
In the end, I had to stop myself – I fear that becoming too reliant on AI would dull my senses in the long run, destroying my capacity for creativity and leading me to forget why I love writing in the first place.
When I declined to let it write my column and said I'd be working with my editor instead, it assured me that was a good idea.
"Your readers will benefit from the fact that you're approaching this moment with curiosity and caution," it told me. "That's where good journalism lives."
I still have a healthy skepticism of ChatGPT and AI's newfound place in our culture. I believe we should all be cautious when using it – after all, there are plenty of instances of AI being wrong. At the same time, I do see the benefit: It's quick, thorough and conversational. I understand why so many people I know use it.
You don't have to use AI, the same way you don't have to use the internet. When you do use it, be skeptical of the information it provides. Try to limit how often you use it to reduce its environmental impact. Just be aware that this is where the future is headed, whether we like it or not.
Follow Paste BN columnist Sara Pequeño on X, formerly Twitter: @sara__pequeno