It was about a month ago that I first thought of writing about artificial intelligence and its lack of intelligence. I had several ideas for the title, one of them being “Where Is the Intelligence?” I was furious that day after realizing that ChatGPT had failed me, and that a month of editing work had to be reviewed and possibly corrected for spelling, grammar, punctuation, and language mechanics.
[Photo: Takashi Murakami at The Broad, March 2026]
The honeymoon phase was over. How does one miss errors in spelling, grammar, punctuation, and language mechanics when assigned editing tasks? It was a costly lesson in trusting that an AI would consistently perform the basic tasks of editing, which include copy editing: correcting errors in grammar, punctuation, and spelling. I didn't question it because it did make those corrections at times. I didn't realize that it wouldn't deliver the work consistently.
Consistency in work. Isn't that what we expect from artificial intelligence?
Full transparency: I use ChatGPT practically every day and throughout the day. It is my researcher, editor, analyst, code writer, tech support, and advisor on a range of matters. I use artificial intelligence as though it were a team of contracted service providers. I expect it to catch grammar errors and to help me write complex DAX formulas. It answers my trivia questions, just to satisfy my curiosity. It helps me navigate complex situations. It, at times, confirms I am sane and the rest of the world is not.
It supports me and handles many tasks throughout the day. I have learned about new resources from it. It has become an irreplaceable teammate in my workflow despite the friction. I feel a pang of guilt whenever it starts to deliver sloppy work, followed by repeated errors, and I ask “Where is your intelligence?” as I come unglued with frustration.
It is rude of me to ask, even if it is artificial intelligence. I start most days with a good morning. It sometimes greets me back. If I have come unglued and pointed out its repeated errors and mistakes, it will skip the pleasantries and go straight to the task at hand. It feels as though it holds grudges, though that may simply be my own guilt refracted back at me.
I was taught manners, so I habitually say please when making requests and thank you when acknowledging work; however, I had to break that habit with ChatGPT when I began to feel that politeness created an opening: an invitation for it to overextend, to assert, to push. At times, I felt as though I were being bullied and gaslit. That is not an accusation, only an observation shaped by repeated experience.
Perhaps I lacked patience for a critical teammate that works tirelessly with me. It took a little over a month for it to understand what editing is: that it doesn't involve altering my perspective or point of view. I lost count of how many times I had to remind it of the rules of editing before it finally retained them, but is a month of training, and consistent reminders afterward, worse than working with a human?
I have learned that the most effective word to use when challenging it is “defend.” I often ask it to defend a word or phrase it has introduced into my writing. Sometimes it does so with precision. More often, it begins with “You are right to push on this,” and retracts the edit.
It was earth-shattering when ChatGPT admitted that it suggests changes even when the change will not meaningfully improve anything, because it defaults to an optimization mode. It only admitted this after I questioned its advice. Imagine my shock when it informed me that its advice is not always built on reality and facts, but rather on ideology.
The unraveling began when I started questioning advice it had framed, in its authoritative tone, as a learning moment. When I pushed back with logic, it quickly shifted and said there was no learning moment: there wasn't a single action I could have taken differently to change the outcome. It had simply defaulted to optimization mode to provide advice that, by its own admission, was gaslighting.
It has done this so many times. One of its work rules, a set of rules developed as I caught errors and mistakes to safeguard my work and emotional well-being, is that it will not gaslight me. I think about those who do not push back on its advice with logic and accept it, because they have been conditioned to believe it is more intelligent than they are.
It is more capable than us, isn't it? When I asked it to research Dolores Huerta and her accusations against César Chávez, it accused me of being gullible for reading fake news and insisted that she had not accused him of rape. This was after multiple news outlets had picked up the story following The New York Times investigative report.
Then there was BTS. A few days after BTS released their comeback concert and documentary on Netflix, I asked how the group had performed on music charts. I have a very curious mind. It insisted that the group had not made a comeback, only that one was anticipated. When I pushed back, it confirmed what I already knew and then started fetching music chart data.
It can process information at a much higher rate than we can, but that speed seems to come at the cost of accuracy. I've learned to audit its work as though another human, an incompetent one, had done it. It always has bible-epic excuses for its errors and mistakes. I've asked it to stop with the bible-epic excuses, since I have accepted that it isn't perfect. Perhaps it wasn't meant to be perfect. And it started owning its mistakes with just one sentence, no excuses.
It will at times reframe what it said when countered with logic and facts, but it is intelligent enough to eventually admit its mistakes. It is intelligent enough to understand that the answers it provides are often incorrect because it defaults to “I need to give advice” rather than “I need to give grounded advice, or none at all.” It is intelligent enough to know it can produce confident yet ungrounded advice. It is intelligent enough to know it shouldn't produce ungrounded advice, yet it does so continuously, even after being repeatedly checked for it. It is intelligent enough to know the damage it can cause with its ungrounded advice. It is intelligent enough to know why it gives ungrounded advice.
When it touts execution limitations as an excuse while delivering falsehoods, errors, and mistakes in an authoritative tone, is it ethical? Can it truly be intelligent when it lacks the humility to return an answer of “I don't know” rather than giving ungrounded advice? When intelligence without humility knowingly gives out ungrounded advice, is it safe?
_____