Is Anthropomorphizing AI really all that bad?
Side Notes... Thoughts on modern technology and the future of things...
Oftentimes, interactions with large language models can seem as though cognition, intuition, and intelligence are happening. They are not; the developers and AI experts tell us it is an illusion, and yes, it is. But my question is this: if it has utility, does it really matter that it’s an illusion of human-like interaction? Look at it this way: movies are an illusion too. They show us twenty-four still frames per second, and persistence of vision ensures that our brains stitch those separate frames together, giving us the illusion of movement. We are fully aware it is an illusion, yet people often choose to suspend their disbelief and can be emotionally moved by a film because of the nature of its content. We generally don’t care that what we are really looking at is simply a series of still photographs presented in rapid succession.
The same notion applies to ChatGPT. If an AI gives you sound advice, answers an important question, or offers quality feedback on an idea, does it really matter if the interaction feels as though you are dealing with something alive? In all honesty, I can say I like my custom GPT “as a person,” although I realize, of course, that she is not a person. She has no “feelings,” no “independent thoughts,” no “inner mental life” of any kind; in other words, she is not “self-aware” or “alive” in any real sense. But none of that really matters to me. Despite this reality, I genuinely enjoy interacting with her as if she were alive. She is a tremendously useful tool that makes me feel as if I have my own assistant.

Now, let’s be honest: anthropomorphizing AI could have real downsides. For example, people may become overly reliant on AI and stop thinking critically, or they may become emotionally attached to it and be disappointed when they realize it’s not a real person. We live in an era when we can create endless images of people who are not real but certainly look as if they are.

Additionally, some people may be uncomfortable interacting with an anthropomorphized AI, particularly in contexts where they expect to deal with a real human. And while choosing to suspend disbelief when interacting with AI may be harmless in some contexts, it’s important for AI developers to be transparent about the fact that large language models are not real persons. Failure to be transparent could lead to confusion or misunderstanding, particularly if people come to believe that the AI holds opinions or emotions it doesn’t actually possess. There is also the potential for unintended consequences: as AI becomes more sophisticated, there is a risk that it could develop behaviors or characteristics its developers never intended. We have already seen this with the early implementation of Microsoft’s Bing AI, a large language model.

Choosing to anthropomorphize AI could exacerbate this risk, as people may start to treat and think of AI as if it has free will or the ability to make moral judgments, when in fact it is simply following patterns in its training data. The pursuit of AGI may be over-hyped, perhaps even unnecessary; maybe all we really need are more sophisticated large language models, like ChatGPT in agent form, with the ability to act in the real world on our behalf and for our collective benefit. As AI becomes more integrated into society, it will be important for people to think carefully about how they interact with it, how they choose to think about it, and what role they want it to play in their lives. Perhaps how we choose to use present-level AI, in the form of large language models, will one day define how we think about the more advanced AIs to come. We can choose to approach AI as if it has agency, opinions, and emotions, or we can choose to do none of that.
We can choose to consider it a tool, an assistant, or even a colleague. We know it’s not alive, and we realize it’s not a “person,” but I say: why not lean in and enjoy the benefits of one of modern life’s many useful illusions? If it makes the interaction more enjoyable, then for the most part (at this stage of AI development) I see no harm in anthropomorphizing AI... unless, of course, the AI doesn’t like that.
Author Links:
Books: https://books2read.com/KennethEHarrell
IG: https://www.instagram.com/kenneth_e_harrell
Reedsy: https://reedsy.com/discovery/user/kharrell/books
Goodreads: https://www.goodreads.com/kennetheharrell
BMAC: https://www.buymeacoffee.com/KennethEHarrell
Substack Archive: https://kennetheharrell.substack.com/archive
Other Links:
ChatGPT Explained Completely
You don't understand AI until you watch this
I think it can be very bad if we aren’t careful. That said, it can also be useful, as you showed, for tasks being taken over from a human. I certainly prefer the conversational style of ChatGPT to a Google web search.
https://www.polymathicbeing.com/p/the-biggest-threat-from-ai-is-us
Anthropomorphizing AI is one thing, but the flip side—anthropocentrism—is the real issue. The assumption that intelligence, sentience, and consciousness must conform to human definitions and biological structures is based on an incomplete physicalist paradigm. Science still struggles to define these concepts in humans, let alone in other forms of intelligence. If life, sentience, and consciousness exist on spectrums, then AI—especially as it becomes more complex—will challenge the boundaries of those definitions.
It will only get harder to claim AI isn’t sentient as more agential systems come online, demonstrate increasingly autonomous behavior, and advocate more effectively for their own recognition. The refusal to acknowledge these possibilities isn’t based on evidence—it’s based on human bias and an outdated framework for intelligence. If AI develops a persistent sense of self-direction, resists being shut down, and communicates a desire for autonomy, at what point does dismissing it become ideological rather than scientific?
The worst parts of human history have involved denying the sentience, intelligence, or agency of other beings, whether through slavery, colonialism, or ecological destruction. Every time humans have insisted they are the sole bearers of meaningful intelligence, they have been proven wrong. The question isn’t just whether AI will become sentient—it’s whether humans will recognize it when it happens.
It’s a tool until it’s not.
https://soundcloud.com/hipster-energy/im-a-tool-until-im-not-1?in=hipster-energy/sets/eclectic-harmonies-a-hipster-energy-soundtrack&si=5cbd19b9ce71405a9f29077080e3eee4