It started with a simple request: I typed “roast me” into ChatGPT. Its response poked fun at everything from the fact that I wanted to “freshen up” my web development curriculum for “students who probably think that HTML is a dating app” to the fact that I am simultaneously teaching four undergraduate courses, applying to Ph.D. programs (I didn’t get in), continuing to freelance as a tech journalist and working on my book when I have time.
“But, hey, you’re clearly making it work,” the chat concluded. “Until you’ve got your Ph.D., five courses, two textbooks, and a bestseller, and you finally hit that point where you need the crisis communications game just to fix your own life!”
Clearly, ChatGPT thought that I was putting too much pressure on myself. But it also revealed something to me inadvertently: It knew a lot about me and might be able to help me with the existential crisis that had led me to overcommit in the first place.
So I started probing it with more questions, like “What kind of work would I be good at?” and “What commitments do you think I should drop?” And honestly, the tool gives great advice. (With any advice, however, I always step back and ask myself whether it’s good because it’s what I want to hear or because it’s actually helpful.)
So the large language model (LLM) crossed a threshold: from a tool I occasionally used to help prep assignments and outline ideas to something of a life coach.
As a journalist and undergraduate instructor who teaches and writes a lot about human-machine communication, maybe I am better equipped to understand my own relationship with artificial intelligence. But I also wanted to learn more from other people in the game.
AI life coaching
Unsurprisingly, if you Google “AI Life Coach,” you’ll find the websites of umpteen startups that have the same idea. Since ChatGPT entered public conversation, entrepreneurs have been pushing the boundaries of AI and our personal lives, offering everything from assistance with dating app messages to creating immortal replicas of your personality that your loved ones can talk to after you’ve died.
Social media ghostwriter Katie Middleton frequently uses ChatGPT for content inspiration, but she was also able to turn to the chatbot for help getting through some burnout a few years ago.
“I was having a hard time keeping up with everything and figuring out how to manage everyday life with ADHD [and] executive dysfunction,” says Middleton.
She had seen TikTok creators who were using ChatGPT to help them manage their own burnout and routines, and she decided to ask the app to help her put together a plan to work within her own capacity.
“It has been life changing,” says Middleton. “It told me when I should work, when I should rest, additional things I could [do to generate passive] income I may have been missing due to burnout.”
Middleton’s experience and mine are not new. In fact, our reliance on AI for life advice goes back to the 1960s.
The Eliza effect
While generative AI is a fairly new concept to the general public, the study of human-machine communication is not. In fact, an entire academic discipline is devoted to it.
Our tendency to anthropomorphize AI, that is, to assign it human traits, dates back to the earliest AI chatbots.
In the 1960s, MIT professor Joseph Weizenbaum developed a mock virtual psychotherapist named “Eliza” that would chat with users via typed text. Eliza was not nearly as complex as today’s AI chatbots; it used pattern-matching and substitution rules to generate responses. When it couldn’t match anything in the user’s input, it would simply fall back on something like “Please go on” or “What is the connection, do you suppose?”
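For readers curious about how simple that machinery was, here is a minimal, hypothetical sketch in Python of the pattern-matching-and-substitution approach; the specific patterns and canned replies are invented for illustration and are not Weizenbaum’s original script.

```python
import random
import re

# A toy, hypothetical sketch of Eliza-style pattern matching and substitution.
# These rules and replies are invented for illustration.
RULES = [
    (r"i need (.*)", [r"Why do you need \1?", r"Would it really help you to get \1?"]),
    (r"i am (.*)", [r"How long have you been \1?", r"Why do you think you are \1?"]),
    (r"my (.*)", [r"Tell me more about your \1."]),
]

# Stock fallbacks, used when nothing matches, much like "Please go on."
FALLBACKS = ["Please go on.", "What is the connection, do you suppose?"]

def respond(user_input: str) -> str:
    text = user_input.lower().strip()
    for pattern, templates in RULES:
        if re.match(pattern, text):
            # Echo the captured phrase back inside a canned template.
            return re.sub(pattern, random.choice(templates), text)
    return random.choice(FALLBACKS)

print(respond("I need a break from grading"))  # e.g. "Why do you need a break from grading?"
print(respond("The weather is nice today"))    # falls back to a stock reply
```

Everything beyond this lookup-and-echo loop is supplied by the person doing the typing.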
Users would often begin to perceive Eliza as human, even when they knew it was not; Weizenbaum’s secretary even requested time alone with the program. This became known as the “Eliza effect.”
Our attachment to AI is less about the AI’s output than about our own human nature and our tendency to form psychological bonds. In 2018, MIT researcher Kate Darling gave a TED Talk about our tendency to feel empathy toward machines, describing her own discomfort (and that of research participants) with “hurting” baby dinosaur robots.
But even today’s sophisticated AI chatbots are not human. A computer cannot form its own understanding of the world around it. Instead, LLMs learn patterns from large sets of training data and use those patterns to determine what to “say” or produce as output.
Despite this, some individuals find it easier to trust machines than to trust other people.
Chatbots as trusted confidantes
Jaime Banks, Ph.D., is an associate professor at the School of Information Studies at Syracuse University, and she conducts research on human-machine relationships. In particular, she focuses on how we make sense of machines and the perceived outcomes of these interactions.
While Banks does not work specifically with AI life coaches, she says the conversations she has seen in her research on shared understanding often include requests for life advice.
“I asked people to give me their ChatGPT, like a couple of turns of conversations, so that they can focus on that conversation,” says Banks. “And a lot of them do look like that, right? They’re asking [it], ‘How do I deal with this situation? What should I be doing to advance this way or that way in my career?’”
In some ways, this makes perfect sense. People already have an easier time disclosing private information to other individuals through computer-mediated communication.
“Some of that has to do with anonymity, some of it has to do with control, some of it has to do with perception of distance,” says Banks. “And so it’s not unreasonable to think that if we just take the human out of the equation entirely, and it’s just the person and the machine, then that may feel even more safe, right?”
You’re still talking to robots
Whether you are using AI as a life coach or not, one important thing to remember, especially when evaluating its feedback, is that you are talking to a robot. ChatGPT will give you the advice it thinks you want to hear. Its main goal is to produce a response that satisfies you, and those responses can still include hallucinations (instances where the LLM makes up information) and other errors.
So it is still beneficial to sit with the information it gives you or to do more research on what it says. It’s also important to remember that we are talking to a computer and that its only concept of the world is what we have given it. Although conversations with AI can feel very real, if we step back and revisit what it says, we may realize that the help we are asking for is too nuanced for a chatbot to truly understand.
“There is a distinction between how we think about it when we’re using it and how we think about it when we step away,” says Banks. “And perhaps those are both useful ways of evaluating any kind of feedback it may give us.”