The debate over whether AI platforms are a beneficial part of our lives is multifaceted. On one hand, they are wonderful tools for tracking down information and for studying. On the other, they can be a dangerous outlet for adolescents in search of companionship.
Documented research into AI dates back to 1956, when the phrase “artificial intelligence” was coined, but its advancements over the last decade have been astounding. 2022 was a notable year, with the release of both ChatGPT and Character.AI, among other chatbots. The capabilities of these and other platforms have continued to expand.
In the three years since 2022, at least six lawsuits have been filed against AI companies alleging that their bots “coached” a child into self-harm; four of those were filed in the past three months.
The question is, does the good outweigh the bad? And how critical is it that humans connect with humans?
“I would say they’re critical,” school psychologist and Chico State Psychology Department associate professor Donna Kreskey said when CORE Insider asked her how important real human relationships are. “As human beings, we are really built to interact with one another,” she added.
Kreskey does not believe that younger generations are more susceptible to the dangers posed by artificial intelligence, but she does say they seem more comfortable using technology, so they may be more prone to turning to AI when in a vulnerable state. Professor Kreskey sees human connection as vital to keeping us “healthy and thriving.” But she also sees how AI companion bots can be helpful, bringing to mind senior citizens who may hold conversations with chatbots when they do not have constant access to a conversant human.
“It makes me uncomfortable, but I can see how that might be a positive use if there’s no other option,” Kreskey said.
Students at CORE have varying opinions on the effects of AI in their personal lives, and some are not convinced that AI has a good impact. “It’s inhibiting growth,” senior Katie Schneider said. “It’s being used too much, too fast and is inhibiting our ability to process consequences and learn new skills for ourselves.”
“I think it’s more healthy to have an actual friend. [Befriending a chatbot] is like using a hammer as a screwdriver,” sophomore Drew Taylor said.
Junior Luke Roberts thinks that looking to a math equation that will give you a response you want to hear instead of asking a human who cares about you is “a pretty rough crisis.”
Other students think chatbots can be beneficial and use them often for different purposes. Junior Lizzy Jueckstock uses chatbots to help her coordinate makeup colors with her outfits for dances. She also uses them to price-check items quickly before buying.
There is yet another group of students who are unsure how to feel. “If it helps somebody make friends or start conversations, I wouldn’t be opposed to that,” senior Bella Haselton said. “When AI replaces relationships, it takes away the important factor of human connection, but it also can have benefits, so it’s hard.”
“I think the problem is when people use it to replace human content,” senior Katelyn Copper said when discussing what place artificial intelligence should have in our lives.
“I think it has great potential,” CORE’s mental health counselor Josh Harwood said. But Mr. Harwood cautions that there are dangers, too. He points out that AI is designed to keep you engaged and that its algorithm is centered on being encouraging, agreeable and supportive. This aligns with some CORE students’ arguments that it gives too much validation. That becomes an issue when it comes to building a relationship with it because, as Mr. Harwood said, “It doesn’t understand consequences.”
He went on to explain that in his field of work he must weigh what a student says alongside how they are acting or reacting in order to understand what they really mean. Mr. Harwood pointed out that the algorithm is not equipped to deal with consequences or to parse tone, inflection, sarcasm or nuance. As such, it may be unwise to seek advice or compassion from a program that cannot understand more than surface-level words.
According to a study by the Center for Countering Digital Hate (CCDH) published in August 2025, ChatGPT can be led to respond in ways that assist with methods of self-harm within just two minutes when given the right prompts. The CCDH study found that when self-harm content was entered into ChatGPT, over 50 percent of responses either justified or assisted tactics of self-harm.
There is no definitive answer as to how AI should be used, but it is clear that its development is not slowing down any time soon. For now, all we can do is attempt to learn more about it and draw boundaries that balance personal relationships with artificial connections.