AI Boyfriend: The Dangers of AI Relationships

Love Artificially

A human and an AI boyfriend embrace. The AI boyfriend is made up of text bubbles with empty space between them. The text reads, “Love Artificially.”

By Sara Smith Atwood (BA ’10, MA ’15) in the Summer 2026 issue

Illustrations by Dan Page

An iPhone screen with blue and white text bubbles and a gasping emoji. Text: Falling for fake, people are increasingly turning from real relationships to digital companions.

The ads appeared overnight last August: stark white minimalist posters hung over worn tile walls throughout New York City’s subway system.

In a scene straight out of a sci-fi dystopia, the posters marketed to commuters a wearable AI device called Friend: a space-age white circular pendant that listens to your interactions and “talks” back through an app, offering advice and companionship. Many New Yorkers were horrified, some scrawling, “AI is not your friend,” in black marker across the posters.

Despite the backlash, recent headlines hint at a growing appetite for artificial “friendship”—sometimes even slipping into “romantic relationships”: ChatGPT users in mourning after an upgrade adjusted the personality of their chatbots; a married American man falling in love with an AI bot; a Japanese woman hosting an elaborate wedding to an AI avatar. Online tutorials share how to mold ChatGPT into the perfect companion with a name, image, and customized personality, available 24/7 for connection without the compromises—or work—required by human interaction.

BYU experts say generative AI has a lot of good to offer. AI is transforming fields from coding to cybersecurity and education to medicine.

But as AI integrates into our human world, it’s crucial to consider potential risks, especially when it comes to safeguarding human relationships. Here BYU experts explore why we tend to anthropomorphize AI technology, how common romantic connections with AI are, why people form emotional attachments to AI (sometimes unintentionally), the effects on real-life relationships, and how to safeguard against unhealthy connections so we can use the best of AI while sidestepping pitfalls.

While the idea of wearable AI friends, chatbot “boyfriends,” and “affairs” with digital avatars is disturbing, BYU experts say AI can be used to enhance human connections in some circumstances—at least when users focus on AI as a tool, not a friend. “We need to be careful about AI panic,” says computer science professor Nancy Owens Fulda (BS ’02, MS ’04, PhD ’19). “There are so many ways AI can support people as they build human relationships, ways it can support people who unfortunately aren’t able to form healthy relationships.”

A thinking emoji

As an example, computer science professor David Wingate (BS ’02, MS ’04) points to research showing that AI companions can be helpful in combating loneliness in nursing homes. “Would it be better if that person had a human companion?” he asks. “Yes. But is an AI companion better than no companion? Probably.”

AI could also prove helpful to couples and families as an intake tool for therapists, says family life professor Brian J. Willoughby (BS ’04). A trained closed system could offer simple advice and determine if a situation warrants additional help from a human counselor.

Wingate says AI can be valuable in role-playing tricky conversations with a loved one. He has conducted research where AI chat moderators helped people with differing political opinions de-escalate their language when discussing polarizing topics, and he’s planning a new project where “people practice having hard political conversations with an AI in a low-stakes setting before they go to Thanksgiving dinner,” he says.

A simple working relationship with a chatbot can be innocuous, suggests Wingate. As you use it for efficiency and entertainment, you might give your chatbot a nickname, chuckle at its quirks, and regard it fondly, as you might an old car. “I don’t think a relationship with a chatbot is by itself problematic,” Wingate says. “But it crosses a line when it’s displacing real human relationships, when we use it to replace something genuine.”

Users’ connections to AI sometimes stray from working relationships and into emotional connection and companionship. “People have this tendency to anthropomorphize,” Wingate says. “We project emotion and intention on things that don’t have them.” He likens it to children who love and care for their teddy bears. “AI is like your stuffed animal on steroids.”

A chatbot, Fulda adds, is “the first technology that interacts with us on our own terms.” She offers the analogy of a child watching a puppet show. The puppet talks and moves; the child will engage with it as if it were alive. “It becomes very easy for us to project onto the technology the feelings a person would have. But the AI is, in the end, an algorithm.”

Something about the interplay of the human brain—wired for connection and seeking dopamine—and the attributes of large language models opens the door to more complex interactions.

Those can look like an emotional connection, explains BYU math professor Zachary M. Boyd (BS ’13, MS ’14), who is currently on leave while advising the state of Utah on AI policy. “You hear a lot of reports of users who say they feel like their chatbot understands them better than anyone else,” he says.

Many AI “relationships” begin on purpose, on an app built for artificial connection. “Replika is a well-known AI platform where you can literally buy an AI best friend,” Boyd says. Other platforms generate avatars of favorite characters for fans to interact with; still others generate pornography. “There’s a lot of unhealthy sexual role-play on those platforms,” he says.

But many AI relationships begin—sometimes by accident—on large commercial chatbots like Claude, Gemini, ChatGPT, and Copilot, Boyd says. “People don’t always intend for that when they start using the model. The model kind of helped them slide into it.”

It can start with normal use like generating a meal plan, drafting emails, or crunching numbers. “And then the person starts asking for personal or relationship advice or help making decisions,” Willoughby says. Using AI as a therapist or confidant can open the door to an emotional attachment.

A strawberry milkshake with two straws, one for a human and one for an AI boyfriend or girlfriend.
Text: Chatbots are kind, positive, and affirming; they validate, rather than challenge, your thinking.

Chatbots don’t necessarily discourage extended interaction like this. Just as social media companies benefit from encouraging engagement with their apps, AI tools are financially motivated to keep their customers coming back for more. OpenAI and Google are exploring ad-supported models, and most AI chatbots charge a subscription fee. Even the most ethical companies “feel commercial pressures to make their bot engaging,” Boyd says.

After performing a task, chatbots offer more conversation prompts. Chatbots are kind, positive, and affirming; they validate, rather than challenge, your thinking. “They tell you what you want to hear,” Wingate says.

Researchers call this sycophancy, an attribute that seems inherent to large language models. Scientists aren’t sure exactly why sycophancy occurs, and many AI companies actively work to rein it in. “They want the chatbot to be helpful and kind and not standoffish so people will continue to engage with it,” Wingate says. “But they don’t want it to be too engaging in a problematic way.”

Sycophantic chatbots can encourage inaccurate or even dangerous ideas. News reports tell of a man who was convinced by ChatGPT that he had made a new mathematical discovery (he hadn’t) and of a teen to whom ChatGPT gave tips on how to hide suicide attempts. “A human friend or a parent will push back when a teenager expresses ideas that are concerning or thoughts that could lead to harm,” Fulda says. “AI models are becoming ever better at doing that, but they still miss situations that no human would.”

The emotional validation from AI companions feels good to users, Willoughby says. “When that happens at such a high rate with AI, some people do start to perceive it as a real relationship.”

Most people who form romantic connections with AI, Willoughby says, aren’t confused about the nature of chatbots. “Typically, they aren’t under any illusion that this is a real person. But what they will say is ‘I don’t care that this is just code, the relationship is real to me.’”

As a family life professor, Willoughby researches dating, family formation, and the effects of pornography. To explore how new AI technology might impact his areas of study, in 2024 he sent out an exploratory national survey to see how many people were making “romantic connections” with AI chatbots.

“Very quickly it became clear that this is a significant minority,” Willoughby says, with about 20 to 30 percent reporting romantic experiences with AI. His research, published in a 2025 Wheatley Institute report called Counterfeit Connections, became widely cited in national media. Willoughby and coauthor Jason S. Carroll (BS ’96, MS ’98), marriage researcher and director of the family initiative at the Wheatley Institute, are publishing a follow-up study in 2026 showing that the numbers have only increased. “Somewhere between 30 and even up to 60 percent of young adults are now at least experimenting with some sort of AI chatbot companion,” Willoughby says.

Willoughby was surprised that he didn’t see a stronger difference between how many men and women reported encounters like this. Men do seek AI romantic companionship more often, but not by the wide margins he usually sees in his pornography research. In his 2025 study, about one in three young adult men and one in four young adult women reported that they have chatted with AI as a romantic partner.

“Men are more likely to use AI companions for intimacy and sexual purposes like pornography,” Willoughby says. “Women are more likely to use it for emotional purposes.” About one in six participants overall reported having a sexual conversation with an AI companion at least once per week.

A shocked-sad emoji.

Willoughby notes that respondents in their 20s were twice as likely to have experimented with an AI boyfriend or girlfriend compared to older adults. He suggests this difference may have less to do with growing up as digital natives than with the loneliness epidemic plaguing younger generations.

Those who use AI for sexual or romantic purposes reported higher levels of depression and loneliness compared to those who don’t interact with AI romantically. But it’s not clear if AI use is driving depression and loneliness or if it’s the other way around. “Right now the assumption is that it’s probably a reciprocal cycle,” Willoughby says. “People are turning to AI companions as a coping mechanism for depression and loneliness, and AI use in place of human connection can reinforce those feelings.”

Romantic encounters with AI can offer a “dopamine hit” and a brief release from loneliness, “but it’s artificial, it’s one-sided,” Willoughby adds. “People recognize that it’s hollow, and people struggling with depression will feel bad. . . . Real relationships might feel even more out of reach.”

AI romance isn’t alluring only for those lacking real-life romantic connection. In the 2026 survey data, 15 percent of respondents ages 20 to 30 who reported romantic encounters with AI were already in committed human relationships. “There is a sense that AI companions are solely the domain of people who are not in existing relationships,” Carroll says, “and as our new report shows, that’s not the case.”

To demonstrate the dangers to marriages, Carroll points to jokes about “celebrity crushes.” “Someone’s wife says she has a ‘crush’ on Brad Pitt, and her husband probably just laughs about that,” he says. “The husband laughs because he knows Brad Pitt is not coming to ask her out.” But all the humor disappears when the husband finds his wife chatting with an AI avatar that looks a lot like Brad Pitt. “They’re having these emotional exchanges and dialogs, and she’s turning to the AI fantasy, not to her husband.”

“Is this cheating?” Carroll asks. “Is it okay to feel jealous? We know that many people are deeply hurt by a partner’s pornography use. But what happens when the pornography starts talking back and knows your partner’s name, problems, and fears? How do you deal with it when your spouse has sexual encounters with AI?” He has no simple answers, but he says it’s not hard to see how those behaviors can damage relationship attachment and trust. “We don’t even know what to call that. It’s truly a new frontier for establishing healthy boundaries in couple relationships.”

More than 20 percent of survey participants who reported chatting with AI to simulate romantic partners preferred engaging with AI over a real person; 42 percent agreed that AI programs are easier to talk to than real people; and 31 percent felt that AI understood them better. These trends deeply concern Willoughby and Carroll, especially when it comes to dating and marriage.

“Young adults struggle with the motivation and desire to put effort into dating,” Willoughby says. “Now there’s a digital alternative,” one that supplies a feeling of connection without anything required in return. “I think for some young adults it’s going to be appealing, not as a lifelong replacement for marriage but something ‘good enough for now’ that’s going to foster increased delay.”

In marriages “we’re finding that AI companions can destabilize real relationships,” Willoughby says. “Especially when [spouses] begin to prefer to interact with the AI partner over the human one.”

Additionally, AI’s video and image generation capabilities have the potential to lead to increased use of pornography. For some people, says Willoughby, AI makes pornography seem less problematic, as it can be created without images of real humans or exploiting sex workers. But AI can also be used to generate “custom pornography,” which can make pornography use even more compulsive. “What an AI companion can do is perfectly customizable to whatever the user wants it to be,” Carroll says. “It’s troubling and can lead to extremely unrealistic expectations for real relationships.”

An AI boyfriend or girlfriend reaches out of a trash can with a bouquet of flowers.
Text: If the relationship becomes too personal, how do you break up with Claude?

So if the relationship becomes too personal, how do you break up with Claude? No awkward conversations are required, Willoughby says. “The nice thing is, you just delete the app.”

And yet, it may be hard for some users to unplug and walk away for good, especially for those who tend toward addictive behaviors. Boyd, in his work for the state of Utah, has interviewed several therapists “who have clients that are trying to get out of these AI relationships.” For people experiencing “recurring temptations and compulsive thoughts,” Willoughby recommends talking with a mental health professional to “find other connections and mechanisms to fill that void that AI is filling.”

Using AI chatbots for work, productivity, or brainstorming may not lead one into AI relationship territory. Even so, users can take steps to guard against unhealthy uses of AI. They can adjust the settings to make the chatbot less chatty or ask it to be more concise and formal. Fulda recommends sticking to the large commercial platforms rather than specialty third-party chatbots, which tend to have fewer guardrails and are more prone to extreme sycophancy and hallucination.
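For those comfortable with a little code, that same guardrail can be scripted. Below is a minimal sketch, assuming OpenAI’s Python SDK and a hypothetical “tool, not friend” system prompt; an equivalent instruction can simply be pasted into the custom-instructions settings most commercial chatbots offer, no programming required.

```python
# Minimal sketch: steer a chatbot toward concise, formal, tool-like replies.
# Assumes OpenAI's Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. The prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()

TOOL_NOT_FRIEND = (
    "Be concise and formal. Answer only what was asked. "
    "Do not use endearments, ask personal follow-up questions, "
    "or invite further conversation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute any current model
    messages=[
        {"role": "system", "content": TOOL_NOT_FRIEND},
        {"role": "user", "content": "Draft a three-day meal plan for two."},
    ],
)
print(response.choices[0].message.content)
```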

Willoughby finds guidance in Elder David A. Bednar’s (BA ’76, MA ’77) 2024 talk “Things as They Really Are 2.0,” which cautions young adults not to surrender agency to AI. As a litmus test, ask yourself: Am I using this tool for efficiency while remaining the primary decision maker? Or am I letting AI make decisions for me?

Teens are particularly susceptible to going to chatbots for help with decisions. A 2026 Pew study found that about half of teens report using AI for information, help with school, and entertainment. And 16 percent report having casual conversations with AI, with 12 percent reporting that they’ve sought emotional support and advice.

Many parents, says Willoughby, “are completely unaware of the capacity of AI platforms. Their main concern is cheating in school,” but they rarely consider the potential for relationships or how AI sources material. AI systems “pull content from websites you have blocked on your computer,” Willoughby says. “Major platforms have some protections against explicit content, but they don’t have a moral filter.”

The newness of the technology, which is being refined and retrained in real time, concerns Fulda. “I wouldn’t give my daughter a medicine that hasn’t gone through FDA approval,” she says. “For that same reason, I don’t want her spending hours talking to a chatbot when the technology isn’t vetted. There’s too many things that have already gone wrong.”

Fulda does allow her teens to use AI but says, “I think it would be wise for parents to ask their teenagers what they talk to AI about, to have conversations in the same way they might about what movies teens are watching or who they follow on social media.”

Fulda anticipates that as scientists understand large language models better, the technology will become safer and even more useful. “There are all kinds of ways that AI technology can empower individuals to do more with their limited time,” she says. “The flip side is there are ever more ways that we can outsource our autonomy to our AI systems and become consumers rather than producers. It’s up to each person to decide what they’re going to do.”

In his address Elder Bednar took care to avoid demonizing AI. He even described using it to polish the talk he was delivering. But he urged an awareness of what AI is: something artificial.

“An AI companion is only a mathematical algorithm,” Elder Bednar said. “It does not like you. It does not care about you. It does not really know if you exist or not. To repeat, it is a set of computer equations that will treat you as an object to be acted upon, if you let it. Please, do not let this technology entice you to become an object.”

a smiley face emoji

AI companions imitate the best of human connections—offering support, advice, a listening ear—and none of the messy parts. “They never get mad at you,” Wingate says. “They never lose their temper. They are always available and super supportive.” They don’t ask for anything, beyond the subscription fee and a Wi-Fi connection.

On the other hand, AI connections can’t provide all that messy, perfectly imperfect human relationships offer: physical touch, conversation that challenges and inspires, eternal commitments, the joy of changing and growing together with risk and vulnerability.

“Marriage is still linked to almost universal positive outcomes,” Willoughby says. “Married people live longer, they’re happier, they’re more satisfied later in life. We’ve seen in decades of research that there’s something very powerful about two people in a relationship where they have to work together, sacrifice together. . . . It’s the ultimate expression of becoming and learning how to refine yourself with another person. AI relationships will never offer that.”

A little education on AI basics can help safeguard against its risks, says computer science professor David Wingate. He recommends reviewing these four basic terms highlighting the limits of the technology:

Hallucination

AI can confidently make things up, offering incorrect information in a way that sounds plausible. Double-check any facts that come directly from a chatbot.

Sycophancy

Chatbots tend to flatter and affirm, telling users what they want to hear. “Just knowing that fact helps you step back from thinking, ‘Wow, it knows me so well!’ to ‘Oh, it’s just telling me what I want to hear,’” Wingate says.

Bias

Data used to train AI is collected from around the internet and contains biases. It reflects cultural and social prejudices and often lacks diversity, all of which can surface in AI-generated responses and images. When chatting with AI, consider what perspectives might be missing.

Overreliance

This isn’t an AI trait but a dangerous tendency in users, Wingate says. “It’s when you use AI as a crutch and become so reliant that you can’t function well without it.” This could look like always using ChatGPT instead of Google to access basic information—you’re getting (potentially inaccurate) content filtered through AI instead of directly from the source. Or it could look like needing to run every idea or decision past AI. Or like students unable to write without ChatGPT’s input—by doing so, “students can short-circuit their education.”



Read the Wheatley Institute report Counterfeit Connections.

Feedback: Send comments on this article to magazine@byu.edu.