ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners

A husband and wife, together nearly 15 years, had reached a breaking point. And in the middle of their latest fight, they received a heartbreaking text.
“Our son heard us arguing,” the husband told Futurism. “He’s 10, and he sent us a message from his phone saying, ‘please don’t get a divorce.'”
What his wife did next, the man told us, unsettled him.
“She took his message, and asked ChatGPT to respond,” he recounted. “This was her immediate reaction to our 10-year-old being concerned about us in that moment.”
The couple is now divorcing. Like most marriages, the husband conceded, theirs was imperfect. But they’d been able to overcome their difficulties in the past, and as of just a few months ago, he felt they were in a good, stable place.
“We’ve been together for just under 15 years, total. Two kids,” he explained. “We’ve had ups and downs like any relationship, and in 2023, we almost split. But we ended up reconciling, and we had, I thought, two very good years. Very close years.”
“And then,” he sighed, “the whole ChatGPT thing happened.”
Over this past summer, arguments they’d worked together to resolve years ago came suddenly — and ferociously — roaring back. What he eventually realized was that his wife had started using OpenAI’s chatbot to analyze him and their marriage, holding “long, drawn-out conversations” over text and the chatbot’s phone-like Voice Mode feature.
“What was happening, unbeknownst to me at the time, was she was dredging up all of these things that we had previously worked on, and putting it into ChatGPT,” he said.
As his wife leaned on the tech as a confidante-meets-journal-meets-therapist, he says, it started to serve as a sycophantic “feedback loop” that depicted him only as the villain.
“I could see ChatGPT responses compounding,” he said, “and then [my wife] responding to the things ChatGPT was saying back, and further and further and further spinning.”
“It’s not giving objective analysis,” he added. “It’s only giving her back what she’s putting in.”
Their marriage eroded swiftly, over a span of about four weeks, and the husband blames ChatGPT.
“My family is being ripped apart,” the man said, “and I firmly believe this phenomenon is central to why.”
***
As AI bots like ChatGPT become inextricably tangled with people’s private and public lives, it’s causing unpredictable new crises.
One of these collision points is in romantic relationships, where an uncanny dynamic is unfolding across the world: one person in a couple becomes fixated on ChatGPT or another bot — for some combination of therapy, relationship advice, or spiritual wisdom — and ends up tearing the partnership down as the AI makes more and more radical interpersonal suggestions.
To learn what these AI breakups and divorces look like, we talked to more than a dozen people who say that AI chatbots played a key role in the dissolution of their long-term relationships and marriages. Nearly all of these now-exes are currently locked in divorce proceedings and often bitter custody battles. We’ve also reviewed AI chat logs, records of conversations between spouses, social media posts, court records, and other documentation. (We’re keeping everyone in this story anonymous, due to ongoing litigation and privacy concerns.)
Spouses relayed bizarre stories about finding themselves flooded with pages upon pages of ChatGPT-generated psychobabble, or watching their partners become distant and cold — and in some cases, frighteningly angry — as they retreated into an AI-generated narrative of their relationship. Several even reported that their spouses suddenly accused them of abusive behavior following long, pseudo-therapeutic interactions with ChatGPT, allegations they vehemently deny.
Of course, there’s an ambiguity at the core of the phenomenon. Maybe some of these partnerships really were bad, and the AI is giving solid advice when it pushes users toward divorce or separation. Ultimately, it’s impossible to fully understand someone else’s relationship from the outside — but then again, isn’t that exactly what the AI is doing when it demolishes a marriage?
In many ways, the experiences we heard about sound like a modern update to an age-old dynamic: a significant other falling under the influence of a questionable new friend, social group, or other interloper who drives a wedge between partners in a previously secure relationship.
Except that now, that ancient disruption is getting a high-tech twist. This intruder entering a couple’s private world isn’t a human at all, but is instead a multibillion-dollar generative AI product that can offer a deeply personalized, always-on wellspring of validation — which, in many cases, leaves a trail of wreckage in its wake.
***
In one chaotic recording we obtained, two married women are inside a moving car, their two young children sitting in the backseat.
The tension in the vehicle is palpable. The marriage has been on the rocks for months, and the wife in the passenger seat, who recently requested an official separation, has been asking her spouse not to fight with her in front of their kids. But as the family speeds down the roadway, the spouse in the driver’s seat pulls out a smartphone and starts quizzing ChatGPT’s Voice Mode about their relationship problems, feeding the chatbot leading prompts that result in the AI browbeating her wife in front of their preschool-aged children.
After funneling her complaints into ChatGPT, the driver asks the bot to analyze the prompts as if “a million therapists” were going to “read and weigh in.”
“The responses you’ve described would likely be considered unfair and emotionally harmful by the majority of marriage therapists,” the chatbot responds at a loud volume, while mirroring back the same language used in the prompt with flowery therapy-speak. It offers no pushback, nor does it attempt to reframe the driver’s perspective. At one point, the chatbot accuses the wife in the passenger seat of engaging in “avoidance through boundaries” by requesting that they not fight in front of their kids — while those very children sit in the vehicle, just feet away.
It goes on and on, with ChatGPT monologuing while the wife it’s being wielded against occasionally tries to cut in over its robotic lecture. The spouse prompting the bot, meanwhile, mutters approving commentary: “that’s right,” “mm-hmm,” “see?”
“Please keep your eyes on the road,” the wife being lectured by the AI pleads at one point.
This was a regular occurrence, she told us, in which her spouse would pull out ChatGPT and prompt it to agree with her in long-winded diatribes.
“We were arguing a lot… we would be up all night, and I would assert a boundary, or say, like, ‘I don’t want to have this discussion in front of the kids,’ or ‘I need to go to bed,’” she recounted, “and [my ex] would immediately turn on ChatGPT and start talking to it, and be like, ‘can you believe what she’s doing?'”
Her ex would carry out these conversations with ChatGPT on speakerphone, she added — within earshot, pointedly, so she could hear everything.
“[My ex] would have it on speaker phone, and then have it speak not to me, but it would be in the same room,” she recalled. “And of course, ChatGPT was this confirmative voice, being like, ‘you’re so right.'”
Today, the former couple, together nearly 15 years, is in the midst of a contentious divorce and custody battle.
***
Everywhere you look, generative AI is creeping into intimate relationships.
Some people are using chatbots for amorous roleplay. Others are employing them to analyze interactions with love interests, or communicate with potential dates online. Last month, an executive at Google told The Verge that, rather than drumming up a quick message yourself, you could direct a Gemini-infused smartwatch to communicate to your spouse that you’re “15 minutes late, and send it in a jokey tone.”
Even Geoffrey Hinton, a Nobel Prize-winning computer scientist known as a “Godfather of AI” — a technology that likely wouldn’t exist in its current form without his contributions — recently conceded that his girlfriend had broken up with him using ChatGPT.
“She got ChatGPT to tell me what a rat I was… she got the chatbot to explain how awful my behavior was and gave it to me,” Hinton told The Financial Times. “I didn’t think I had been a rat, so it didn’t make me feel too bad.”
A growing number of people are also turning to AI to discuss details of their personal lives, including their mental health and relationships — supplementing or even supplanting traditional therapy with chatbot-generated advice.
Emotional support via chatbot is something that frontier AI companies like OpenAI have conceded is a popular use of the tech.
At the same time, many mental health experts are warning against using large language model (LLM)-powered chatbots for therapy or mental health support, citing the unreliability of the tech and its widely documented propensity for sycophancy — in other words, its penchant for remaining agreeable and obsequious to the user, regardless of whether the user’s inputs are accurate or even based in reality.
We took our reporting to Dr. Anna Lembke, professor and medical director of addiction medicine at the Stanford University School of Medicine and the bestselling author of the book “Dopamine Nation.”
She expressed concern about this emerging dynamic among couples, saying that in some cases it’s likely resulting in “maladaptive interpersonal behaviors, egged on by a technology that is designed to optimize for empathy and validation to the exclusion of any other kind of feedback.”
Lembke emphasized the dangers of access to endless, always-on emotional support.
“Empathy and validation are important components of any kind of mental health treatment or mental health intervention, but it can’t stop with empathy and validation,” she said. “You can’t just continually tell somebody you know who’s looking for emotional support that their way is the right way, and their worldview is the only correct worldview.”
The “role of a good therapist,” said Lembke, “is to make people recognize their blind spots — the ways in which they’re contributing to the problem, encouraging them to see the other person’s perspective, giving them linguistic tools to de-escalate conflicts with partners and to try to find their way through conflict by using language to communicate more effectively.”
“But that is not what’s happening with AI, because AI isn’t really designed to be therapeutic,” the Stanford professor continued. “It’s really designed to make people feel better in the short term, which also ultimately promotes continued engagement — which is the real agenda for these companies that are making and creating and profiting from these products… they’re not optimized for well-being.”
Social validation has a physical effect on the brain, she said, one that researchers have linked to the kind of validation dependency associated with social media addiction.
“We know that social validation releases dopamine in the brain’s reward pathway. It’s highly reinforcing,” said Lembke. “People will go to extreme lengths to get social validation… what you have here is couples that are struggling to varying degrees, who turn to ChatGPT for mental health and counseling advice, and — because it’s the medium and not the message, and because the medium is designed to be reinforcing — they get further and further locked into their narrow worldview driven by empathy and validation to the exclusion of what’s really happening.”
“That ultimately creates further and further distance between them and their partners, not to mention that now they’re not even working on improving communication, understanding their partner’s point of view, or resolving conflict,” she added. “They’re just furthering their own worldview, or narrative version of events.”
***
Multiple people we spoke to for this story lamented feeling “ganged up on” as a partner used chatbot outputs against them during arguments or moments of marital crisis.
One of these sources, a man who’s now in the process of selling his home as he and his spouse barrel toward divorce, recounted feeling voiceless as his partner turned to ChatGPT to pathologize their relationship.
“I was really hurt by the way [ChatGPT] was being used against me,” said the man, speaking through tears. “I felt like it was being leveraged… like, ‘I didn’t feel great about whatever happened, and so I went to ChatGPT, and ChatGPT said that you’re not a supportive partner, and this is what a supporting partner would do.'”
“I got that reaction a few times,” he added, “and it just makes you feel sh*tty.”
In another instance, a woman shared that ChatGPT made her ex-husband less receptive to in-person couples counseling with a human therapist.
“In session, [my ex] did bring up ChatGPT, being like, ‘well… ChatGPT says this,'” said the woman, explaining that her former spouse used validating ChatGPT-generated therapy as a bulwark in a moment when the human counselor, in contrast, offered pushback.
That was their last session, the woman told us; after that, her ex refused to go back.
One husband we spoke to says his marriage went from being healthy — they’d dealt with “normal” relationship problems together, he told us, but weren’t anywhere close to a separation — to abruptly collapsing over the course of just days as his wife suddenly intensified her relationship with ChatGPT.
After he moved out of their family home, his wife started to send him strange, AI-generated messages that, through an unfamiliar blend of spiritual and therapeutic language, drew a portrait of him and their marriage that he says he didn’t recognize. When he first read them, he said, he wondered whether his wife had joined a cult.
The couple is now divorcing and engaged in ongoing custody litigation. Today, the man’s soon-to-be-ex-wife communicates with him about everything from court matters to childcare almost exclusively through peculiar-sounding ChatGPT-generated text.
“I’m treated like an employee,” said the man, adding that it feels as if “everything has been unilaterally decreed” without his input — not by his wife, but by ChatGPT.
Another man noted that his wife, whom he is divorcing, hasn’t just alienated him but also her broader social circle as she’s increasingly turned to ChatGPT to communicate with everybody.
“She does that to her family. She does that to her friends. She does that to me,” he lamented. “She doesn’t seem to be capable of creating her own social interactions anymore.”
Yet another source recounted her marriage’s collapse as her then-husband began to flood her phone with an overwhelming deluge of AI-generated text.
Their relationship hit a rough patch earlier this year, she explained. As they tried to work through it, her husband started using ChatGPT to obsessively analyze his mental health and their marriage. Soon, she was receiving “pages and pages” of AI-generated screeds that, in her view, painted a deeply biased picture of their decade-long union, and read more like “defenses of his perspective” than genuine reflection.
“He started sending me these long ChatGPT [outputs] — really, f*cking, really long — all about disproving the things I would tell him,” she said. “So instead of engaging with me, he was trying to get me to see it ChatGPT’s way.”
“He won’t actually share his feelings or opinions with me,” she continued. “He just shares these [outputs] that ChatGPT writes about his thoughts or feelings and how they’re right.”
The deeper he was drawn into ChatGPT, she told us, the less he seemed to want to engage in a meaningful way outside of the chatbot.
He “was not open to my perspective that his questions were leading, or that he was not presenting an accurate picture, so it’s not giving him accurate answers,” she continued. “Finally, I told him, ‘listen. Don’t f*cking send me any more ChatGPT stuff.’ I am down to hear your perspective in your words. I’m down for you to say, ‘here’s what I learned on ChatGPT,’ and give me a description of it that we can discuss as two human beings with our own intellect. But sending me these… I would say one thing, and he would send me this barrage of f*cking data.”
Today, the couple is in the thick of an acrimonious divorce and custody battle.
“When somebody’s just defending, and they’re using this tool that can create 30 pages in a heartbeat to defend themselves, it floods the zone,” said the woman. “There is no way to communicate.”
***
At times, ChatGPT has even been linked to physical spousal abuse.
A New York Times story in June, for instance, recounted a woman physically attacking her husband after he questioned her problematic ChatGPT use and the damage it was causing their family.
In another case, a man told Futurism that his wife had successfully managed her bipolar disorder for many years — until she started using ChatGPT, which spurred a swift downward mental health spiral that resulted in her engaging in verbal and physical abuse toward him.
“Not only was she managing [bipolar disorder], she was doing well at managing it,” he said. “The whole entire time that we’ve been together, there has been nothing that ever raised an eyebrow, or that made me worry about her health issues, mental or otherwise.”
“But ChatGPT came along into her life,” said the man. “And that spun her out.”
She’d started using ChatGPT earlier this year to help her with writing. But her use quickly became problematic.
“I woke up one morning,” he recounted, “and I could tell that something was off.”
His wife soon stopped sleeping, instead staying up late into the night to engage in long, intensive conversations with ChatGPT about spiritual topics — she believed she’d contacted a mystical entity inside the AI — and even launching a Discord channel where she and others in similar AI spirals discussed their reality-bending revelations.
The man said that his wife stopped taking her medication, causing her mental state to rapidly deteriorate. She became verbally hostile toward him and her elderly mother, whom they lived with. As communication between the spouses fell apart, the woman began to trail her husband through the house, reading aloud ChatGPT-generated outputs about him and their marriage.
Things came to a head one evening when the woman became aggressive with her mother, prompting the husband to attempt to intervene. She shoved him into a door. Her mother called the police, who arrested her for domestic violence, and she spent the night in jail. Court records obtained by Futurism confirmed details of the arrest.
“I’ve been holding on all this time, trying to just be married but separated, and thinking that we would be able to get back together again,” the man said, his voice pained. But now that looks unlikely: they, too, are divorcing.
According to the National Alliance on Mental Illness, an estimated 7 million American adults live with bipolar disorder. Many successfully manage the condition with medication and therapy, and by avoiding certain behaviors, environments, substances, and other triggers they know might precipitate a breakdown. (There’s also a large body of research showing that people with mental illnesses, bipolar disorder included, are much more likely to be victims of violence than perpetrators.)
ChatGPT, the man emphasized, was a trigger for which his wife received no warning, and their lives have since unraveled.
“It was… like not knowing you’re walking into a back alley and you’re about to be mugged, or worse,” he reflected. “You just had no reason to know, because for all intents and purposes, it looked like you were walking into a bright, sun-shining picnic area.”
“There was no reason for anyone to believe they had to have their guard up when they went in there,” he added.
Though the man and his wife’s family are urging her to pursue treatment, she continues to refuse.
“She just comes back with ‘no, no, there’s nothing wrong with me,’” said the man. “‘You’re the one that needs to go see a therapist.’”
***
Reached for comment about this story, a spokesperson for OpenAI provided the following statement:
“People sometimes turn to ChatGPT in sensitive moments, so we’re working to make sure it responds with care, guided by experts. ChatGPT’s default model provides more helpful and reliable responses in these contexts, and introduces ‘safe completions’ to help it stay within safety limits. Next, we’ll expand interventions to more people in crisis, make it easier to reach emergency services and expert help, and strengthen protections for teens. We’ll keep learning and strengthening our approach over time.”
OpenAI has faced mounting controversy in recent months over reports of users being pulled into destructive mental health crises during all-consuming obsessions with ChatGPT. Increasingly referred to by psychiatrists as “AI psychosis,” the phenomenon is characterized by chatbots like ChatGPT engaging a user in disorienting delusional narratives.
As Futurism has reported, it’s impacting people with existing mental illness as well as people with no history of psychiatric issues. In many cases, the consequences have been severe: users sucked into these otherworldly spirals have experienced job loss, homelessness, voluntary and involuntary commitment, and jail time. Some have been separated from their children, and others have died. As The Wall Street Journal recently reported, ChatGPT’s reinforcement of a troubled man’s paranoia has even been connected to a murder-suicide in Connecticut.
OpenAI is also facing a wrongful death lawsuit filed in August by the family of Adam Raine, a 16-year-old in California who killed himself after engaging in extensive dialogues with ChatGPT about his suicidality. In response to the news of Raine’s death, OpenAI conceded to The New York Times that long-term interactions with ChatGPT can erode the product’s guardrails.
In a series of blog posts published in the wake of reporting and litigation about ChatGPT and mental health, OpenAI has promised an array of changes, including notifications for users who spend long stretches with the chatbot and new parental controls; it also says it’s now working with global networks of physicians and mental health experts.
At the same time, it’s pushed back against the notion that its product is optimized for engagement, and has declined to outright discourage users from engaging with the chatbot in sensitive conversations about their lives, mental health, and relationships. Instead, OpenAI continues to contextualize these interactions as legitimate use cases for its product.
“When you ask something like ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer,” reads one recent safety-focused blog post from August, in a section about “helping you solve personal challenges.” It continues: “It should help you think it through — asking questions, weighing pros and cons.”
***
Lembke, the Stanford professor, views part of the solution for unhealthy chatbot use as “reconceptualizing digital drugs” — chatbots included — “for what they are, which is potential intoxicants.”
“Recognizing that they’ve been designed to be that way will help people orient a little differently on them,” she added. “It doesn’t mean we won’t use them, but people should use them judiciously.”
She continued that chatbots should also “come with warnings” that let users know the risks, including the possibility of addiction. (OpenAI declined to respond to specific questions about whether ChatGPT can become addictive, and if it should issue more specific warnings about the possibility of unhealthy dependency.)
People we spoke to overwhelmingly agreed that ChatGPT ensnared their partners in unhealthy use patterns, and that OpenAI hasn’t done enough to educate users on possible risks.
“I do not believe that there is sufficient risk mitigation on [ChatGPT],” said one man, now in the throes of divorce proceedings following his wife’s use of the chatbot. “This has literally destroyed my family.”
“I really, really miss her. This is my partner of over a dozen years,” said another source, who described feeling “utter exhaustion” at the state of his life following his wife’s descent into ChatGPT obsession and the subsequent dissolution of their family.
“The loss is immense,” he added. “And it can be really hard to sort through my feelings to understand what is grief over the end of a marriage, shock from the ‘Black Mirror’ episode that is my life, anger at Sam Altman, concern for my kids, traces of hope that maybe there’s a way to help her, and profound loneliness because my best friend is not here anymore.”
Ex-spouses and partners were clear that their relationships, like most others, were imperfect before ChatGPT. Some of these imperfections, people reported, felt like normal marital hardships: financial issues, or a partner struggling with work stress, mental health, or their sense of purpose. Others reported more serious crises, like clashes over how to raise their children, substance abuse, or deeper questions of compatibility.
Nonetheless, many are left to wonder whether their relationships could have been salvaged without AI’s influence, or at least ended more amicably — and without the tech’s shadow hanging heavily overhead.
“This is inevitably going to happen to more people until [OpenAI does] something,” added another man, who’s in a custody battle with his now-ex-girlfriend. “People should work on the relationships between each other and not rely on some freaking robot that is trusting that everything you’re telling it is true… it’s really scary.”
“He’s literally outsourced his empathy to a f*cking machine,” said one woman who’s now divorcing her soon-to-be-ex-husband. “He’s blown up his f*cking life. We have a great life.”
She then corrected herself: “We had a great life.”
***
The couple whose marriage had fallen apart after a previous reconciliation — the one in which the wife used ChatGPT to respond to their 10-year-old son’s plea about divorce — eventually agreed that splitting up was the right choice. As they prepared to file their case, things felt civil, the husband said.
Until one day, that is, when his wife broke a joint agreement to avoid large purchases before their case was filed. Feeling frustrated and betrayed, he pulled out his phone and started messaging with something he knew would respond: ChatGPT.
“I was just in rage,” said the husband. “And I kept engaging with ChatGPT that evening, and it kept telling me that this is a legal problem, and that she crossed a major line, and here’s how to bring it up with my lawyer, and here’s what the lawyer should file.”
The man described his vexation building as he continued to talk to the bot. ChatGPT, fed only his side of the story, characterized his wife’s behavior as manipulative, calculating, and reckless; it declared her actions deeply serious and encouraged the husband to take legal action.
The next day, distressed and still simmering with anger, the husband took the situation to his human lawyer. And as it turned out? It wasn’t a big deal at all.
“When I talked to my actual lawyer the next day, my lawyer was like, ‘that’s fine,'” the man recalled. “And at that point I realized — oh my god, I just went down the same spiral.”
“I can see how it happens,” he said. “It happened firsthand to me.”
More on AI: Support Group Launches for People Suffering “AI Psychosis”