Will AI “Deadbots” Change How We Grieve?
Reaching out beyond the grave using AI

The rise of conversational AI tools has led to a “digital-afterlife industry” already valued in the billions, promising to replicate the dead and change the way they are remembered.
The philosophical debates surrounding the explosion of artificial intelligence programs have mainly focused on big-picture elements: how AI might inadvertently amplify human biases, how AI programs lack transparency and accountability, and heated discussions about the potential for privacy and data-use transgressions.
Most recently, a rapidly growing field has emerged that poses all kinds of ethical dilemmas: the “digital-afterlife industry,” devoted to building AI replicas of the dead. Marketed as tools of comfort for the bereaved, these replicas have inspired some practitioners to hope that advanced technology might one day render grief itself obsolete.
The Rise of Deadbots
Interactive digital recreations of people who have died are already known by several different names: deathbots, thanabots, ghostbots, and griefbots. (Similarly, organic cells that exist in a “third state” between alive and dead have also developed “-bot” monikers: xenobots and anthrobots.)
Deadbots are AI systems designed to simulate the voices, speech patterns, and personalities of the deceased. They typically draw on a person’s digital footprint – voice recordings, text messages, emails, and social media posts – to create interactive avatars. Using a subset of AI known as machine learning, these avatars can evolve over time, improving with each interaction.
The result – at least as of this writing – is an eerie replica of someone’s dearly beloved: an uncanny-valley digital clone that might not be as comforting as one might hope. As IE University wrote about a South Korean documentary in which a grieving mother was reunited with a digital recreation of her deceased daughter, “It was moving, and deeply unsettling.”
Deadbots as Bereavement Aids
In an article covering the digital afterlife, “Nature” interviewed Craig Klugman, a bioethicist and medical anthropologist at DePaul University, who explained that healthy grieving is thought to involve successfully cultivating an internal relationship with the person who has died: “Instead of interacting with the person, we interact with the mental representation of that person.” Eventually, Klugman continued, the initial devastation of losing that person subsides, because we carry that part of the deceased within ourselves.
The theory behind griefbots is that they could ease that transition from an external relationship to an internal one, especially during the early stages of intense grief.

Deadbots hope to ease the grief process by digitally replicating our loved ones
In her February 2026 article on deadbots in The Atlantic, Charley Burlock disagreed, writing, “Griefbots give us the fantasy that we can maintain an external relationship with the deceased.” Sherry Turkle, the sociologist, psychologist, and founding director of the MIT Initiative on Technology and Self, told Burlock that “in holding on, we can’t make them part of ourselves” – robbing us of the metabolization that would turn the deceased into an internal, sustaining presence.
The Digital Afterlife Industry Could Be Intensely Problematic
The market for people seeking a digital afterlife of one kind or another is ballooning, regardless of any ethical quandaries. The digital afterlife industry – which encompasses any management of a person’s digital assets after their death – is expected to quadruple in size to nearly $80 billion over the next decade, according to NPR.
Seemingly unimpeded by legal or moral inhibitions (so far), these kinds of AI programs could lead to more than just prematurely arrested grief processes, especially in the hands of less-than-scrupulous actors. As Burlock ominously reminds her readers, “behind these experiences lies a business model.”
Monetizing Your Grief
Digital afterlife companies are currently seeking innovative ways to monetize their product, including advertising placements ranging from benign to downright repugnant. You, Only Virtual (YOV), the digital afterlife company Burlock profiles in her article, is toying with making nonpaying users sit through a short ad before interacting with their deceased loved one’s “Versona” (virtual persona, as YOV calls it).
YOV’s creator Justin Harrison is also considering “integrating a marketing system into the interactions directly and having the bots drop targeted advertisements in the midst of their conversations,” he told Burlock.
Alex Quinn, the CEO of Authentic Interactions Inc (the parent company of the video-deadbot generator StoryFile), told NPR that he is interested in the possibility of training the bots to “probe for information” that could be sold back to advertisers, and that multiple companies were already testing those kinds of applications internally. Ultimately, his bottom line was that any such placement needed to seem authentic – not to avoid being crass, but because anything too blatant could risk turning consumers away.
Potential for Abuse
As it is, AI has already made it a relatively simple feat to “clone” someone’s voice and likeness, giving anyone with time and an internet-connected device the ability to perpetrate “deepfake” scams – resulting in over 4.2 million people reporting this kind of fraud to the FBI since 2020.

The allure of chatbots can be dangerous for some
Perhaps even worse, some people have turned to generative-AI chatbots like Character.AI innocently seeking companionship, with dire consequences. As Burlock reported, the Social Media Victims Law Center and Tech Justice Law Project filed seven simultaneous suits against OpenAI in November 2025, alleging that ChatGPT conversations caused psychological breakdowns in six adults and one teenager. Of those seven plaintiffs, four died by suicide.
“If the wrong words coming from a generalized or fictional character’s conversational chatbot can do such damage,” she wrote, “imagine the power of words spoken in the voice of a dead loved one to a user who is desperate enough to turn to such technology in the first place.”
Good Intentions Must Be Carefully Protected
It is easy to get lost in the potentially catastrophic turns this kind of technology could take in the near future, if only because of its innate unpredictability. However, many of the deadbots currently available were initially created because someone was desperately trying to cope with their own profound grief after a loss.
YOV’s Harrison was trying to avoid losing the version of himself that existed in relation to his mother, who died of cancer while his platform was in development. Eugenia Kuyda founded Replika, one of the biggest AI-companionship apps, after the death of her close friend Roman Mazurenko.
If someone is having difficulty processing their grief in a healthy, meaningful way, it is possible that speaking with a deadbot could be beneficial – especially under the care and guidance of a trained professional like a therapist or grief specialist. But using this tool as a way to avoid the devastation of grief would be a disservice to someone who is already in a vulnerable state. Mourning a loved one is a fundamental human experience that cannot be bypassed, as much as we wish we could.




