When Technology Meets Grief: Hard Questions About AI Avatars of the Deceased
I’ve been sitting with this one for a few days, and I’m still not sure I have it figured out.
A Forbes article crossed my feed recently about an AI-powered app that lets users create interactive avatars of deceased relatives. Upload a few minutes of video, and you can have real-time conversations with a digital version of someone you’ve lost. The technology is genuinely impressive. The reactions have been intense, ranging from “dystopian” to “beautiful,” with everything in between.
My first instinct was to write something definitive. To draw clear lines about what’s ethical and what isn’t. But the more I thought about it, the more I realized that grief doesn’t work that way. And neither should our thinking about technology that touches something this personal.
Grief Doesn’t Follow a Script
Here’s what I keep coming back to: people process loss in radically different ways.
Some people find comfort in visiting a grave. Others never go. Some keep a loved one’s voicemail saved on their phone for years, playing it when they need to hear that voice again. Others can’t bear to listen. Some talk to photographs. Some write letters to people who will never read them.
None of these responses are wrong. They’re human.
So when I see technology that offers another way to stay connected to someone who’s gone, I can’t dismiss it outright. I can imagine the person who lost a parent before they got to say goodbye, or the child growing up without memories of a grandparent they never met. I can understand why the promise of one more conversation, even a simulated one, might feel like a lifeline.
The question isn’t whether this impulse is valid. It is. The question is whether this particular technology actually serves that impulse well, or whether it might cause harm we don’t fully understand yet.
We’re Building Faster Than We’re Learning
This is where I start to get uncomfortable.
We don’t really know how prolonged interaction with digital representations of deceased loved ones affects the grieving process. Does it provide genuine comfort, or does it interfere with the psychological work of processing loss? Does it help people hold onto meaningful memories, or does it gradually replace those memories with something artificial? Does it support healing, or does it create a new kind of dependency?
These aren’t rhetorical questions. We genuinely don’t know the answers. And that uncertainty matters when we’re talking about technology aimed at people in one of the most vulnerable states a human can experience.
The research on how people form relationships with digital humans is still emerging. We’re learning that these interactions can be surprisingly meaningful, that people do develop real emotional connections with AI systems, and that those connections can have both positive and negative effects depending on context and design.
But “meaningful” and “real” don’t automatically mean “healthy” or “helpful.” Especially when grief is involved.
The Difference Between Comfort and Exploitation
I want to be careful here, because I don’t think the people building these tools are necessarily acting in bad faith. Many of them probably believe they’re creating something genuinely helpful. Technology founders often do.
But good intentions don’t guarantee good outcomes. And when your business model depends on people in acute emotional pain paying for a service that promises to ease that pain, you have an enormous responsibility to make sure you’re actually helping.
That means asking hard questions before you launch, not after. It means investing in research to understand the psychological effects of your product. It means being honest about limitations and risks. It means building in safeguards for the people most likely to be harmed.
What worries me about the current crop of grief-focused AI products isn’t that they exist. It’s that they seem to be moving very fast, making very big promises, and not spending much time on the questions that should come first.
“Keep your loved one alive forever” is a marketing claim, not a therapeutic one. And the gap between those two things is where real harm can happen.
What We Actually Know About Digital Humans
At CodeBaby, we’ve spent years studying how people interact with avatars and digital characters. We’ve seen firsthand that these interactions can be powerful, that people do open up to digital humans in ways they sometimes won’t with real ones, and that there’s genuine potential to support people through difficult experiences.
But we’ve also learned that this power requires restraint.
The most effective digital humans are the ones that are clear about what they are. They support and guide, but they don’t pretend to be something they’re not. They create safety through consistency and transparency, not through simulation of real relationships.
When we design avatars for healthcare or education, we’re not trying to replace human connection. We’re trying to prepare people for it, or extend access to it, or make it easier for human providers to focus on what only humans can do.
That’s a fundamentally different project than recreating a specific person who can no longer consent to being represented.
The Consent Question Nobody Wants to Talk About
This is the part that genuinely troubles me, even setting aside all the uncertainty about psychological effects.
When you create a digital avatar of someone who has died, you’re making decisions about their likeness, their voice, their mannerisms, and their words that they never agreed to. You’re putting language in their mouth that they never spoke. You’re creating interactions that they never chose to have.
Maybe some people would want this. Maybe they’d be glad to know their family could still “talk” to them after they’re gone. But we can’t know that unless they told us. And most of the people being digitally recreated never had the chance to weigh in.
This isn’t a new problem. We’ve always had to make decisions about how to represent and remember people who are no longer here to speak for themselves. But the interactive, conversational nature of these AI systems raises the stakes considerably. A photograph captures a moment. A video preserves a memory. An AI avatar that speaks and responds creates something new, something the person themselves never said or did.
That feels different to me. And I think we need to be honest about why.
What I’d Want to See
I’m not calling for bans or regulations here. I don’t think that’s the right approach, and I’m not sure it would work anyway.
What I’d want to see is more humility from the companies building these tools. More investment in understanding the psychological effects before scaling up. More transparency about what these products actually are and what they can’t do. More safeguards for users who might be making decisions while deep in grief. More acknowledgment that “we don’t know yet” is a legitimate and important answer.
I’d want to see the tech industry treat grief with the same care we’d want from a therapist or a doctor. Not because they’re legally required to, but because it’s the right thing to do when you’re building products for people at their most vulnerable.
And I’d want to see more research, from people outside these companies, about how digital representations of the deceased actually affect the people who interact with them. Not just in the short term, when the comfort of one more conversation feels overwhelming, but over months and years, as people try to integrate loss into their ongoing lives.
We Don’t Have to Have All the Answers
I started this piece by saying I wasn’t sure I had it figured out. I still don’t.
What I do know is that technology touching something as profound as grief deserves more than just technical sophistication. It deserves wisdom, caution, and a genuine commitment to understanding the humans it’s meant to serve.
Maybe these tools will turn out to be genuinely helpful for some people. Maybe they’ll provide comfort that couldn’t come any other way. I hope that’s true.
But hope isn’t a strategy. And the potential for harm, when we’re talking about vulnerable people and powerful emotional experiences, is too significant to brush aside in the rush to market.
People who are grieving deserve better than to be early adopters in an experiment nobody fully understands.