
Should we use AI to resurrect digital ‘ghosts’ of the dead?


When missing a loved one who has passed away, you might look at old photos or listen to old voicemails. Now, with artificial intelligence technology, you can also talk with a virtual bot made to look and sound just like them.

The companies Silicon Intelligence and Super Brain already offer this service. Both rely on generative AI, including large language models similar to the one behind ChatGPT, to sift through snippets of text, photos, audio recordings, video and other data. They use this information to create digital “ghosts” of the dead to visit the living.
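Neither company has published its implementation, but the general shape of such a pipeline is easy to imagine. Below is a minimal, hypothetical Python sketch of the idea: archived snippets of a person's writing are folded into a persona prompt that conditions a large language model. Every name in it (PersonaArchive, build_persona_prompt, generate_reply) is invented for illustration, and the model call is a stub standing in for a real hosted LLM API.

    from dataclasses import dataclass, field

    @dataclass
    class PersonaArchive:
        """Snippets of a person's data used to condition the model."""
        name: str
        text_snippets: list[str] = field(default_factory=list)  # e.g. messages, emails

    def build_persona_prompt(archive: PersonaArchive) -> str:
        """Fold archived snippets into a system prompt for a language model."""
        examples = "\n".join(f"- {s}" for s in archive.text_snippets)
        return (
            f"You are simulating the writing style of {archive.name}.\n"
            f"Imitate the tone and phrasing of these examples:\n{examples}"
        )

    def generate_reply(system_prompt: str, user_message: str) -> str:
        """Stub for the model call; a real service would send system_prompt
        and user_message to a hosted large language model here."""
        return f"[model output conditioned on {user_message!r}]"

    archive = PersonaArchive("Grandma", ["Dinner's at six, don't be late!"])
    print(generate_reply(build_persona_prompt(archive), "What's for dinner?"))

Real services presumably go well beyond this sketch, fine-tuning models and adding voice and video synthesis, but the core move is the same: personal data in, a conversational imitation out.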

Called griefbots, deadbots or re-creation services, these digital replicas of the deceased “create an illusion that a dead person is still alive and can interact with the world as if nothing actually happened, as if death didn’t occur,” says Katarzyna Nowaczyk-Basińska, a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge who studies how technology shapes people’s experiences of death, loss and grief.

She and colleague Tomasz Hollanek, a technology ethicist at the same university, recently explored the risks of technology that allows for a type of “digital immortality” in a paper published May 9 in Philosophy & Technology. Could AI technology be racing ahead of respect for human dignity? To get a handle on this, Science News spoke with Nowaczyk-Basińska. This interview has been edited for length and clarity.

SN: The TV show Black Mirror ran a chilling episode in 2013 about a woman who gets a robot that mimics her dead boyfriend. How realistic is that story?

Nowaczyk-Basińska: We are already here, without the body. But definitely the digital resurrection based on huge amounts of data — that’s our reality.

In my relatively short academic career, I’ve been witnessing a significant shift from the point where digital immortality technologies were perceived as a very marginalized niche, to the point where we [have] the term “digital afterlife industry.” For me as a researcher, it’s fascinating. As a person, it can be very scary and very concerning.

We use speculative scenarios and design fiction as a research method. But we do not refer to some distant future. Instead, we speculate on what is technologically and socially possible here and now.

SN: Your paper presents three imaginary, yet problematic, scenarios that could arise with these deadbots. Which one do you personally find most dystopian?

Nowaczyk-Basińska: [In one of our scenarios], we present a terminally ill woman leaving a griefbot to assist her eight-year-old son with the grief process. We use this example because we think that exposing children to this technology might be very risky.

I think we could go even further and use this … in the near future to conceal the fact of the death of a parent or another significant relative from a child. And at the moment, we know very, very little about how these technologies would influence children.

We argue that if we can’t prove that this technology won’t be harmful, we should take all possible measures to protect the most vulnerable. And in this case, that would mean age-restricted access to these technologies.

Paren’t is an imagined company that offers to create a bot of a dying or deceased parent as a companion for a young child. But there are questions about whether this service could harm children, who might not fully understand what the bot is. (Tomasz Hollanek)

SN: What other safeguards are important?

Nowaczyk-Basińska: We should make sure that the user is aware … that they are interacting with AI. [The technology can] simulate language patterns and personality traits based on processing huge amounts of personal data. But it’s definitely not a conscious entity (SN: 2/28/24). We also advocate for developing sensitive procedures for retiring or deleting deadbots. And we highlight the significance of consent.

SN: Could you describe the scenario you imagined that explores consent for the bot user?

Nowaczyk-Basińska: We present an older person who secretly — that’s a very important word, secretly — commissioned a deadbot of themselves, paying for a 20-year subscription, hoping it will comfort their adult children. And now just imagine that after the funeral, the children receive a bunch of emails, notifications or updates from the re-creation service, along with an invitation to interact with the bot of their deceased father.

[The children] should have a right to decide whether they want to go through the grieving process in this way or not. For some people it might be comforting and helpful, but not for others.

SN: You also argue that it’s important to protect the dignity of human beings, even after death. In your imagined scenario about this issue, an adult woman uses a free service to create a deadbot of her long-deceased grandmother. What happens next?

Nowaczyk-Basińska: She asks the deadbot for the recipe for the homemade spaghetti carbonara that she loved cooking with her grandmother. Instead of receiving the recipe, she receives a recommendation to order that dish from a popular delivery service. Our concern is that griefbots might become a new space for very sneaky product placement, encroaching upon the dignity of the deceased and disrespecting their memory.

SN: Different cultures have very different ways of handling death. How can safeguards take this into account?

Nowaczyk-Basińska: We are very much aware that there is no universal ethical framework that could be developed here. The topics of death, grief and immortality are hugely culturally sensitive. And solutions that might be enthusiastically adopted in one cultural context could be completely dismissed in another. This year, I started a new research project: I’m aiming to explore different perceptions of AI-enabled simulation of death in three different eastern nations, including Poland, India and probably China.

SN: Why is now the time to act on this?

Nowaczyk-Basińska: When we started working on this paper a year ago, we were a bit concerned whether it’s too [much like] science fiction. Now [it’s] 2024. And with the advent of large language models, especially ChatGPT, these technologies are more accessible. That’s why we so desperately need regulations and safeguards.

Re-creation service providers today are making totally arbitrary decisions about what is acceptable or not. And it’s a bit risky to let commercial entities decide how our digital death and digital immortality should be shaped. People who decide to use digital technologies in end-of-life situations are already at a very, very difficult point in their lives. We shouldn’t make it harder for them through irresponsible technology design.
