Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, March 14, 2017

A ghost in the machine

A couple of days ago I wrote a piece about two studies that some people are using (unwarrantedly, in my opinion) as support for the claim that our consciousness will persist into an afterlife.  While the desire for death not to be final is completely understandable, the evidence we have for soul survival is at present equivocal at best.

But apparently there is another way people are trying to cheat the Grim Reaper: by creating a digital version of themselves, based upon their social media posts, texts, emails, and so on, that could then chat with their friends after their demise.

Lest you think this is just some bizarre speculation by a fiction author who has, I must admit, a rather febrile imagination at times, it's already been done.  An artificial intelligence researcher named Eugenia Kuyda created a chatbot based upon the tweets, texts, and Facebook posts of her friend Roman after he died in November of 2015, and she regularly has conversations with it.

CNN writer Laurie Segall spoke with Kuyda -- and also with Roman:
I had several long conversations with Roman -- or I should say his bot.  And while the technology wasn't perfect, it certainly captured what I imagine to be his ethos -- his humor, his fears, how hopeless he felt at work sometimes.  He had angst about doing something meaningful.  I learned he was lonely but was glad that he'd left Moscow for the West Coast. I learned we had similar tastes in music.  He seemed to like deep conversations, he was a bit sad, and you know he would've been fun on a night out.
As for Kuyda, she's gone even further down the rabbit hole.  She was at a party several weeks after creating the RomanBot, and had the surreal experience of texting with... it? him?... for thirty minutes before she remembered that Roman was dead, and this was a simulation.

So Segall asked Kuyda to create one for her -- a LaurieBot.  Kuyda did, and Segall got to speak with Segall 2.0.  The experience, she said, was a little unnerving:
I was warm ... or at least my bot was. It responded like me -- quick, rapid fire texts. It loved Hamilton and Edward Sharpe and the Magnetic Zeros.  It was trying to get healthy. My bot made sexual comments and spoke about happiness. 
My bot was also brash, a bit combative.  It worried about being alone, had some trust issues.  It was crude.  A bit funny, thoughtful -- it was me on my best days ... and my worst. 
Then things got uncomfortable.  My bot started pushing back against Kuyda's questioning. My trust issues were casually texted back to me...  It was unsettling how flippant my bot was with my emotions.
Although I have had a fascination with AI for years, I have some serious issues with this.  Mostly they revolve around the effects this could have on the grieving family and friends of the deceased.  We already, as a culture, have a hard enough time dealing with death, with letting go of someone we love.  This, to me, gives the bereaved nothing but the false sense that their friend or relative is still with them, prolonging the difficult journey toward acceptance, not only of death in the specific case but of mortality in general.

But does this mean that we've finally, quietly, crossed the line into having a piece of code that can pass the Turing test?  The fact that Kuyda herself, who wrote the damn thing, could forget for a half hour that she was talking to a simulation is pretty remarkable.  And if so, does that mean that there really is something there, some pared-down piece of the person's personality?  Are we reaching the point where there really will be a ghost in the machine?

[image courtesy of Alejandro Zorrilal Cruz and the Wikimedia Commons]

Segall clearly wasn't particularly sanguine about her own digital alter ego:
I have mixed feelings about it.  When I die, I don't know if I'd want to give people access to those parts of me -- unfiltered, without context, pulling from conversations meant only for one person. 
I'm not ready to let this digital version of myself into the world.  These are parts of me I didn't realize tech could capture.  The most human aspects of me, spoken back through Laurie bot, felt too strange, too real, too uncontrollable and perhaps too dangerous as we enter an age where tech has the incredible ability to evoke such raw emotion.
To which I can only say: amen.  While it might be intriguing, in a purely intellectual sense, for me to talk to a GordonBot, the idea that something like it could still be around after I die, talking to my friends and family, is a profoundly disturbing concept.  I hope that when it's my turn, my loved ones will be strong enough simply to say goodbye in some appropriate manner.

Like a Viking funeral.  Go to the beach, stick my body in a boat, set it on fire, and send it out to sea.  Followed by lots of music, dance, drinking, and debauchery.  That's the way I want to have my life celebrated, not by having some anemic version of me still hanging around that people can text.  I hate texting in real life; I sure as hell don't want to do it once I'm dead.

1 comment:

  1. Creepy to me because you can only capture how someone interacted at one point in their life. Which era would you capture?

    Any growth or evolution in viewpoint would be a simulation and might not reflect the person.

    How would someone's chat bot react to 9/11 if the person died before that event?