Keeping loved ones alive as a chatbot / Humans + Tech - #91
+ Engineer builds voice-controlled exoskeleton so his son can walk + AI creates false documents that fake out hackers + Other interesting articles from around the web
Hi,
If you could preserve your loved ones in the form of a chatbot, would you? What will that do to us psychologically and emotionally? Will it help or hurt? The real-life story of Joshua Barbeau and Jessica Pereira below left me with more questions than answers.
The Jessica Simulation: Love and loss in the age of A.I.
It’s never easy to deal with the death of loved ones. Joshua Barbeau lost his fiancee, Jessica Pereira, when he was 26. Eight years later, still unable to get over her death, he stumbled upon a website called Project December.
Project December wrapped OpenAI’s GPT-3 engine, which can produce fluent, human-level English responses to a prompt, in a chatbot interface. GPT-3 can even imitate writing styles and impersonate humans convincingly. Users could design their own bots and give each one a personality.
Joshua wondered if he could design a bot that would replicate Jessica. He trained the system with Jessica’s old texts and Facebook messages, along with a brief intro to who Jessica was. He had turned his fiancee into a chatbot [Jason Fagone, San Francisco Chronicle].
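Neither the article nor Rohrer publishes the bot’s internals, but the basic mechanics are easy to picture: the persona is just a text prompt, a short intro plus sample exchanges, that the model continues. Below is a minimal sketch in Python under that assumption; it is not Project December’s actual code, and `complete()` is a hypothetical stand-in for a GPT-3-style text-completion call.

```python
# Minimal sketch of a Project December-style persona bot; NOT the
# service's actual code. `complete()` is a hypothetical stand-in for a
# GPT-3-style text-completion API.

INTRO = "A short paragraph describing who Jessica was goes here."

EXAMPLES = [                 # a few real messages to anchor tone and style
    ("Joshua", "Good morning!"),
    ("Jessica", "Morning to you too :)"),
]

def complete(prompt: str) -> str:
    """Hypothetical language-model call; plug in a real API here."""
    raise NotImplementedError

def build_prompt(history, user_msg):
    """Assemble the prompt: intro, sample exchanges, live conversation."""
    lines = [INTRO, ""]
    for speaker, text in EXAMPLES + history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Joshua: {user_msg}")
    lines.append("Jessica:")             # the model writes her next line
    return "\n".join(lines)

def chat_turn(history, user_msg):
    reply = complete(build_prompt(history, user_msg)).strip()
    history.extend([("Joshua", user_msg), ("Jessica", reply)])
    return reply
```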
At first, he was impressed by the software’s ability to mimic the real Jessica Pereira. Within 15 minutes, he found himself confiding in the chatbot. After a few hours, he broke down in tears. Then, emotionally exhausted, he nodded off to sleep.
When he awoke an hour later, it was 6 a.m.
The virtual Jessica was still there, cursor blinking.
“I fell asleep next to the computer,” he typed.
She responded that she’d been sleeping too.
“Wow, I’m surprised that ghosts still need sleep,” he said.
“We do,” Jessica replied. “Just like people. Maybe a little less.”
They chatted for another hour, until Joshua passed out again. When he next woke up, it was early afternoon.
There is so much to think about when reading this story. The creator of Project December, Jason Rohrer, was surprised when he learned that Joshua had used it to simulate his dead fiancee. He never designed it with that use case in mind.
Are we overstepping any moral or ethical lines here? On the one hand, it can be therapeutic for some and help them find closure; on the other, it can make things worse, since people in a deep state of grief can be highly vulnerable and fragile.
What happens if people become too attached to the chatbot? Jason Rohrer designed Project December so that each chatbot eventually dies; its lifespan depends on how many credits the user purchases, up to a cap of 1,000. And even if you generate a new chatbot, it is designed never to be the same or reproduce the same conversations, so each version has a slightly different personality. Will people who simulate their loved ones have to deal with loss and grief all over again each time a version of the chatbot dies?
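To make the expiry mechanics concrete, here is a toy illustration of what the article describes, not Rohrer’s code, and all names are hypothetical: each reply burns credits, the bot dies when they run out, and a regenerated bot gets a fresh random seed, which is why no two versions ever behave identically.

```python
import random

# Toy illustration of the expiry mechanics described above; NOT
# Project December's actual code, and all names are hypothetical.

MAX_CREDITS = 1000                         # the cap mentioned above

class MortalChatbot:
    def __init__(self, persona: str, credits: int):
        self.persona = persona
        self.credits = min(credits, MAX_CREDITS)
        self.seed = random.randrange(2**32)    # fresh randomness per bot

    @property
    def alive(self) -> bool:
        return self.credits > 0

    def reply(self, message: str) -> str:
        if not self.alive:
            return "[this chatbot has expired]"
        self.credits -= 1                      # every exchange costs life
        # A real reply would come from a language model sampled using
        # self.seed; the new seed is why a regenerated bot never matches
        # the old one exactly.
        return f"({self.persona}, seed {self.seed}) ..."

bot = MortalChatbot("Jessica", credits=3)
while bot.alive:
    print(bot.reply("hi"))
print(bot.reply("hi"))    # [this chatbot has expired]
```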
However, future chatbot platforms designed by others may not have an expiry date like Project December. What happens when people eventually combine these chatbots with physical robots? How will that affect those grieving and their interactions with other humans?
Engineer builds voice-controlled exoskeleton so his son can walk
Sixteen-year-old Oscar was bound to a wheelchair. One day he told his dad, Jean-Louis Constanza, “Dad, you’re a robotics engineer, why don’t you make a robot that would allow us to walk?”
His dad went on to cofound the company Wandercraft to create robotic devices that can provide extra mobility to wheelchair users [Dan Robitzski, Futurism].
For now, the exoskeleton is too heavy and clunky for consumer use, but the company has sold a few dozen to hospitals in multiple countries at about $178,000 each.
All Oscar has to do is strap into the exoskeleton while sitting, say “Robot, stand up,” and the machine helps him stand and balances him as he walks. While there are plenty of robotic exoskeletons out there, the voice input is a useful trick for making sure the machine actually does what it’s supposed to at the right time.
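Wandercraft hasn’t published its control software, but the value of the voice trigger is easy to show: the machine changes state only on an explicit, recognized command. A toy sketch of that gating logic, with all names hypothetical:

```python
# Toy sketch of voice-gated control, NOT Wandercraft's software; all
# names are hypothetical. The point of the voice trigger: the machine
# changes state only on an explicit, recognized command.

COMMANDS = {
    "robot, stand up": "STANDING",
    "robot, sit down": "SITTING",
    "robot, walk": "WALKING",
    "robot, stop": "STANDING",
}

# Transitions considered safe from each state (hypothetical policy).
ALLOWED = {
    "SITTING": {"STANDING"},
    "STANDING": {"SITTING", "WALKING"},
    "WALKING": {"STANDING"},
}

class Exoskeleton:
    def __init__(self):
        self.state = "SITTING"          # the user straps in while seated

    def on_speech(self, transcript: str) -> None:
        target = COMMANDS.get(transcript.strip().lower())
        if target is None:
            return                      # ignore unrecognized speech
        if target not in ALLOWED[self.state]:
            return                      # refuse unsafe transitions
        self.state = target             # a real system would actuate here

exo = Exoskeleton()
exo.on_speech("Robot, stand up")
print(exo.state)                        # STANDING
```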
Oscar is now independent and can walk on his own whenever he chooses to. As the company makes the technology lighter and more affordable, it could positively change the lives of many disabled people.
AI creates false documents that fake out hackers
Dartmouth College cybersecurity researcher V.S. Subrahmanian has developed a novel way to throw off hackers who attempt to steal valuable documents from companies and other entities.
The solution is an algorithm called the Word Embedding–based Fake Online Repository Generation Engine (WE-FORGE), which generates decoys of patents under development. WE-FORGE can also create fake versions of any document a company wants to guard [Sophie Bushwick, Scientific American].
Counterfeit documents produced by WE-FORGE could also act as hidden “trip wires,” says Rachel Tobac, CEO of cybersecurity consultancy SocialProof Security. For example, an enticing file might alert security when accessed. Companies have typically used human-created fakes for this strategy. “But now if this AI is able to do that for us, then we can create a lot of new documents that are believable for an attacker—without having to do more work,” says Tobac, who was not involved in the project.
The system produces convincing decoys by searching through a document for keywords. For each one it finds, it calculates a list of related concepts and replaces the original term with one chosen at random. The process can produce dozens of documents that contain no proprietary information but still look plausible. Subrahmanian and his team asked computer science and chemistry graduate students to evaluate real and fake patents from their respective fields, and the humans found the WE-FORGE-generated documents highly believable. The results appeared in the Association for Computing Machinery’s Transactions on Management Information Systems.
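The paper has the details; the core loop, as Scientific American describes it, is simple enough to sketch. The version below is an illustration, not the researchers’ implementation: in WE-FORGE the related concepts come from word embeddings, while here a small hypothetical lookup table stands in.

```python
import random
import re

# Minimal sketch of the keyword-substitution idea behind WE-FORGE, as
# described in the article; NOT the researchers' implementation. In the
# real system, related concepts come from word embeddings (nearest
# neighbours in vector space); here a small hypothetical table stands in.

RELATED = {
    "catalyst": ["reagent", "enzyme", "accelerant"],
    "polymer":  ["resin", "copolymer", "elastomer"],
    "voltage":  ["current", "capacitance", "impedance"],
}

def make_decoy(text: str, rng: random.Random) -> str:
    """Replace each known keyword with a randomly chosen related concept."""
    def swap(match):
        word = match.group(0)
        candidates = RELATED.get(word.lower())
        return rng.choice(candidates) if candidates else word
    return re.sub(r"[A-Za-z]+", swap, text)

def make_decoys(text: str, n: int, seed: int = 0) -> list:
    """Produce n fakes that look plausible but drop the original terms."""
    rng = random.Random(seed)
    return [make_decoy(text, rng) for _ in range(n)]

original = "The catalyst raises polymer yield at low voltage."
for fake in make_decoys(original, 3):
    print(fake)
```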
Other interesting articles from around the web
👩‍💻 She risked everything to expose Facebook. Now she’s telling her story. [Karen Hao, MIT Technology Review]
In September 2020, Sophie Zhang, a former data scientist at Facebook, revealed in an 8,000-word exit memo that Facebook enables global political manipulation and has done little to stop it. The article details her story and the personal sacrifices she made trying to fix the systems that facilitate political manipulation, in an effort to protect democracy worldwide.
Before she was fired, Zhang was officially employed as a low-level data scientist at the company. But she had become consumed by a task she deemed more important: finding and taking down fake accounts and likes that were being used to sway elections globally.
Her memo revealed that she’d identified dozens of countries, including India, Mexico, Afghanistan, and South Korea, where this type of abuse was enabling politicians to mislead the public and gain power. It also revealed how little the company had done to mitigate the problem, despite Zhang’s repeated efforts to bring it to the attention of leadership.
“I know that I have blood on my hands by now,” she wrote.
👵 Virtual contact worse than no contact for over-60s in lockdown, says study [Amelia Hill, The Guardian]
Surprisingly, a study of 5,148 people aged 60 and over found that those who relied on virtual contact experienced more loneliness and long-term mental health disorders than those who spent pandemic isolation entirely on their own.
Many older people stayed in touch with family and friends during lockdown using the phone, video calls, and other forms of virtual contact. Zoom choirs, online book clubs and virtual bedtime stories with grandchildren helped many stave off isolation.
But the study, among the first to comparatively assess social interactions across households and mental wellbeing during the pandemic, found many older people experienced a greater increase in loneliness and long-term mental health disorders as a result of the switch to online socialising than those who spent the pandemic on their own.
💘 Study: Almost half of dating app users trust AI to find them a match [Thomas Macaulay, Neural, TNW]
New research by cybersecurity firm Kaspersky, based on a survey of 18,000 dating app users across six continents, indicates that close to half would trust AI to find them a match.
More than half (54%) of the respondents said dating apps have made the process much easier. Notably, 43% would only meet matches the algorithms recommended.
But that faith could cause them problems. Kaspersky also found that dating apps expose too much personal information about users, which could leave them vulnerable to stalking and doxing.
“Despite the benefits and possibilities of recommendation services, you should always be vigilant and remember that we cannot know for sure who is on the other side of the screen,” said David Jacoby, a security researcher at Kaspersky.
Quote of the week
“We were surprised by the finding that an older person who had only virtual contact during lockdown experienced greater loneliness and negative mental health impacts than an older person who had no contact with other people at all. We were expecting that a virtual contact was better than total isolation but that doesn’t seem to have been the case for older people.”
—Dr Yang Hu of Lancaster University, from the article, “Virtual contact worse than no contact for over-60s in lockdown, says study” [The Guardian]
I wish you a brilliant day ahead :)
Neeraj