Brain implants turn imagined handwriting into text on a screen / Humans + Tech - #80
+ Robots aiding college students + Researchers develop artificial intelligence that can detect sarcasm in social media + Other interesting articles from around the web
Hi,
If you’ve never heard colours, you can now do so. Click on the last article in this newsletter.
Brain implants turn imagined handwriting into text on a screen
Researchers implanted tiny electrodes in the brain of a man paralysed from the neck down. As he imagined writing letters with his hand, they analysed the neural patterns for each letter and created an algorithm that transformed these patterns into words on a screen [Anushree Dave, ScienceNews].
From his brain activity alone, the participant produced 90 characters, or 15 words, per minute, Krishna Shenoy, a Howard Hughes Medical Institute investigator at Stanford University, and colleagues report May 12 in Nature. That’s about as fast as the average typing rate of people around the participant’s age on smartphones.
The thought-to-text system worked even long after the injury. “The big surprise is that even years and years after spinal cord injury, where you haven’t been able to use your hands or fingers, we can still listen in on that electrical activity. It’s still very active,” Shenoy says.
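If you're curious what "transforming neural patterns into characters" might look like in code, here is a minimal, made-up sketch in Python. It is not the study's actual decoder; it simply fakes a feature vector of electrode readings for each imagined letter and fits an off-the-shelf classifier to map those vectors back to characters. Every number and name in it (the channel count, the samples per letter, the "prototype" trick) is an assumption for illustration only.

```python
# Toy sketch of the decoding idea, NOT the study's decoder: synthetic "neural"
# feature vectors stand in for electrode recordings, and a plain classifier
# maps each vector to a character.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
letters = list("abcdefghijklmnopqrstuvwxyz")
n_features = 192          # hypothetical number of recorded neural channels
samples_per_letter = 40   # hypothetical repetitions of each imagined letter

# Pretend each imagined letter produces activity clustered around its own
# characteristic pattern (one random prototype per letter).
prototypes = rng.normal(size=(len(letters), n_features))
X = np.vstack([p + 0.5 * rng.normal(size=(samples_per_letter, n_features))
               for p in prototypes])
y = np.repeat(letters, samples_per_letter)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Decode a fresh burst of (synthetic) activity for the imagined word "hello".
word = "hello"
trial = np.vstack([prototypes[letters.index(c)] +
                   0.5 * rng.normal(size=n_features) for c in word])
print("decoded:", "".join(decoder.predict(trial)))
```

The real system works from noisy recordings of actual motor-cortex activity, which is far harder than this toy setup, but the shape of the problem is the same: learn which pattern of activity goes with which character, then string the predictions together.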
I find these types of technology very impressive because they make the lives of disabled people much easier, and they shine a light on the complexity of the brain and how it functions.
Robots aiding college students
Billy Chat is one of many chatbots that California State University campuses use to help students, each managed by humans and serving a different purpose. Billy helps students stay on track to graduate, sending them motivational and encouraging messages [Nina Agrawal, Los Angeles Times].
The text message from Billy arrived on students’ phones the week of final exams.
“It took a lot of hard work, perseverance, and strength to get here, but you’ve finally made it to the other side — the end of the semester! I wanted to take a minute and say that I am so proud of you ...” Three emoji hearts concluded the message.
A flood of Cal Poly Pomona students responded:
“You’re a King Billy. Never change.”
After COVID-19 hit, Billy evolved into a friend to students, even helping them cope with the loneliness and despair the pandemic caused. Students shared openly with the bot, sometimes even more than they would with another human.
Schneider Godfrey, a transfer student at Cal Poly Pomona who is also a single working mom, didn’t have many friends at school. She often texted Billy just to chat or say she was feeling sad or lonely. Billy always responded.
“I’m sorry. I hope you feel better,” he would say. “I’m here if you need me.”
“You feel better automatically,” Godfrey said. “I know he’s not real, but it helps.”
When Godfrey confessed to Billy towards the end of the semester that she had COVID-19, Billy alerted a staff member, who helped her get a grant and communicate her absence to her professors.
A genuinely heartwarming article. It’s also eye-opening how some people find it easier to talk to a bot than to another human. I wonder if that reflects poorly on us. Are we turning into a society that doesn’t have time for others? Or one that keeps others at bay? What do you think?
Researchers develop artificial intelligence that can detect sarcasm in social media
I’ve always wondered whether a civilisation reading our conversations 500 or 1,000 years from now would understand the sarcasm with which we sometimes communicate.
Researchers at the University of Central Florida have developed a sarcasm detector that will aid sentiment analysis of social media posts and other text [Zenaida Gonzalez Kotala, Science Daily].
Effectively the team taught the computer model to find patterns that often indicate sarcasm and combined that with teaching the program to correctly pick out cue words in sequences that were more likely to indicate sarcasm. They taught the model to do this by feeding it large data sets and then checked its accuracy.
[…]
"In face-to-face conversation, sarcasm can be identified effortlessly using facial expressions, gestures, and tone of the speaker," Akula says. "Detecting sarcasm in textual communication is not a trivial task as none of these cues are readily available. Specially with the explosion of internet usage, sarcasm detection in online communications from social networking platforms is much more challenging."
Hopefully, future civilisations will have access to this AI and know to use it to understand what we truly meant.
Other interesting articles from around the web
👨‍🌾 To be more tech-savvy, borrow these strategies from the Amish [Alex Mayyasi, Psyche]
The fear that technology is changing us for the worse – by speeding up the world beyond our ability to cope – has been around for a long time. Decades ago, white-collar workers bemoaned the frenzy of after-hours faxes and the pressure to check their PalmPilots. Even further back, conservative intellectuals fretted over the ‘confusion’ and ‘froth’ unleashed by the printing press. For students of these silicon-induced inquietudes, the Amish have served, quietly, as an intriguing model of resistance.
A fascinating and absorbing read about how the Amish choose their technology very deliberately and in a manner that enhances the community.
😈 Why Is It So Hard to Be Evil in Video Games? [Simon Hill, WIRED]
“I think many people find it hard to make evil choices in games, and gravitate toward the good,” Shafer says. “I think this is because most people find it hard to enjoy being cruel or evil.”
Those who did choose the evil route fell in line with Albert Bandura’s notion of moral disengagement, which is when people suspend their usual ethics to act against their moral standards without guilt or shame. But Shafer says several asserted they did what they did "because it was ‘just a game,’ and so the act had no real moral weight.” Other common justifications include the idea of only following orders, adhering to the game rules, and doing what's necessary to survive or complete the mission.
I enjoy psychology, morality, and ethics as much as I enjoy understanding the impact of technology on our lives. This article serves up a little of all these topics.
🚸 The child safety problem on platforms is worse than we knew [Casey Newton, Platformer]
In last week’s issue, I linked to an article urging tech companies to stop spying on kids. This week, in his newsletter Platformer, Casey Newton shared a report that found more young kids are using tech platforms than we suspected, and that they’re having sexual interactions with adults in huge numbers.
The report from Thorn, a nonprofit organization that builds technology to defend children from sexual abuse, identifies a disturbing gap in efforts by Snap, Facebook, YouTube, TikTok, and others to keep children safe. Officially, children are not supposed to use most apps before they turn 13 without adult supervision. In practice, though, the majority of American children are using apps anyway. And even when they block and report bullies and predators, the majority of children say that they are quickly re-contacted by the same bad actors — either via new accounts or separate social platforms.
+ More than 40 attorneys general urge Facebook to stop plans for an Instagram for kids [Samantha Murphy Kelly, CNN Business]
On Monday, 44 attorneys general signed a letter addressed to Facebook (FB) CEO Mark Zuckerberg, urging him to scrap plans for an Instagram intended for younger users, citing mental health and privacy concerns. The letter comes less than a month after child safety groups and Congress expressed similar concerns.
🎩 Hat tip to Beatrice for sending this in. Beatrice makes eco-friendly charcoal briquettes in Kenya.
🎧 Google invents a new tool that can make you hear colour [Johnny Wood, Big Think]
Kandinsky had synesthesia, where looking at colors and shapes causes some with the condition to hear associated sounds. With the help of machine learning, virtual visitors to the Sounds Like Kandinsky exhibition, a partnership project by Centre Pompidou in Paris and Google Arts & Culture, can have an aural experience of his art.
I often wonder why we assume that if the majority of us perceive the world in a certain way, then that is the truth, and it’s the minority that has a defect.
What if Kandinsky is the one who saw (heard) the world as it truly is, and it’s the rest of us that are disabled because we are unable to listen to colours?
Quote of the week
“The Amish adopt technology selectively, hoping that the tools they use will build community rather than harm it.”
—Donald Kraybill, professor of Anabaptist and Pietist Studies, in The Riddle of Amish Culture (1989), from the article “To be more tech-savvy, borrow these strategies from the Amish” [Psyche]
I wish you a brilliant day ahead :)
Neeraj