16-year-old develops an app to help his sister talk / Humans + Tech - #87

+ AI learns to predict human behaviour from videos + How underground fibre optics spy on humans moving above + Other interesting stories


Two of the articles I read this week made me happy: one about Archer, a 16-year-old who created an app to help his non-verbal sister communicate more easily, and another about how much effort and resources Twitter is putting into researching ethical AI.

16-year-old develops an app to help his sister talk

Most apps for assisting non-verbal people cost hundreds of dollars. Archer created an app and made it free for everyone [Harry Bligh and Dianne King, BBC]. I urge you to watch the full video (5 mins). It’s an inspiring story.

Della has a rare genetic condition called Bainbridge-Ropers Syndrome, which affects her ability to speak.

Her brother, Archer, wanted to help his sister communicate - and didn't think it was fair to rely on expensive communication apps. 

Aged just 16, Archer decided to develop one himself, for free and accessible to all.

The video below is from YouTube. You can also watch it on BBC by clicking the link above.

AI learns to predict human behaviour from videos

Humans can easily predict the actions of another person from their body language and respond accordingly. Computers have found this difficult. Researchers at Columbia University School of Engineering and Applied Science are tackling the problem by feeding their algorithm thousands of hours of footage from movies, sports, and TV shows. Their approach differs from previous attempts by incorporating uncertainty into the algorithm [Holly Evarts, ScienceDaily].

Past attempts in predictive machine learning, including those by the team, have focused on predicting just one action at a time. The algorithms decide whether to classify the action as a hug, high five, handshake, or even a non-action like "ignore." But when the uncertainty is high, most machine learning models are unable to find commonalities between the possible options.

Columbia Engineering PhD students Didac Suris and Ruoshi Liu decided to look at the longer-range prediction problem from a different angle. "Not everything in the future is predictable," said Suris, co-lead author of the paper. "When a person cannot foresee exactly what will happen, they play it safe and predict at a higher level of abstraction. Our algorithm is the first to learn this capability to reason abstractly about future events."

This type of research is critical to future human-robot interactions.
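As a rough illustration of the "predict at a higher level of abstraction" idea, a classifier can back off to a coarser label whenever its fine-grained confidence is low. This is not the paper's actual model (which learns the abstraction levels rather than hard-coding them); the action names, hierarchy, and threshold below are invented for illustration:

```python
# Toy sketch: back off to a coarser label when the fine-grained
# prediction is too uncertain. The hierarchy is hypothetical.
PARENT = {
    "hug": "greeting",
    "high five": "greeting",
    "handshake": "greeting",
    "ignore": "non-interaction",
}

def predict(probs, threshold=0.5):
    """probs: dict mapping fine-grained actions to probabilities."""
    best = max(probs, key=probs.get)
    if probs[best] >= threshold:
        return best                     # confident: answer specifically
    # Uncertain: pool probability mass under each abstract parent.
    coarse = {}
    for action, p in probs.items():
        coarse[PARENT[action]] = coarse.get(PARENT[action], 0.0) + p
    return max(coarse, key=coarse.get)  # safer, higher-level answer

print(predict({"hug": 0.6, "high five": 0.2, "handshake": 0.1, "ignore": 0.1}))
# -> hug (confident, so it commits to the specific action)
print(predict({"hug": 0.3, "high five": 0.3, "handshake": 0.3, "ignore": 0.1}))
# -> greeting (uncertain, so it "plays it safe" at a higher level)
```

The second call mirrors Suris's point: when no single action stands out, the model still commits to the thing it can predict, that some kind of greeting is coming.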

How underground fibre optics spy on humans moving above

A team of researchers from Penn State University rigged an underground telecom fibre optic cable that runs two and a half miles across campus into a scientific surveillance device. Vibrations on the ground above deform the cable slightly, which allows the scientists to detect the activity on the ground. Each activity, such as people walking or a car driving by, has a unique seismic signature. The technique is called Distributed Acoustic Sensing, or DAS [Matt Simon, WIRED].

DAS could be a powerful tool to track people’s movement: Instead of sifting through cell phone location data, researchers could instead tap into fiber optic cables to track the passage of pedestrians and cars. But the technology can’t exactly identify a car or person. “You can say if it's a car, or if it's a truck, or it's a bike. But you cannot say, ‘Oh, this is a Nissan Sentra, 2019,’” says Stanford University geophysicist Ariel Lellouch, who uses DAS and wasn’t involved in this study, but did peer-review it. “Anonymity of DAS is one of the biggest benefits, actually.”

This technique has other uses as well. Civil engineers are using it to study soil formation, while biologists are using it to study whales.
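To get a feel for how a "unique seismic signature" can separate activities, here is a toy sketch that tells two vibration traces apart by their dominant frequency. Everything in it is invented for illustration (the sampling rate, the frequency cutoff, the clean synthetic signals); a real DAS system works on backscattered laser light along kilometres of cable, not a single tidy waveform:

```python
import numpy as np

FS = 100  # assumed sampling rate, samples per second

def dominant_freq(trace):
    """Return the strongest frequency component of a vibration trace."""
    spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / FS)
    return freqs[spectrum.argmax()]

def classify(trace):
    # Invented cutoff: slow periodic footfalls vs. faster vehicle rumble.
    return "pedestrian" if dominant_freq(trace) < 5 else "vehicle"

t = np.arange(0, 4, 1.0 / FS)
footsteps = np.sin(2 * np.pi * 2 * t)   # synthetic 2 Hz "walking" signal
car = np.sin(2 * np.pi * 15 * t)        # synthetic 15 Hz "driving" signal
print(classify(footsteps), classify(car))  # pedestrian vehicle
```

The same logic explains Lellouch's anonymity point: the spectrum can separate broad categories of movement, but it carries nothing that identifies a specific person or vehicle.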

Other interesting stories from around the web

👁 This startup is paying people to scan their eyeballs [Edward Ongweso Jr, VICE]

On Tuesday, Bloomberg reported that Sam Altman, CEO of OpenAI and former president of startup accelerator Y Combinator, is co-founding a cryptocurrency called Worldcoin that will try to convince people to scan their retinas with a large silver orb in order to receive tokens.

“I’ve been very interested in things like universal basic income and what’s going to happen to global wealth redistribution and how we can do that better,” Altman told Bloomberg. “Is there a way we can use technology to do that at global scale?”

Worldcoin’s intentions may be good, and I’m a big fan of universal basic income myself. But I wouldn’t hand my biometric information to a private company in exchange for it.

How Twitter hired tech's biggest critics to build ethical AI [Anna Kramer, Protocol]

When the leader of the team researching ethics and accountability for Twitter's machine learning (ML) left the company, Ari Font stepped in and convinced Twitter’s leadership to focus on ethics and responsible ML.

Font was the manager of Twitter's machine learning platforms teams — part of Twitter Cortex, the company's central ML organization — at the time, but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.

So she volunteered to help rebuild Twitter's META team (META stands for Machine Learning, Ethics, Transparency and Accountability), embarking on what she called a roadshow to persuade Jack Dorsey and his team that ML ethics didn't only belong in research. Over the course of a few months, after a litany of conversations with Dorsey and other senior leaders, Font hadn't secured just a more powerful, operationalized place for the once-small team. Alongside the budget for increased headcount and a new director, she eventually persuaded Dorsey and Twitter's board of directors to make Responsible ML one of Twitter's main 2021 priorities, which came with the power to scale META's work inside of Twitter's products.

Font has built a formidable team, including Rumman Chowdhury, Kristian Lum, and Sarah Roberts. It’s also great to see that Twitter’s leadership is actively listening to these experts, taking a keen interest in using AI and ML in ethical ways, and providing them with the resources they need. It’s a refreshing and welcome approach compared to what we’ve seen at Google and Facebook recently.

🎸 Robot rock: can big tech pick pop’s next megastar? [Alex Rayner, The Guardian]

Hazel Savage and Aron Pettersson built Musiio, software that uses AI to scan thousands of songs and predict the next hit record.

The pair had just founded their firm, Musiio, in Singapore’s Boat Quay district. Pettersson, who is Swedish, was a specialist in artificial intelligence (AI) with a background in neuroscience; Savage, a British music industry professional with tech pedigree, had worked for Shazam and the Pandora streaming service. They let their software loose on the Free Music Archive, one of the world’s largest collections of copyright-free songs. These are written by little-known artists and commonly used for soundtracks and podcasts. They asked their computer to pick 20 songs from the archive, based on their similarity to a tune Savage liked: I Wanted Everything by the US indie star Kurt Vile. Back in the office, they listened. “Every song was great,” says Savage, “and every song was of a similar genre.”

With the ease of self-publishing music, aspiring musicians upload thousands of songs to various services every day. Humans can’t listen to all of these to pick out promising artists. Software like Musiio is helping to analyse thousands of hours of music to identify new talent.
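The "pick 20 songs similar to a seed tune" step is, at its core, nearest-neighbour retrieval: embed every track as a feature vector and rank by similarity to the seed. Musiio's actual features and model are proprietary; the eight-dimensional "embeddings" below are random stand-ins, purely to show the retrieval step:

```python
import numpy as np

def top_k(seed, library, k=3):
    """Return the k library track names most similar to the seed embedding."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(library.items(), key=lambda kv: cosine(seed, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

rng = np.random.default_rng(0)
seed = rng.normal(size=8)  # stand-in embedding of the seed tune
library = {f"track_{i}": rng.normal(size=8) for i in range(100)}
print(top_k(seed, library))
```

Nearby vectors tend to share audio characteristics, which is why Savage found every returned song "of a similar genre": the ranking surfaces tracks that sound alike, regardless of how obscure the artist is.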

Quote of the week

"Ethics is literally about the world of unintended consequences. We're talking about engineers who are well-intentioned in trying to build something who didn't have the background or education. We're talking to people who wanted to do the right thing and didn't know how to do the right thing."

—Rumman Chowdhury, from the article, “How Twitter hired tech's biggest critics to build ethical AI” [Protocol]

I wish you a brilliant day ahead :)