'Every smile you fake' — an AI emotion-recognition system can assess how 'happy' China's workers are in the office / Humans + Tech - #85
+ Tech companies are training AI to read your lips + Getting the feels: Should AI have empathy? + Other interesting articles from around the web
Hi,
I hope you had a great week. Let’s dive into this week’s articles.
'Every smile you fake' — an AI emotion-recognition system can assess how 'happy' China's workers are in the office
Taigusys is one of many Chinese companies that have developed software they claim can detect emotions from people's facial expressions and track how they are feeling [Cheryl Teh, Insider]. Taigusys has many multinational companies on its client list, but none confirmed whether they use this particular technology.
Human rights activists and tech experts Insider spoke to sounded the alarm about deploying such programs, noting that emotion-recognition systems are fundamentally rooted in unethical, unscientific ideas.
Vidushi Marda, senior program officer at British human rights organization Article 19, and Shazeda Ahmed, Ph.D. candidate at the UC Berkeley School of Information, told Insider that their joint research paper on China's emotion-recognition market, published this January, uncovered a staggering 27 companies in China working on emotion-recognition programs, including Taigusys.
"If such technology is rolled out, it infringes on the ethical and legal rights of employees in the workplace. Even in the premises of a privately-owned workplace, there's still an expectation of privacy and dignity, and the right of employees to act freely and think freely," Marda added.
Emotion recognition is deeply flawed, yet this technology is being sold and deployed by various companies in many regions worldwide. It is used in education, health care, employment, and criminal justice. As a result, there is an urgent call to regulate emotion-recognition technologies [Kate Crawford, Nature].
Researchers at the University of Cambridge created a game at emojify.info to demonstrate how flawed these systems are [James Vincent, The Verge]. Making life-changing decisions in areas such as education, health care, employment, and justice based on such flawed algorithms should be outlawed or heavily regulated.
Tech companies are training AI to read your lips
Lip reading is following facial recognition and emotion recognition as one of the many tools that companies and governments are employing to increase their surveillance of society. Of course, like all technologies, this has both good and bad uses. But in the hands of data-hungry companies and governments worldwide that are increasingly deploying technologies to monitor their citizens, this feels like it will be more harmful than helpful [Todd Feathers, VICE].
Surveillance company Motorola Solutions has a patent for a lip-reading system designed to aid police. Skylark Labs, a startup whose founder has ties to the U.S. Defense Advanced Research Projects Agency (DARPA), told Motherboard that its lip-reading system is currently deployed in private homes and a state-controlled power company in India to detect foul and abusive language.
“This is one of those areas, from my perspective, which is a good example of ‘just because we can do it, doesn’t mean we should,’” Fraser Sampson, the UK’s biometrics and surveillance camera commissioner, told Motherboard. “My principal concern in this area wouldn’t necessarily be what the technology could do and what it couldn’t do, it would be the chilling effect of people believing it could do what it says. If that then deterred them from speaking in public, then we’re in a much bigger area than simply privacy, and privacy is big enough.”
Context plays a big part in understanding conversations. When lip reading is automated merely to trigger alerts whenever specific keywords are detected, it could result in people being wrongfully profiled and harassed.
Getting the feels: Should AI have empathy?
In this episode of the McKinsey on AI podcast, David DeLallo speaks with marketing and technology author Minter Dial. It's a very insightful conversation [David DeLallo, McKinsey & Company]. Here's my favourite part:
David DeLallo: But even so, Minter reminded me that creating empathic AI systems requires more empathic employees, including the technical professionals who build these systems.
Minter Dial: When you are going to code empathy, you need to go to coders. You need to brief them. The challenge with coders is that they are typically more logically oriented, so you need to think through how you’re going to provide them with the material to create empathic code.
And today we know that so many of the data sets are naturally biased. So you need to think, “Ethically, we want this to be unbiased.” But if you want to write this ethical principle, it turns out that the skill you need to have in order to write the right ethical principles is empathy.
So even though at the end of the day I’m trying to create an empathic AI, first of all, you need to have an ethical construct. But before even having your ethical construct, be self-aware about your own levels of empathy. Do you have on your team enough diversity to represent what your ethics are? And then you’re going to be able to have a better type of data set and coding for your machine. If you want an empathic AI, you better start off with empathy within your organization.
Empathy is something that humans struggle with as well. Creating empathic AI that is ethical is a massive challenge, especially since, as Minter Dial points out, studies show that empathy is on the decline in society in general. It's a fascinating conversation, and I urge you to click through and either listen to the podcast or read the transcript.
Other interesting articles from around the web
🧬 A Guide To Genetic Engineering Biotech And How It Works [Derek Iwasiuk, MyBioResource]
An in-depth article on genetic engineering and its applications in food, animals, and humans. It addresses not only the physical effects but also how business aspects, such as crop patents, could affect us.
Genetic engineering is one of those interesting topics that can be intimidating for those who don’t already know about it. Genetic engineering isn’t just interesting, it’s the future for many biological and medicinal fields, and we can expect to reap the benefits of genetic engineering biotechnology in the coming decades. In this guide to genetic engineering biotech, we’ve explained this field of research in simple terms that everybody should be able to understand.
🦴 New technique grows realistic bone in a dish [Anna Goshua, Scientific American]
Researchers in the Netherlands have discovered a way to create human bone in the lab using organoids. Bone is one of the most challenging organs to recreate, so being able to grow it in the lab is a big step towards understanding bone disorders.
Researchers could use this new tool to watch what happens at the molecular level when the building process goes wrong, causing bone disorders that affect tens of millions of people worldwide. One such disorder is osteogenesis imperfecta, or “brittle bone disease,” a genetic condition that weakens the extracellular matrix and can cause hundreds of spontaneous bone fractures over a person's lifetime. Bone cancers such as osteosarcoma also involve dysfunctional bone formation, and this model could explore how cancer cells infiltrate the extracellular matrix and make unwanted new bone.
🤷 Google News thinks I’m the queerest AI journalist on Earth [Tristan Greene, Neural, TNW]
Tristan Greene stumbled upon an interesting observation. Google’s algorithm associates him with queerness and AI. The link above contains the story of how he discovered this behaviour. The excerpt below is from the Neural email newsletter.
Apparently, I'm so queer that if you do a Google News search for "artificial intelligence queer," 30% of what the algorithm spits out was written by me.
And here's the fun part: most of those stories have nothing to do with 'queer' topics.
It's a good thing I am queer. You see, the algorithm apparently associates me with queerness because my TNW author profile lists "queer stuff" as one of my beats.
What if I wasn't queer? Who would I even call to get that fixed?
And even though I am, it still feels kind of crappy to know the algorithm reduces my existence to "queer, writes about AI."
Quote of the week
“One of the things the last 10 years in AI and [machine learning] have shown us is that there’s no way to predict the future in any meaningful way, but it’s really unwise to underestimate things.”
—Rodrigo Mira, a PhD candidate at Imperial College London, from the article "Tech companies are training AI to read your lips" [VICE]
I wish you a brilliant day ahead :)
Neeraj