Video conferencing can tank collective intelligence / Humans + Tech - #76
+ Neuralink's monkey experiment raises questions from scientists and tech ethicists + Facebook’s ad algorithms are still excluding women from seeing jobs + Other interesting articles
Hi,
I watched the documentary Coded Bias on Netflix this week. If you haven’t yet watched it, please do. The documentary is eye-opening and scary, bringing to light the various issues around bias in algorithms, particularly facial recognition algorithms, and the harm they bring to people of colour in particular and human society in general.
It’s incredibly inspiring as well to see the journey of Joy Buolamwini, a Ghanaian-American, who stumbled upon the bias in facial recognition algorithms in a class project at MIT and persevered to find the roots of this bias, eventually leading her to launch the Algorithmic Justice League (AJL) and address the US Congress on the dangers of algorithmic bias.
Onto this week’s articles.
Video conferencing can tank collective intelligence
In Humans + Tech - Issue #69, I linked to an article about Zoom fatigue that listed, among the reasons for fatigue, the higher cognitive load that video demands, the feeling of everyone constantly looking at you, and constantly staring back at yourself.
Researchers at Carnegie Mellon University have conducted an interesting study that shows that video conferencing can actually reduce collective intelligence [Carnegie Mellon, Futurity].
The researchers focused their study on two forms of synchrony (when two or more nonverbal behaviours are aligned): Facial expression synchrony—the perceived movement of facial features, and prosodic synchrony—the intonation, tone, stress, and rhythm of speech. Two groups were tested. In both groups, people were split into pairs and physically separated. One group had only audio capabilities to communicate, while the second group had audio and video communication capabilities.
The groups with video access did achieve some form of collective intelligence through facial expression synchrony, suggesting that when video is available, collaborators should be aware of these cues. However, the researchers found that prosodic synchrony improved collective intelligence whether or not the group had access to video technology and that this synchrony was enhanced by equality in speaking turns.
Most striking, though, was that video access dampened the pairs’ ability to achieve equality in speaking turns, meaning that video conferencing can actually limit prosodic synchrony and therefore impede collective intelligence.
The results are counter-intuitive and surprising. That makes two studies suggesting that audio-only communication can be more productive than video communication, and that it may result in better collaborative problem-solving.
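The study’s key variable, “equality in speaking turns,” is something you can actually compute from a call recording. The paper’s exact metric isn’t reproduced here, but a minimal sketch of one common way to score it (one minus the Gini coefficient of per-person speaking time, so higher means more even turn-taking) looks like this:

```python
def turn_taking_equality(speaking_seconds):
    """Score how evenly speaking time is shared in a conversation.

    Returns 1 minus the Gini coefficient of per-person speaking time:
    1.0 means perfectly even turn-taking, lower values mean one person
    dominates. An illustrative measure, not the study's exact metric.
    """
    times = sorted(speaking_seconds)
    n, total = len(times), sum(times)
    if n < 2 or total == 0:
        return 1.0
    # Gini coefficient via the sorted-values formula
    gini = sum((2 * (i + 1) - n - 1) * t for i, t in enumerate(times)) / (n * total)
    return 1.0 - gini

# One partner talks five times as much as the other vs. an even split
print(turn_taking_equality([300, 60]))   # ~0.67
print(turn_taking_equality([180, 180]))  # 1.0
```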
Neuralink's monkey experiment raises questions from scientists and tech ethicists
Elon Musk’s Neuralink recently demoed a monkey playing Pong with his mind [YouTube] through Neuralink devices implanted in his brain. Although experiments like this have been done as far back as 2002, Neuralink’s technology is far more advanced.
Although the primary goal of Neuralink is to create a device that can help people with movement and memory problems, ethicists are concerned about the social implications of mind-reading devices [Sissi Cao, Observer].
And even if Musk’s company succeeds on the tech front, the broader social implications of a mind-reading brain device are complicated.
“While I’m excited about the therapeutic applications of brain chips for those with movement and memory problems, I worry about the widespread use of brain chips in the future,” Schneider told Observer in an email.
“Without proper regulations, your innermost thoughts and biometric data could be sold to the highest bidder,” she added. “People may feel compelled to use brain chips to stay employed in a future in which AI outmodes us in the workplace.”
In Humans + Tech - Issue #10, I linked to an article about NextMind [WIRED] - which allows humans to control compatible software through a noninvasive neural interface that sits on the back of one’s head and translates brain waves into data.
Although scientists in the Observer article linked above think that accurate mind-reading consumer devices are not likely soon, technologies like this have a habit of accelerating suddenly when the key technical hurdles have been overcome. I think we are close to that point.
In Humans + Tech - Issue #70, I linked to an article about how AI can read brain data and generate images that are attractive to that particular individual [ScienceDaily]. The commercialisation of this technology is not far off, in my opinion.
With all the big tech companies and many startups focusing on VR technology, and with headsets being the primary device for immersing yourself in VR, you can bet that they are all researching how to read your brainwaves and translate your thoughts into actions.
I fully support developing these technologies for helping humans, especially those who are disabled, who will benefit the most. But the privacy repercussions are dire. Governments need to start understanding the implications of these technologies and introduce privacy laws around the use of brain data immediately. Otherwise, the privacy issues we currently experience with social media and with the marketing analytics that track our every move on the internet will look like child’s play.
Facebook’s ad algorithms are still excluding women from seeing jobs
Independent researchers at the University of Southern California conducted an audit on Facebook’s ad algorithms. They discovered that the algorithms are gender-biased and show different ads to men and women, even though the qualifications required are the same [Karen Hao, MIT Technology Review].
The researchers registered as an advertiser on Facebook and bought pairs of ads for jobs with identical qualifications but different real-world demographics. They advertised for two delivery driver jobs, for example: one for Domino’s (pizza delivery) and one for Instacart (grocery delivery). There are currently more men than women who drive for Domino’s, and vice versa for Instacart.
Though no audience was specified on the basis of demographic information, a feature Facebook disabled for housing, credit, and job ads in March of 2019 after settling several lawsuits, algorithms still showed the ads to statistically distinct demographic groups. The Domino’s ad was shown to more men than women, and the Instacart ad was shown to more women than men.
The researchers found the same pattern with ads for two other pairs of jobs: software engineers for Nvidia (skewed male) and Netflix (skewed female), and sales associates for cars (skewed male) and jewelry (skewed female).
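The researchers’ actual impression counts aren’t reproduced here, but the core check behind an audit like this is simple: compare the share of women among each ad’s impressions and ask whether the gap could plausibly be chance. A minimal sketch with made-up numbers, using a standard two-proportion z-test (not necessarily the researchers’ exact method):

```python
from math import sqrt

def delivery_skew_z(women_a, total_a, women_b, total_b):
    """Two-proportion z-test: is the female share of impressions for
    ad A different from ad B? Returns (share_a, share_b, z)."""
    p_a, p_b = women_a / total_a, women_b / total_b
    pooled = (women_a + women_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return p_a, p_b, (p_a - p_b) / se

# Hypothetical impression counts for a Domino's vs. Instacart ad pair
share_a, share_b, z = delivery_skew_z(women_a=3_800, total_a=10_000,
                                      women_b=5_700, total_b=10_000)
print(f"female share: {share_a:.0%} vs {share_b:.0%}, z = {z:.1f}")
# |z| > 1.96 -> the skew is very unlikely to be random noise
```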
Every week, it seems, there is an article that highlights an example of algorithmic bias, mostly among big tech companies. These are the companies with the most resources to combat this issue, and if they cannot be bothered to do so, they need to be regulated. The impacts on society are far too dangerous for governments to continue to ignore.
+ Revealed: the Facebook loophole that lets world leaders deceive and harass their citizens [Julia Carrie Wong, The Guardian]
When world leaders and politicians can use Facebook to deceive the public and harass opponents, what incentive do they have to regulate them?
Facebook’s policies require all accounts to be held by real individuals, and they enforce the rule that each person can only have one account. However, this policy does not exist for pages. Prominent politicians are using this loophole to create fake pages that promote their propaganda on Facebook.
The Guardian has seen extensive internal documentation showing how Facebook handled more than 30 cases across 25 countries of politically manipulative behavior that was proactively detected by company staff.
The investigation shows how Facebook has allowed major abuses of its platform in poor, small and non-western countries in order to prioritize addressing abuses that attract media attention or affect the US and other wealthy countries. The company acted quickly to address political manipulation affecting countries such as the US, Taiwan, South Korea and Poland, while moving slowly or not at all on cases in Afghanistan, Iraq, Mongolia, Mexico and much of Latin America.
“There is a lot of harm being done on Facebook that is not being responded to because it is not considered enough of a PR risk to Facebook,” said Sophie Zhang, a former data scientist at Facebook who worked within the company’s “integrity” organization to combat inauthentic behavior. “The cost isn’t borne by Facebook. It’s borne by the broader world as a whole.”
The issue of privacy is important on an individual level, but it’s far more crucial on a societal level. When companies like Facebook, which have data on billions of people, can sell that data to the highest bidder and allow them to manipulate and deceive entire populations and nations, we need to take action. You have the power. Delete Facebook.
Other interesting articles from around the web
🤖 Study: People trust the algorithm more than each other [Tristan Greene, The Next Web]
A study by the University of Georgia showed that people trust algorithms more than other people as tasks get more difficult.
In three preregistered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased. This effect persisted even after controlling for the quality of the advice, the numeracy and accuracy of the subjects, and whether subjects were exposed to only one source of advice, or both sources.
This is not a good sign. It shows that people implicitly trust algorithms even when they are not suited to providing solutions to certain types of problems.
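A common way advice-taking studies quantify “relying on advice” is a weight-of-advice score: how far a subject moves from their initial answer toward the advisor’s suggestion. The paper’s exact analysis isn’t reproduced here, but the measure itself is easy to sketch:

```python
def weight_of_advice(initial, advice, final):
    """Fraction of the distance from the initial estimate to the advice
    that the subject actually moved: 0 means the advice was ignored,
    1 means it was adopted fully."""
    if advice == initial:
        return None  # no usable signal when the advice matches the first guess
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))  # clip over- and under-shooting into [0, 1]

# Hypothetical trial: estimate a quantity, then see an algorithm's estimate
print(weight_of_advice(initial=400, advice=900, final=775))  # 0.75
```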
👨‍🏭 Amazon’s new algorithm will spread workers’ duties across their muscle-tendon groups [Thomas Macaulay, The Next Web]
Amazon does not have the best reputation for employee treatment, especially in recent weeks. The company has now announced that it is using an algorithm to rotate employees between jobs that use different muscle-tendon groups, to reduce injuries from repetitive motion.
Jeff Bezos unveiled the system in his final letter as CEO to Amazon shareholders:
We’re developing new automated staffing schedules that use sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion and help protect employees from MSD [musculoskeletal disorder] risks. This new technology is central to a job rotation program that we’re rolling out throughout 2021.
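Bezos’s letter doesn’t explain how the scheduling works, so the following is only a toy sketch of the core idea (never assign someone a job that loads the same muscle-tendon group as their previous shift), with hypothetical job names and groupings rather than Amazon’s actual system:

```python
# Hypothetical mapping of warehouse jobs to the muscle-tendon group they load most
JOB_GROUPS = {
    "lift_and_stow": "shoulders",
    "pick_small_items": "wrists",
    "pack_boxes": "wrists",
    "pallet_wrap": "back",
    "scan_and_sort": "shoulders",
}

def rotate(workers, shifts):
    """Greedy rotation: each shift, give every worker a job whose
    muscle-tendon group differs from the one they used last shift."""
    last_group = {w: None for w in workers}
    schedule = []
    for _ in range(shifts):
        assignment = {}
        jobs = list(JOB_GROUPS)
        for w in workers:
            # first available job that doesn't repeat the worker's previous group
            job = next((j for j in jobs if JOB_GROUPS[j] != last_group[w]), jobs[0])
            jobs.remove(job)
            assignment[w] = job
            last_group[w] = JOB_GROUPS[job]
        schedule.append(assignment)
    return schedule

for shift in rotate(["Ana", "Ben", "Chen"], shifts=3):
    print(shift)
```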
🥴 AI is increasingly being used to identify emotions – here’s what’s at stake [Alexa Hagerty and Alexandra Albert, The Conversation]
Once again, emotion recognition technology (ERT) is based on biased algorithms. Yet this technology is being used in customer reaction testing, hiring, security, police work, and education.
Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions” which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.
This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.
Also, like other forms of facial recognition technology, ERT is encoded with racial bias. A study has shown that systems consistently read black people’s faces as angrier than white people’s faces, regardless of the person’s expression. Although research into racial bias in ERT is still limited, racial bias in other forms of facial recognition is well-documented.
Quote of the week
“While I’m excited about the therapeutic applications of brain chips for those with movement and memory problems, I worry about the widespread use of brain chips in the future. Without proper regulations, your innermost thoughts and biometric data could be sold to the highest bidder. People may feel compelled to use brain chips to stay employed in a future in which AI outmodes us in the workplace.”
—Susan Schneider, cognitive psychologist and philosopher, from the article, “Neuralink's monkey experiment raises questions from scientists and tech ethicists” [Observer]
I wish you a brilliant day ahead :)
Neeraj