Apps against sexual violence don’t work / Humans + Tech - #73
+ Researchers are helping artificial intelligence understand fairness + An AI created to argue with humans + Other interesting articles from around the web
There is a subtle change in this newsletter. Bragging rights to the first person who notices it. Reply to this email or leave a comment if you figure out what it is.
Onto this week’s articles.
Apps against sexual violence have been tried before. They don’t work.
A few days ago in Australia, New South Wales Police Commissioner Mick Fuller suggested that technology should be part of the solution to growing concerns around sexual assault [Kathryn Henne, Jenna Imad Harb, and Renee M. Shelby, The Conversation]. He admitted it may be a bad idea, but offered it as one way to start a conversation about how consent can be clearly communicated.
Apps addressing sexual harassment have been implemented in many different environments since 2011. Beyond documenting consent, they provide features such as emergency assistance, information, and a way to report perpetrators. Despite all these capabilities, these apps do not work, and in some scenarios can even work against the victims.
The major challenge with apps that record consent is that they fail when consent is withdrawn. Worse, the record of that initial consent can then be used against the victim.
In the case of the proposed consent app, critics have noted that efforts to time-stamp consent fail to recognise consent can always be withdrawn. In addition, a person may consent out of pressure, fear of repercussions or intoxication.
If a person does indicate consent at some point but circumstances change, the record could be used to discredit their claims.
Other than failing to solve the problem of gaining true consent, these apps also endanger users by compromising their privacy and safety.
This is a societal and human values problem and not one that can be solved by technology.
There are other reasons why the consent app is a bad idea. It perpetuates misguided assumptions about technology’s ability to “fix” societal harms. Consent, violence and accountability are not data problems. These complex issues require strong cultural and structural responses, not simply quantifiable and time-stamped data.
Researchers are helping artificial intelligence understand fairness
Even humans find it difficult to agree on what is fair. Teaching AI to understand fairness is going to be a very challenging project [Matt Davenport, TechXplore]. But that’s what Michigan State University’s Pang-Ning Tan, a Computer Science and Engineering professor, and Abdol-Hossein Esfahanian, the chair of Tan’s department and an expert in applied graph theory, have set out to do.
A conventional definition would look at fairness from the perspective of an individual; that is, whether one person would see a particular outcome as fair or unfair. It's a sensible start, but it also opens the door for conflicting or even contradictory definitions, Tan said. What's fair to one person can be unfair to another.
So Tan and his research team are borrowing ideas from social science to build a definition that includes perspectives from groups of people.
"We're trying to make AI aware of fairness and to do that, you need to tell it what is fair. But how do you design a measure of fairness that is acceptable to all?" Tan said. "We're looking at how does a decision affect not only individuals, but their communities and social circles as well."
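To make the tension between individual and group perspectives concrete, here is an illustrative sketch of one common group-level fairness measure, the demographic parity gap, which compares how often a favourable decision goes to each group. This is not the MSU team's method, and the loan data below is hypothetical; it simply shows how a decision set that looks reasonable case by case can still reveal a group-level disparity.

```python
def selection_rate(decisions):
    """Fraction of favourable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups.

    0.0 means every group receives favourable decisions at the same
    rate; larger values indicate a bigger group-level disparity, even
    if each individual decision seemed fair on its own.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved, 0 = denied):
groups = {
    "group_a": [1, 1, 0, 1, 1],  # 4 of 5 approved (80%)
    "group_b": [1, 0, 0, 0, 1],  # 2 of 5 approved (40%)
}
gap = demographic_parity_gap(groups)  # 0.8 - 0.4 = a gap of 0.4
```

Even this simple measure can conflict with an individual's sense of fairness (a well-qualified applicant in the over-approved group may be denied to close the gap), which is exactly the kind of contradiction the researchers are trying to reconcile.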
It’s a mighty challenge, and I hope they are successful. As AI finds its way into every part of our lives, making decisions about who gets healthcare, what your insurance premiums should be, and who gets a loan, teaching it to make fair decisions is critical to building public trust in these systems.
An AI created to engage in debate with humans
“Argument Technology” is a phrase that I never considered until this week.
IBM has been working on Project Debater, an AI that can debate competitively with humans [Loukia Papadopoulos, Interesting Engineering]. In tests, Project Debater was given 15 minutes to research the topic. It was able to form both opening statements and counterarguments.
Although humans won the debates, Project Debater was able to change the minds of some people. Very impressive!
"Project Debater is a crucial step in the development of argument technology and in working with arguments as local phenomena. Its successes offer a tantalizing glimpse of how an AI system could work with the web of arguments that humans interpret with such apparent ease," Chris Reed writes in a critique of the new project published in Nature magazine.
"Given the wildfires of fake news, the polarization of public opinion and the ubiquity of lazy reasoning, that ease belies an urgent need for humans to be supported in creating, processing, navigating and sharing complex arguments — support that AI might be able to supply."
So far, Project Debater has not been released to the public. Dan Robitzski of Futurism correctly points out that if people ever unleashed an algorithm like Project Debater through bots on social media, it would create an endless black hole of arguments online [Futurism].
Other interesting articles from around the web
💘 AI tries its skills at pickup lines [Janelle Shane, AI Weirdness]
Janelle Shane tested four variants of GPT-3, a language model from OpenAI that uses deep learning to produce human-like text. The results are hilarious: some hits, but mostly misses – AI still has quite a way to go to help our love lives. Here’s one of the lines it came up with:
I'm losing my voice from all the screaming your hotness is causing me to do.
🥸 How Deepfakes could help implant false memories in our minds [Tristan Greene, The Next Web]
I just finished reading Nina Schick’s brilliant book, “Deep Fakes and the Infocalypse: What You Urgently Need To Know.” So this article immediately grabbed my attention.
What is crucial in this research is that the researchers were also able to remove false memories. Deepfakes present many problems to society, as both Nina Schick in her book and Tristan Greene in this article point out.
Just like the invention of the firearm made it possible for those unskilled in sword fighting to win a duel and the creation of the calculator gave those who struggle with math the ability to perform complex calculations, we may be on the cusp of an era where psychological manipulation becomes a push-button enterprise.
Quote of the week
"Algorithms are created by people and people typically have biases, so those biases seep in. We want to have fairness everywhere, and we want to have a better understanding of how to evaluate it."
—Abdol-Hossein Esfahanian, Professor at Michigan State University, from the article “Researchers are helping artificial intelligence understand fairness.” [TechXplore]
I wish you a brilliant day ahead :)