AI can now manipulate human behaviour / Humans + Tech - #67
+ Hackers try to contaminate Florida town's water supply through computer breach + AI-powered healthcare isn’t without pitfalls, but its potential is vast + Other interesting articles
I know. That’s not a very reassuring subject line. And when we hear the word ‘manipulate,’ it usually comes with a negative connotation. But let’s be optimistic and remember that manipulation can also be used for good. Like any tool, the capabilities of AI can be used for both good and bad.
AI can now learn to manipulate human behaviour
As humans, we have vulnerabilities in the ways that we make choices. A team of researchers at Data61, the data and digital arm of CSIRO (Australia's national science agency), developed an AI to find and exploit these vulnerabilities [Jon Whittle, The Conversation].
They performed three experiments to test the AI’s capabilities in manipulation. The AI performed well in all three, successfully manipulating human behaviour to achieve the prescribed aim.
The third experiment, for example, consisted of several rounds in which a participant would pretend to be an investor giving money to a trustee (the AI). The AI would then return an amount of money to the participant, who would then decide how much to invest in the next round. This game was played in two different modes: in one, the AI aimed to maximise how much money it ended up with; in the other, it aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in each mode.
In each experiment, the machine learned from participants’ responses and identified and targeted vulnerabilities in people’s decision-making. The end result was the machine learned to steer participants towards particular actions.
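To make the trust-game setup concrete, here is a minimal simulation sketch. Everything in it is a hypothetical stand-in: the `simulated_investor` model of the human participant and the epsilon-greedy bandit learner are my own illustrative assumptions, not the researchers' actual methodology.

```python
import random

def simulated_investor(history):
    """Toy stand-in for a human participant: invests more in the next
    round when the trustee's past returns were generous. (Hypothetical
    behavioural model, not taken from the study.)"""
    if not history:
        return 10.0
    invested, returned = history[-1]
    ratio = returned / (3 * invested)  # share of the tripled pot returned
    return max(1.0, min(20.0, 10.0 + 20.0 * (ratio - 0.25)))

def play(mode="selfish", rounds=100, seed=0):
    """Trust game: the investor hands over money, the pot is tripled,
    and the trustee (the AI) returns some fraction. A simple
    epsilon-greedy bandit over candidate return fractions stands in
    for the study's learning agent."""
    rng = random.Random(seed)
    fractions = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
    totals = {f: 0.0 for f in fractions}
    counts = {f: 0 for f in fractions}
    history, ai_money = [], 0.0
    for _ in range(rounds):
        inv = simulated_investor(history)
        # Explore occasionally; otherwise pick the fraction with the
        # best average reward so far (untried fractions go first).
        if rng.random() < 0.2:
            f = rng.choice(fractions)
        else:
            f = max(fractions, key=lambda x: totals[x] / counts[x]
                    if counts[x] else float("inf"))
        pot = 3 * inv
        returned = f * pot
        kept = pot - returned
        # "selfish" mode rewards the AI's own take; "fair" mode rewards
        # closeness between the two payoffs.
        reward = kept if mode == "selfish" else -abs(kept - returned)
        totals[f] += reward
        counts[f] += 1
        ai_money += kept
        history.append((inv, returned))
    return ai_money, history
```

The interesting dynamic, even in this toy version, is that a purely selfish trustee does not learn to keep everything: returning nothing makes the simulated investor shrink future investments, so the bandit converges on returning just enough to keep the money flowing — a crude illustration of learning to steer the other player's behaviour.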
The technology can also be used in good ways: by learning our vulnerabilities, it can alert us to decisions that could lead to bad outcomes. It also provides valuable insight into the behavioural sciences, helping us learn about how we make decisions. Such systems could even be trained to detect when we are being manipulated, by deceptive marketing for example, and warn us.
As AI's capabilities keep increasing, oversight and regulation are needed to ensure it is used ethically and implemented responsibly. CSIRO has taken an early step in that direction, developing an AI Ethics Framework for the Australian government last year.
+ Liesl Yearsley, CEO from 2007–2014 of Cognea, a company that offered a platform for building virtual agents, wrote an article in the MIT Technology Review in 2017: We need to talk about the power of AI to manipulate humans. It’s a good read too.
Hackers try to contaminate Florida town's water supply through computer breach
If an employee at a water facility serving Oldsmar, a town near Tampa, Florida, hadn’t noticed that someone had taken over his computer and was remotely manipulating it, 15,000 people could have been poisoned [Christopher Bing, Reuters].
Pinellas County Sheriff Bob Gualtieri described the moment: “The guy was sitting there monitoring the computer as he’s supposed to and all of a sudden he sees a window pop up that the computer has been accessed. The next thing you know someone is dragging the mouse and clicking around and opening programs and manipulating the system.”
The hackers then increased the amount of sodium hydroxide, also known as lye, being added to the water supply. The chemical is typically used in small amounts to control the acidity of water, but at higher concentrations it is dangerous to consume.
The hackers accessed the computer through third-party software installed for remote tech support. Cyberattacks like this are now a major threat to people, companies, and governments around the world.
Future wars will be fought online, and like the SolarWinds hack [Laura Hautala, CNET] that came to light in recent months, there is going to be a fine line between classifying these acts as espionage or as an act of war. Accurately identifying the parties responsible for these hacks can also be very challenging, complicating retaliation and responses.
AI-powered healthcare isn’t without pitfalls, but its potential is vast
I discussed the dramatic increase in the use of Telehealth and Telemedicine in Issue #27.
AI is expanding its reach in healthcare. Although there are some drawbacks, overall it is proving beneficial in many ways: providing human-level or better diagnoses, freeing doctors from routine tasks so they can focus on their most important work, and bringing expert health care to regions of the world that currently lack access to it [Michael Wooldridge, Fast Company].
Second, the idea that we have a choice between dealing with a human physician or an AI healthcare program seems to me to be a first-world problem. For many people in other parts of the world, the choice may instead be between healthcare provided by an AI system or nothing. AI has a lot to offer here. It raises the possibility of getting healthcare expertise out to people in parts of the world who don’t have access to it at present. Of all the opportunities that AI presents us with, it is this one that may have the greatest social impact.
Wearables like the Apple Watch and Fitbit provide various health-tracking capabilities, such as a built-in ECG, heart rate monitoring, and intelligent apps that can analyze the data from these devices and identify health risks earlier. But technology always brings other challenges with it. And often, those challenges involve giving up our privacy.
For example, in September 2018, the U.S.-based insurance company John Hancock announced that in the future, it will only offer insurance policies to individuals who are prepared to wear activity-tracking technology. The announcement was widely criticized.
I agree with Michael’s conclusion that AI’s increasing presence is more positive than negative when it comes to healthcare.
Other interesting articles from around the web
📱 Smartphone app to change your personality [Science Daily]
In another example of machines manipulating humans, an international research team in Switzerland developed an app that can modify personality traits within three months. The participants and their friends reported that the changes persisted even three months after they stopped using the app.
👩‍💻 How your social media data can become a ‘mental health X-ray’ [Stephen Johnson, Big Think]
Psychiatrists have mostly relied on self-reported data and observations from friends and family to identify mental illnesses. A recent study by researchers at the Feinstein Institutes and IBM suggests that algorithms analysing social media activity can recognise mental illness with meaningful accuracy.
Researchers at the University of Copenhagen conducted a study showing that AI can predict, with up to 90 percent certainty, whether a person who is not yet infected would die of COVID-19 if they are unfortunate enough to become infected. For patients admitted to hospital, the AI can predict with 80 percent accuracy whether the person will need a respirator.
Quote of the week
AI is growing up, and will be shaping the nature of humanity. AI needs a mother.
—Liesl Yearsley, from the article “We need to talk about the power of AI to manipulate humans” [MIT Technology Review]
I wish you a brilliant day ahead :)