Researchers develop 'explainable' artificial intelligence algorithm / Humans + Tech - #77
+ AI comes to car repair, and body shop owners aren’t happy + This AI could help wipe out colon cancer + Other interesting articles
Hi,
There have been some very positive developments recently in the world of AI. The first article below talks about how researchers have created an explainable artificial intelligence algorithm—one that can explain how it arrived at its answers.
On the regulation side, both the US and the EU are looking at some form of regulation of AI.
The US FTC published a blog post saying that it plans to go after companies that use or sell biased algorithms [Elisa Jillson, FTC].
The EU has also proposed draft regulations that would restrict the use of AI in applications such as social credit surveillance, gender-biased algorithms, flawed facial recognition, and systems that manipulate our behaviour. The proposed laws are not perfect, and people on both sides are upset, but they are a step in the right direction for regulating AI [Bernd Carsten Stahl, The Conversation].
Let’s dive in.
Researchers develop 'explainable' artificial intelligence algorithm
In a significant advancement for AI, researchers at the University of Toronto and LG AI Research have developed an "explainable" artificial intelligence (XAI) algorithm [Matthew Tierney, University of Toronto, TechXplore]. A major issue with AI algorithms has been their lack of explainability: the algorithms cannot convey the rationale they used to arrive at their answers, and even their creators cannot decipher the decision-making steps the algorithms went through.
Algorithms are being integrated into processes that significantly affect our lives: deciding who gets healthcare and what your insurance premium should be, and informing decisions in courts of law, human resources departments, and education, among others. Almost all algorithms inherit some form of bias from the data they are trained on. It is therefore crucial to have explainable AI algorithms that let us verify the reasoning behind a decision, especially when that decision can have a huge impact on human lives, as in the examples above.
In a black box model, a computer might be given a set of training data in the form of millions of labeled images. By analyzing the data, the algorithm learns to associate certain features of the input (images) with certain outputs (labels). Eventually, it can correctly attach labels to images it has never seen before.
The machine decides for itself which aspects of the image to pay attention to and which to ignore, meaning its designers will never know exactly how it arrives at a result.
But such a "black box" model presents challenges when it's applied to areas such as health care, law and insurance.
"For example, a [machine learning] model might determine a patient has a 90 percent chance of having a tumor," says Sudhakar. "The consequences of acting on inaccurate or biased information are literally life or death. To fully understand and interpret the model's prediction, the doctor needs to know how the algorithm arrived at it."
In contrast to traditional machine learning, XAI is designed to be a "glass box" approach that makes the decision-making transparent. XAI algorithms run alongside traditional algorithms to audit the validity of their outputs and their learning performance. The approach also provides opportunities for debugging and for finding training efficiencies.
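The article doesn’t detail the researchers’ method, but one widely used XAI technique, a gradient-based saliency map, gives a feel for what "glass box" output looks like: it asks the trained model which pixels most influenced a particular prediction. The sketch below is an illustration in Python/PyTorch under my own assumptions, not the University of Toronto and LG AI Research algorithm.

    # Illustrative gradient-based saliency map, a common XAI technique.
    # This is NOT the University of Toronto / LG AI Research method; it only
    # shows the general idea of asking a trained model which pixels mattered.
    import torch

    def saliency_map(model, image, target_class):
        """Return per-pixel importance scores for one prediction."""
        model.eval()
        x = image.clone().unsqueeze(0).requires_grad_(True)  # add a batch dimension
        score = model(x)[0, target_class]
        score.backward()  # gradient of the class score w.r.t. the input pixels
        # Large absolute gradients mark pixels that most influenced the decision.
        return x.grad.abs().max(dim=1)[0].squeeze(0)

    # Usage (with the hypothetical `model` from the previous sketch and a
    # preprocessed image tensor):
    # heatmap = saliency_map(model, image_tensor, target_class=0)
    # Overlaying the heatmap on the image shows which regions drove the prediction.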
The researchers developed and tested their explainable algorithm on a system used to identify and eliminate defects in display screens. They hope to extend what they have learned to other algorithms as well. I’m really excited about this development.
AI comes to car repair, and body shop owners aren’t happy
Covid has accelerated the use of AI in insurance settlements. While only 15% of US auto claims were settled using photos before the pandemic, that figure has risen to 60%, and it’s expected to reach 80% by 2025. Insurers have been investing heavily in AI, training image-classification algorithms on millions of photos of damaged cars across many makes and models. Body shop owners are not happy [Aarian Marshall, WIRED].
Tractable, a company that uses computer vision and machine learning to build algorithms for insurance companies, says 25 percent of its estimates are so on-the-nose, they don’t need human intervention. The company wants to get that figure to 75 percent by the end of the year, says Alex Dalyac, Tractable’s CEO and cofounder.
One group not happy with the results: body-shop owners. “I'd say 99.9 percent of the estimates are incorrect,” says Jeff McDowell, who owns Leslie's Auto Body in Fords, New Jersey. “You can’t diagnose suspension damage or a bent wheel or frame misalignment from a photograph.”
Repair shop owners say they’re spending much more time haggling with insurance companies to determine the correct price for a repair—time for which they’re not compensated. In some cases, that means damaged vehicles are stuck in the shop for longer than usual.
Not having to visit each damaged vehicle for a manual inspection saves insurance companies time and money. They can process many more claims in a day using the estimates that the AI systems produce from photos.
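As a rough illustration of how such photo-only settlements might be triaged, here is a hypothetical Python sketch: claims whose estimates come back with high model confidence settle automatically, while the rest go to a human. The threshold, data structure, and routing rules are my assumptions, not Tractable’s or any insurer’s actual pipeline.

    # Hypothetical triage of photo-based damage estimates by model confidence.
    # The threshold and routing rules are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class Estimate:
        claim_id: str
        repair_cost: float   # model's predicted repair cost, in dollars
        confidence: float    # model's self-reported confidence, 0.0 to 1.0

    AUTO_APPROVE_THRESHOLD = 0.95  # assumed cutoff for skipping human review

    def route_claim(estimate: Estimate) -> str:
        """Decide whether a claim can settle straight from photos."""
        if estimate.confidence >= AUTO_APPROVE_THRESHOLD:
            return "auto-settle"   # no adjuster visit; paid from the AI estimate
        return "human-review"      # an adjuster or body shop inspects the vehicle

    claims = [
        Estimate("C-1001", 1840.00, 0.97),
        Estimate("C-1002", 5230.00, 0.62),  # e.g. possible hidden suspension damage
    ]
    for claim in claims:
        print(claim.claim_id, route_claim(claim))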
I sympathize with the repair shop owners, who rightfully point out that photos do not show internal damage. Perhaps, as the AI is trained on better data, such as which external photos correlate with internal damage, its estimates will get closer to actual repair costs.
This AI could help wipe out colon cancer
Doctors who perform colonoscopies are human, and they sometimes make mistakes. Some polyps are hard to see in a colonoscopy and are easily missed. In other cases, fatigue from performing back-to-back procedures can lead to errors. The US FDA has recently approved an AI called GI Genius, created by Medtronic, that can aid doctors in identifying polyps during colonoscopies [Sara Harrison, WIRED].
The system, which can be added to the scopes that doctors already use to perform a colonoscopy, follows along as the doctor probes the colon, highlighting potential polyps with a green box. GI Genius was approved in Europe in October 2019 and is the first AI cleared by the FDA for helping detect colorectal polyps. “It found things that even I missed,” says Wallace, who co-authored the first validation study of GI Genius. “It's an impressive system.”
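For a sense of what that overlay involves mechanically, here is a short Python/OpenCV sketch that draws green boxes from a detector’s output onto video frames. The detector call and box format are hypothetical; this is not Medtronic’s GI Genius code.

    # Illustrative overlay of detector output on video frames (hypothetical;
    # not Medtronic's GI Genius implementation).
    import cv2  # OpenCV

    def draw_detections(frame, boxes):
        """Draw a green rectangle around each suspected polyp region."""
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), color=(0, 255, 0), thickness=2)
        return frame

    # Usage sketch: read frames from the scope feed, run a (hypothetical) polyp
    # detector, and show the annotated frame to the endoscopist in real time.
    # cap = cv2.VideoCapture(0)
    # while cap.isOpened():
    #     ok, frame = cap.read()
    #     if not ok:
    #         break
    #     boxes = polyp_detector(frame)  # hypothetical model call
    #     cv2.imshow("scope", draw_detections(frame, boxes))
    #     if cv2.waitKey(1) & 0xFF == ord("q"):
    #         break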
Mark Pochapin, a gastroenterologist at NYU Langone who was not involved in creating GI Genius, says it makes sense that AI would be good at recognizing polyps. “There is less diversity when you’re looking at polyps,” says Pochapin. The millions of colonoscopy videos available provide plenty of data to make the algorithm comprehensive, which should shield the system from the bias concerns that have affected other health care algorithms. “There are only so many varieties of polyps,” he says.
In my opinion, AI implementations that augment doctors and prevent human error are a much better use of the technology than systems that replace the diagnostic process entirely. This way, human and machine make up for each other’s shortcomings. The danger is that doctors may start relying completely on the AI to identify all the polyps and stop looking for them themselves.
Other interesting articles from around the web
🤖 AI ethicist Kate Darling: ‘Robots can be our partners’ [Zoë Corbyn, The Guardian]
Kate Darling, an MIT Media Lab researcher and expert on tech ethics, is worried that the way we talk and think about robots today leaves us open to being manipulated and taken advantage of. She says that comparing robots and AI to human intelligence limits our imaginations; instead, we should treat robots and AI the same way we treat animals, which helps us think creatively about how to use them to help humans flourish.
“Animals and robots aren’t the same, but the analogy moves us away from the persistent robot-human one,” Darling told The Guardian. “It opens our mind to other possibilities — that robots can be our partners — and lets us see some of the choices we have in shaping how we use the technology.”
⚫️ FTC reviewing how dark patterns may affect consumers [Roger Montti, Search Engine Journal]
Dark patterns are user interfaces that try to manipulate or deceive users into taking an action against their best interests. The US FTC has announced a workshop on Dark Patterns that could lead to more regulations on tech companies to stop consumer manipulation.
What the FTC is looking into encompasses more than just user privacy. They will examine how consumers make purchases online and the cognitive tricks companies use to nudge them into acting against their best interests. This has the potential to disrupt how businesses sell online.
💬 Pepper the robot talks to itself to improve its interactions with people [Science Daily]
Ever wondered why your virtual home assistant doesn't understand your questions? Or why your navigation app took you on the side street instead of the highway? In a study published April 21st in the journal iScience, Italian researchers designed a robot that "thinks out loud" so that users can hear its thought process and better understand the robot's motivations and decisions.
This is another form of explainable AI, which may make robots easier to understand and help people trust them more.
Quote of the week
“I worry that companies may try to take advantage of people who are using this very emotionally persuasive technology. For example, a sex robot exploiting you in the heat of the moment with a compelling in-app purchase. Similar to how we’ve banned subliminal advertising in some places, we may want to consider the emotional manipulation that will be possible with social robots.”
—Kate Darling, MIT Media Lab researcher, from the article, “AI ethicist Kate Darling: ‘Robots can be our partners’” [The Guardian]
I wish you a brilliant day ahead :)
Neeraj