Robot paramedic carries out CPR in ambulance in UK first / Humans + Tech - #83
+ Report finds startling disinterest in ethical, responsible use of AI among business leaders + AI-powered drone deployed in Libya has possibly killed people without any human intervention + More
Hi,
Articles #2 and #3 below don’t paint a rosy picture. As humans, it seems we are putting too much trust in AI, while business leaders show little interest in ensuring that the systems they use are ethical and responsible.
Robot paramedic carries out CPR in ambulance in UK first
When humans administer cardiopulmonary resuscitation (CPR), they get fatigued relatively quickly, which affects the quality of CPR they can deliver. LUCAS 3 is a mechanical system that can administer high-quality CPR consistently without a break. South Central Ambulance Service (SCAS), which covers four counties in the UK, is the first NHS ambulance service to carry LUCAS 3 on board its vehicles [E&T Editorial staff, The Institution of Engineering and Technology].
The system uses wireless Bluetooth connectivity, allowing the compression rate, depth, and alerts to be configured to match an organisation’s resuscitation guidelines. Paramedics can also collect and review data post-event and share it with other clinicians, its developers said.
Dr John Black, medical director at SCAS, said: “We know that delivering high quality and uninterrupted chest compressions in cardiac arrest is one of the major determinants of survival to hospital discharge, but it can be very challenging for several reasons.
“People can become fatigued when performing CPR manually which then affects the rate and quality of compressions, and patients may need to be moved from difficult locations, such as down a narrow flight of stairs, or remote places, which impedes the process.”
The NHS already uses LUCAS in emergency departments and ICUs. If more paramedics can use LUCAS in their ambulances, it could meaningfully improve survival rates.
Report finds startling disinterest in ethical, responsible use of AI among business leaders
In some alarming news, a report from FICO and Corinium finds that companies are deploying AI throughout their businesses with little regard for potential problems or their ethical implications [Jonathan Greig, ZDNet].
The last decade has produced hundreds of examples of AI being used disastrously by companies, from facial recognition systems unable to discern darker-skinned faces, to healthcare apps that discriminate against African American patients, to recidivism calculators used by courts that skew against certain races.
Despite these examples, FICO's State of Responsible AI report shows business leaders are putting little effort into ensuring that the AI systems they use are both fair and safe for widespread use.
The survey, conducted in February and March, features the insights of 100 AI-focused leaders from the financial services sector, with 20 executives each from the US, Latin America, Europe, the Middle East and Africa, and the Asia Pacific region.
Out of the executives surveyed:
70% were unable to explain how the AI made predictions.
78% said that they were not adequately equipped to address the ethical implications of new AI systems.
80% said they had difficulty convincing other executives to consider and prioritise ethical AI usage practices.
65% said their processes were ineffective in ensuring regulatory compliance of AI projects.
77% agreed that AutoML technology could be misused.
It’s disheartening to see business executives so nonchalant about the use of AI within their organisations, especially when so many AI systems continue to exhibit high levels of bias.
It is also a complex problem to solve. AI systems continue to be deployed globally at a rapid pace, while citizens depend on their respective governments to introduce regulation that ensures AI is used ethically. Governments are reacting far more slowly than AI use is growing worldwide, even as it affects all aspects of our lives.
AI-powered drone deployed in Libya has possibly killed people without any human intervention
Speaking of unethical uses of AI, the UN Panel of Experts on Libya has published a report saying a “lethal autonomous weapon” was used to hunt targets without any human intervention [Tefo Mohapi, iAFrikan].
An artificial-intelligence-powered military drone was reportedly able to identify and attack human targets in Libya. The drone, the Kargu-2, is made by a Turkish company (STM) and fitted with a payload that explodes once it makes impact or comes into close proximity to its AI-identified target.
It is not clear whether the attacks resulted in any deaths.
The revelations were made in a report published in March 2021 by the UN Panel of Experts on Libya, which stated that the drone was a “lethal autonomous weapon” that had “hunted down and remotely engaged” soldiers believed to have been loyal to Libya’s General Khalifa Haftar.
This is a dangerous turn of events if militaries are now deploying completely autonomous weapons with no human control. Every week it seems that more and more science fiction becomes a reality.
AI has still not proven reliable enough to be deployed to work autonomously in weaponry, not to mention the problems of bias that most algorithms typically harbour.
Other interesting articles from around the web
⏳ Virtual reality warps your sense of time [Allison Arteaga Soergel, UC Santa Cruz]
When time passes faster than you realise, it’s known as “time compression.” A study at UC Santa Cruz found that participants playing a game in virtual reality kept playing for an average of 28.5% longer before feeling that five minutes had passed than participants playing in traditional formats.
Grayson Mullen, the study’s lead author, recalled the experience that prompted the research: “I stopped playing the game, and I realized that I had no idea how much time had passed. I was supposed to be taking turns with other people, and I was worried that I had played for too long because I couldn't even guess if it had been 10 minutes or 40 minutes.”
🎨 Bacteria get a fresh gig as art restorers in Italy [Mary Beth Griggs, The Verge]
Bacteria have a generally bad reputation because we associate them with infection. However, there are plenty of good and useful bacteria, and humans are now enlisting their help to restore centuries-old art to pristine condition.
The team selected specialized strains of bacteria to target different stains on the marble. Some types of bacteria can thrive in harsh environments and are adapted to eating things that can cause humans problems. These bacteria can break down things like pollutants into relatively harmless components.
In this case, the team looked for bacterial strains that would eat away at the stains and other gunk, without harming the marble itself, and tested their top choices on an unobtrusive patch of marble behind an altar in the chapel. They found a few types that would work, and used gel to spread them across the statues. The different strains of bacteria ate away at residues, glue, and even the stains from an improperly disposed-of corpse that was dumped in one of the tombs in 1537.
There are more examples of how Italy is using bacteria in other places at the link. It’s time to stop being anti-bacteria 😄.
🚚 A self-driving truck got a shipment cross-country 10 hours faster than a human driver [Vanessa Bates Ramirez, Singularity Hub]
A heavy-duty self-driving truck drove autonomously 80% of the way from Arizona to Oklahoma, completing the journey in 14 hours and 6 minutes. A human driver usually takes 24 hours and 6 minutes for the same route. The watermelons the truck was transporting arrived in better shape, too.
The watermelons were in better shape because they were a day fresher. This is one angle TuSimple hopes will boost its business. “We believe the food industry is one of many that will greatly benefit from the use of TuSimple’s autonomous trucking technology,” said Jim Mullen, the company’s chief administrative officer. “Given the fact that autonomous trucks can operate nearly continuously without taking a break means fresh produce can be moved from origin to destination faster, resulting in fresher food and less waste.”
TuSimple aims to have fully autonomous trucks (no safety drivers required onboard) by the end of 2024.
Quote of the week
“Many don't understand that your model is not ethical unless it's demonstrated to be ethical in production. It's not enough to say that I built the model ethically and then I wash my hands of it. What we're missing today is honest and straight talk about which algorithms are more responsible and safe.”
—FICO CAO Scott Zoldi, from the article “Report finds startling disinterest in ethical, responsible use of AI among business leaders” [ZDNet]
I wish you a brilliant week ahead :)
Neeraj