How technology rewrites your diet / Humans + Tech - #60
+ This is the Stanford vaccine algorithm that left out frontline doctors + Algorithms Behaving Badly: 2020 Edition + Google told its scientists to 'strike a positive tone' in AI research - documents
Hi,
I hope you had a wonderful Christmas holiday :)
How technology rewrites your diet
We don’t often realise it, but technology hugely influences what we eat. From the way food is grown, packaged, stored, and distributed to the tools and techniques used to prepare it, technology has a big part to play in what ends up in our bellies.
Your diet is determined in large part by your tastes and by the climate and culture where you live. But technology also plays a big role. Whether we look forward or back in time, we can see how new technologies change what and how we eat.
Such changes can come from many places: from engineers who develop a new kitchen appliance like the microwave or the multi-cooker, from food scientists who pioneer novel techniques to produce more plant-based foods, or from entrepreneurs who raise millions in venture capital funding to sell meal kits online.
MIT Technology Review asked various global experts for their opinion on which technologies are changing our food [Amy Nordrum, MIT Technology Review].
They highlighted new transportation infrastructure, personalized nutrition, precision agriculture, e-commerce, better fermentation methods, genetic modification, and new packaging materials as the technologies most likely to have a significant impact.
I’ve highlighted an insight from one of the experts immediately below. I encourage you to click through to the article and read the opinions of all the other experts. Each of them is insightful.
Christine Gould
Founder and CEO, Thought for Food (Switzerland)
Per capita global food production has increased for decades. But having more food doesn’t mean people are better nourished. Diseases caused by unhealthy diets—such as obesity, diabetes, cancer, and cardiovascular disease—are the primary cause of mortality in much of the world.
One problem is that our scientific understanding of food is still rudimentary. At most, 150 biochemicals are listed in conventional nutrition databases. That’s a tiny fraction of the tens of thousands of compounds found in food. Some describe the many that remain unknown as “nutritional dark matter.”
I see potential in the emerging field of personalized nutrition, which aims to combine new knowledge about such compounds with insights from an individual’s own genetics and microbiome to deliver customized dietary guidelines and plans. The goal is a world in which people are not just fed, but nourished.
This is the Stanford vaccine algorithm that left out frontline doctors
Resident physicians at Stanford Medical Center, many of them on the front lines of Covid-19, were shocked to see that only 7 out of 1,300 of them made the priority list for the first 5,000 doses of the vaccine. The hospital blamed the error on a complex algorithm [Eileen Guo and Karen Hao, MIT Technology Review].
“Our algorithm, that the ethicists, infectious disease experts worked on for weeks … clearly didn’t work right,” Tim Morrison, the director of the ambulatory care team, told residents at the event in a video posted online.
Many saw that as an excuse, especially since hospital leadership had been made aware of the problem on Tuesday—when only five residents made the list—and responded not by fixing the algorithm, but by adding two more residents for a total of seven.
The authors of the article, Eileen Guo and Karen Hao, analysed the algorithm and found several discrepancies that led to errors.
The employee variables increase a person’s score linearly with age, and extra points are added to those over 65 or under 25. This gives priority to the oldest and youngest staff, which disadvantages residents and other frontline workers who are typically in the middle of the age range.
Job variables contribute the most to the overall score. The algorithm counts the prevalence of covid-19 among employees’ job roles and departments in two different ways, but the difference between them is not entirely clear. Neither the residents nor two unaffiliated experts the reporters asked to review the algorithm understood what these criteria meant, and Stanford Medical Center did not respond to a request for comment. The job variables also include the proportion of tests taken by each job role as a percentage of the medical center’s total number of tests collected.
What these factors do not take into account is exposure to patients with covid-19, say residents. That means the algorithm did not distinguish between those who had caught covid from patients and those who got it from community spread—including employees working remotely. And, as first reported by ProPublica, residents were told that because they rotate between departments rather than maintain a single assignment, they lost out on points associated with the departments where they worked.
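To make the mechanics concrete, here is a minimal, hypothetical sketch of a scoring rule with the properties described above. The weights, field names, and function are illustrative assumptions, not Stanford’s actual algorithm, which has not been published in full.

```python
# Hypothetical sketch of a priority-scoring rule with the properties described
# above. All weights and inputs are illustrative assumptions.

def vaccine_priority_score(age, role_covid_prevalence, dept_covid_prevalence,
                           role_test_share, has_fixed_department):
    score = 0.0

    # Employee variables: the score rises linearly with age, with extra points
    # for staff over 65 or under 25. Mid-career residents get neither bonus.
    score += 0.5 * age                       # assumed linear weight
    if age > 65 or age < 25:
        score += 10                          # assumed bonus

    # Job variables (the largest contributors): covid-19 prevalence by job
    # role and by department, plus the role's share of tests collected.
    score += 30 * role_covid_prevalence      # assumed weight
    if has_fixed_department:
        score += 30 * dept_covid_prevalence  # rotating residents get no department points
    score += 20 * role_test_share            # assumed weight

    # Notably absent: any measure of direct exposure to covid-19 patients.
    return score


# Under these illustrative weights, a 30-year-old rotating resident scores
# well below a 66-year-old employee working remotely in a fixed department.
print(vaccine_priority_score(30, 0.05, 0.0, 0.02, has_fixed_department=False))  # ~16.9
print(vaccine_priority_score(66, 0.01, 0.01, 0.0, has_fixed_department=True))   # ~43.6
```

Even in this simplified form, you can see how stacking many indirect proxies while omitting the most relevant factor (patient exposure) pushes frontline residents down the list.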
Jeffrey Kahn, director of the Johns Hopkins Berman Institute of Bioethics, said the approach was unnecessarily complicated. He explained that the more variables an algorithm has to weigh, the harder it becomes to understand.
After a protest by around 100 residents, Stanford issued a formal apology and promised to revise their distribution plan.
Algorithms behaving badly: 2020 edition
Whether we like it or not, algorithms and AI are an integral part of our lives now. Algorithms now help make, or outright make, decisions on who gets medical care, how much your insurance should cost, who gets admitted to college, what news you are served, which advertisements you see, and which children enter foster care, as well as powering various invasive and privacy-violating surveillance systems.
The Markup has compiled a list of algorithmic failures for 2020 [The Markup].
But it’s been proven again and again that formulas inherit the biases of their creators. An algorithm is only as good as the data and principles that train it, and a person or people are largely in charge of what it’s fed.
Every year there are myriad new examples of algorithms that were either created for a cynical purpose, functioned to reinforce racism, or spectacularly failed to fix the problems they were built to solve. We know about most of them because whistleblowers, journalists, advocates, and academics took the time to dig into a black box of computational decision-making and found some dark materials.
I’ve covered many of these in previous issues as they were reported during the year. Reading about all of them in one article gives you a different perspective on how algorithms are slowly embedding themselves in our lives.
In the list are:
Medical algorithms that are racially biased.
Google searches that exhibit racial bias.
Algorithms that make renters’ and lower-income people’s lives more difficult.
Predictive crime algorithms that lead to misidentified people being monitored and harassed by the police.
Facial recognition algorithms that led to false arrests and the targeting of specific ethnic groups, such as the Uighurs in China.
Workplace surveillance algorithms that monitor and report on employee productivity.
Education algorithms that graded students incorrectly.
Google told its scientists to 'strike a positive tone' in AI research - documents
Following AI ethicist Timnit Gebru’s firing from Google earlier this month (mentioned in Issue #57), Reuters has uncovered that Google tightened control over its scientists’ papers by launching a “sensitive topics” review, and in at least three cases requested that authors refrain from casting its technology in a negative light [Paresh Dave, Jeffrey Dastin, Reuters].
Studying Google services for biases is among the “sensitive topics” under the company’s new policy, according to an internal webpage. Among dozens of other “sensitive topics” listed were the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms and systems that recommend or personalize web content.
The Google paper for which authors were told to strike a positive tone discusses recommendation AI, which services like YouTube employ to personalize users’ content feeds. A draft reviewed by Reuters included “concerns” that this technology can promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.”
The final publication instead says the systems can promote “accurate information, fairness, and diversity of content.” The published version, entitled “What are you optimizing for? Aligning Recommender Systems with Human Values,” omitted credit to Google researchers. Reuters could not determine why.
One of the biggest challenges in AI is explainability: understanding how a system arrived at its decisions. Given how AI systems are developed and trained, this is a formidable challenge on its own.
As algorithms are increasingly used to make decisions that impact our lives, the creators of these algorithms bear a greater responsibility to address their shortcomings and work to improve them.
When leading AI companies like Google actively suppress issues that their researchers discover, force them to remove references to Google products, and ask them to withdraw their names as authors from research papers, it creates a dangerous precedent that can significantly impact people’s lives.
This is highly unethical behaviour. How can anyone trust any Google-authored research anymore? And more importantly, how can we trust that Google is fixing the issues in its products that these researchers discover?
I am normally against regulation as I believe it stifles innovation, but non-regulation puts the responsibility of ethical use on those building these technologies. When they abuse their power and disregard what is in the public’s and society’s best interest, I see no way to rein them in other than regulation.
Quote of the week
“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”
—Margaret Mitchell, Senior Scientist at Google, from the article, “Google told its scientists to 'strike a positive tone' in AI research - documents” [Reuters]
I wish you a brilliant day ahead :)
Neeraj