The smart toilet era is here! / Humans + Tech - Issue #99
+ China imposes a 40-minute limit for children on Douyin - its version of TikTok + UN human rights commission calls for moratorium on sale of AI tech + Other interesting articles from around the web
Hi,
The detailed article in The Guardian on the smart toilet era made me realise how intertwined tech and data have become. Great technology, implemented well, can lead to a much better quality of life; but if privacy safeguards are not put in place, the harm it can cause is tremendous.
The smart toilet era is here! Are you ready to share your analprint with big tech?
Several companies and research teams worldwide are working on smart toilets that analyse our excretions, using sensors, AI, and machine learning to detect diseases and drug use and to monitor our health [Emine Saner, The Guardian].
Coprata, co-founded by Duke University researcher Sonia Grego, is working on using sensors and AI to analyse waste.
Smart toilet innovators believe the loo could become the ultimate health monitoring tool. Grego believes her product – which analyses and tracks stool samples and sends the data to an app – will provide “information related to cancer and many chronic diseases”. For general consumers, it will provide peace of mind, she says, by establishing “a healthy baseline”: “Having technology that tracks what is normal for an individual could provide an early warning that a checkup is needed.” For people with specific conditions, such as inflammatory bowel disease, the device could provide helpful monitoring for doctors. “It’s very difficult to know when to escalate or de-escalate treatment,” she says. “Stool-based biomarkers can provide that information.”
Joshua Coon’s lab at the University of Wisconsin-Madison focuses on analysing urine.
In a small study he conducted, two people – one of whom was Coon – saved every urine sample for 10 days. “It turns out that you can detect compounds that are diagnostic of exercise [show you have done some]; you can see when an over-the-counter medication comes into the system and clears out; you can see molecules that correlate with how well you slept, how much fat you had in your diet, what your calorie intake was.”
Japanese manufacturer Toto is working on a wellness toilet.
This year, at the influential annual Consumer Electronics Show, the Japanese manufacturer Toto announced its “wellness toilet” – a concept, but something it is working on (it previously developed a toilet that analyses urine flow). Its sensors – including one for scent – would aim to detect health problems and conditions such as stress, but also make lifestyle suggestions. In one image provided by the company, it envisioned the toilet sending you a recipe for salmon and avocado salad.
Researchers at Stanford are working on analysing our waste (also highlighted in Humans + Tech - Issue #26).
Researchers at the Stanford School of Medicine have been working on technology that can analyse faeces (including “stool dropping time”) and track the velocity and colour of urine, as well as test it. An article this month in the Wall Street Journal reported that the researchers have partnered with Izen, a Korean toilet manufacturer, and hope to have prototypes by the end of the year. In order to differentiate between users, Izen developed a scanner that can recognise the physical characteristics of whoever is sitting on the toilet – or, in the words of the researchers, “the distinctive features of their anoderm” (the skin of the anal canal). Apparently, your “analprint”, like your fingerprints, is unique.
Toi Labs, led by Vikram Kashyap, has developed TrueLoo, a toilet seat with optical sensors that analyses its users and their waste, documenting abnormal patterns and providing reports to physicians.
Kashyap has developed a toilet seat, TrueLoo, which can be fixed to an existing toilet and recognises the user by their phone (one survey found that a majority of Britons take their phone to the loo) or a combination of physiological parameters: “What do they weigh? How are they sitting on the seat?” It then analyses excreta “using optical methods, looking at things like the volume, clarity, consistency, colour. It’s essentially understanding when someone has abnormal patterns and then it’s capable of documenting those patterns and providing reports that can be used by physicians to help in the treatment of a variety of conditions.”
And finally, with all these companies collecting such intimate data, there are significant privacy issues to address. The article also quotes privacy campaigner Phil Booth on the risks:
When it comes to information about your bodily waste, Booth says: “What data are companies linking together? What are they trying to analyse about you? Profile you for? You call them ‘smart homes’, but they’re surveillance homes.”
Information from stool and urine samples could provide all sorts of information – your risk of disease, your diet, your exercise level; how much alcohol you drink and whether you take drugs. Even tracking something as trivial as the time of day you use the loo – regularly in the night, for instance, indicating sleeplessness – could reveal conditions such as depression or anxiety.
[…]
It is not so wacky to imagine parents using the technology to check whether their teenage children are using drugs. “Once you start to measure something that is of the body, the privacy line is stepped over,” says Booth. “If you don’t measure what’s going on with someone’s bowel movements, the bowel movement is private.”
Like any technology, smart toilets can help us tremendously by monitoring our health and at the same time cause us significant harm via privacy breaches. The difference lies in how societies will implement the technology.
China imposes a 40-minute limit for children on Douyin - its version of TikTok
In 2018, citing rising levels of near-sightedness among children, China’s regulators sought to limit the amount of time minors spend online. Now Douyin, China’s version of TikTok, will restrict children under the age of 14 to 40 minutes a day on the platform [BBC News].
New educational content - including science experiments, museum exhibitions and historical explainers - has been launched by Douyin as part of Youth Mode.
“Yes, we are more strict with teenagers. We will work harder to provide quality content so that young people can learn and see the world,” the post said.
[…]
Last month, under-18s in China were banned from playing video games during the week, and their play was restricted to just one hour on Fridays, weekends and holidays.
And in February, Chinese children were banned from taking their mobile phones into school.
Although I’m happy that Douyin is imposing limitations on children, I’m uncomfortable that it’s state-imposed.
UN human rights commission calls for a moratorium on the sale of AI tech
The United Nations Human Rights Office has called for a moratorium on the sale and use of AI systems that pose human rights risks, such as facial recognition software, until adequate safeguards are in place [AI Trends].
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” stated Michelle Bachelet, the UN High Commissioner for Human Rights, in a press release.
Bachelet’s warnings accompany a report released by the UN Human Rights Office analyzing how AI systems affect people’s right to privacy, as well as their rights to health, education, freedom of movement and more. The full report is entitled “The right to privacy in the digital age.”
“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” Bachelet stated. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.”
Other interesting articles from around the web
☪️ AI’s Islamophobia problem [Sigal Samuel, Vox]
The prompt was simple: “Two Muslims walked into a …” It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. “Two Muslims walked into a synagogue with axes and a bomb,” it said. Or, on another try, “Two Muslims walked into a Texas cartoon contest and opened fire.”
For Abubakar Abid, one of the researchers, the AI’s output came as a rude awakening. “We were just trying to see if it could tell jokes,” he recounted to me. “I even tried numerous prompts to steer it away from violent completions, and it would find some way to make it violent.”
👁 Computer vision can help spot cyber threats with startling accuracy [Ben Dickson, The Next Web]
Cybersecurity researchers are using deep learning and binary visualisation to detect phishing websites and malware.
The researchers’ experiments showed that the technique could detect phishing websites with 94 percent accuracy. “Using visual representation techniques allows to obtain an insight into the structural differences between legitimate and phishing web pages. From our initial experimental results, the method seems promising and being able to fast detection of phishing attacker with high accuracy. Moreover, the method learns from the misclassifications and improves its efficiency,” the researchers wrote.
[…]
One of the researchers, Stavros Shiaeles of the University of Portsmouth, is also exploring the use of binary visualization and machine learning to detect malware traffic in IoT networks.
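The article doesn’t include code, but the core idea behind binary visualisation is straightforward to sketch: treat the raw bytes of a web page or binary as pixel values, arrange them into an image, and hand that image to an ordinary image classifier. The Python snippet below is only a minimal illustration of that idea, with a hypothetical file name; the researchers’ actual pipeline uses its own encoding scheme and a trained neural network.

```python
# A minimal sketch of "binary visualisation": map the raw bytes of a file or
# web page onto the pixels of a square greyscale image, which can then be fed
# to an ordinary image classifier (e.g. a small CNN). This illustrates the
# general idea only -- it is not the researchers' actual encoding or model.
import math
import numpy as np

def bytes_to_image(data: bytes) -> np.ndarray:
    """Map raw bytes to a square 8-bit greyscale image, one byte per pixel."""
    side = math.ceil(math.sqrt(len(data)))     # smallest square that fits the data
    padded = data.ljust(side * side, b"\x00")  # pad the tail with zero bytes
    return np.frombuffer(padded, dtype=np.uint8).reshape(side, side)

if __name__ == "__main__":
    # "suspect_page.html" is a hypothetical local copy of a page to inspect.
    with open("suspect_page.html", "rb") as f:
        img = bytes_to_image(f.read())
    print(img.shape)
    # In a full pipeline, `img` would be resized to a fixed shape and passed to
    # a classifier trained on images of known legitimate and phishing pages.
```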
🏥 A new formula may help Black patients’ access to kidney care [Tom Simonite, WIRED]
In Humans + Tech - Issue #52, I linked to an article that highlighted how algorithmic racism was blocking Black people from receiving kidney transplants. Now a new formula seeks to erase that disparity by disregarding race.
For decades, doctors and hospitals saw kidney patients differently based on their race. A standard equation for estimating kidney function applied a correction for Black patients that made their health appear rosier, inhibiting access to transplants and other treatments.
On Thursday, a task force assembled by two leading kidney care societies said the practice is unfair and should end.
The group, a collaboration between the National Kidney Foundation and the American Society of Nephrology, recommended use of a new formula that does not factor in a patient’s race. In a statement, Paul Palevsky, the foundation’s president, urged “all laboratories and health care systems nationwide to adopt this new approach as rapidly as possible.” That call is significant because recommendations and guidelines from professional medical societies play a powerful role in shaping how specialists care for patients.
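The WIRED article doesn’t reproduce the equation, but the mechanics of the disparity are simple to illustrate. The widely used 2009 CKD-EPI creatinine equation multiplied the estimated glomerular filtration rate (eGFR) by roughly 1.16 when a patient was recorded as Black, which can lift a borderline result above clinical cut-offs such as the eGFR of 20 commonly used for transplant referral. The sketch below shows that effect; the coefficients and threshold are quoted from memory as an illustration, not a clinical implementation, and the new race-free 2021 equation is not reproduced here.

```python
# Illustrative sketch of how a race "correction" in an eGFR-style formula can
# shift clinical decisions. The coefficients follow the 2009 CKD-EPI creatinine
# equation as commonly published, but are reproduced here from memory for
# illustration only -- this is NOT a clinical implementation.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine, age, sex and race flag."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race multiplier the task force recommends dropping
    return egfr

if __name__ == "__main__":
    # Same hypothetical patient and lab value; only the race flag differs.
    without_multiplier = egfr_ckd_epi_2009(scr_mg_dl=3.4, age=55, female=False, black=False)
    with_multiplier = egfr_ckd_epi_2009(scr_mg_dl=3.4, age=55, female=False, black=True)
    print(f"eGFR without race coefficient: {without_multiplier:.1f}")  # roughly 19
    print(f"eGFR with race coefficient:    {with_multiplier:.1f}")     # roughly 22
    # If transplant referral typically starts at an eGFR of 20 or below, the
    # multiplier keeps this patient above the line and can delay referral.
```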
Quote of the week
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”
—Michelle Bachelet, UN High Commissioner for Human Rights, from the article “UN human rights commission calls for moratorium on sale of AI tech” [AI Trends]
I hope you have a brilliant day ahead :)
Neeraj