CRISPR therapy for rare blood disease delivers “life-changing” results / Humans + Tech - #93
+ Deleting unethical data sets isn’t good enough + Is robot therapy the future? + Other interesting articles from around the web
Hi,
This week, there is a lot to digest, from successful CRISPR therapies for rare diseases to unethical use of research data sets and robot therapy for mental health. Let’s get into it.
CRISPR therapy for rare blood disease delivers “life-changing” results
In 2019, researchers started the first CRISPR human clinical trials in the USA. CRISPR allows researchers to alter DNA sequences and modify gene function, which makes it possible to fix genetic defects and treat various diseases. The trials were aimed at treating two rare blood diseases—beta-thalassemia and sickle cell disease. After nearly three years, the results have been overwhelmingly positive [Rich Haridy, New Atlas].
So far, data from 22 patients has been announced, and the results are about as good as one could hope. All 15 beta-thalassemia patients displayed clinically meaningful improvements to fetal hemoglobin levels. Before the CRISPR therapy each of the patients required ongoing blood transfusions, sometimes monthly, but since the infusion every one of them has been transfusion-free.
All seven sickle cell patients also showed significant improvement, free of any vaso-occlusive crises, the primary clinical indication of disease.
"What we're seeing in these early days is how transformational this is for the sickle cell patients we've seen," says Stephen Grupp, a researcher working on the trials. "We are hearing that it is life-changing."
Researchers still need to monitor the patients to understand the therapy's long-term efficacy, but they are optimistic based on the results of the last three years.
Deleting unethical data sets isn’t good enough
In recent years, researchers have gathered huge data sets for AI research. One example is MS-Celeb-1M, a data set of 10 million images of 100,000 celebrities’ faces. Microsoft compiled the data set and released it to promote advancement in facial recognition.
However, the data set was found to contain not just celebrities but also pictures of journalists, artists, activists, and academics. Their photographs were added to the data set without their consent. Microsoft eventually removed the data set, but the internet never forgets, and it is still available from other sources [Karen Hao, MIT Technology Review].
But a new study shows that this has done little to keep the problematic data from proliferating and being used. The authors selected three of the most commonly cited data sets containing faces or people, two of which had been retracted; they traced the ways each had been copied, used, and repurposed in close to 1,000 papers.
[…]
Part of the problem, according to the Princeton paper, is that those who put together data sets quickly lose control of their creations.
Data sets released for one purpose can quickly be co-opted for others that were never intended or imagined by the original creators. MS-Celeb-1M, for example, was meant to improve facial recognition of celebrities but has since been used for more general facial recognition and facial feature analysis, the authors found. It has also been relabeled or reprocessed in derivative data sets like Racial Faces in the Wild, which groups its images by race, opening the door to controversial applications.
The study recommends certain practices that the AI community should adopt, such as adequately documenting and licensing data sets, placing harder limits on access to the data, and establishing norms on how data should be collected, labelled, and used.
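To make the first of those recommendations concrete, here is a minimal sketch of what machine-readable documentation attached to a data set could look like. The `DatasetDatasheet` class, its field names, and the example values are all hypothetical illustrations, not anything specified by the study or by Microsoft.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDatasheet:
    """Minimal, machine-readable documentation shipped alongside a data set."""
    name: str
    version: str
    license: str                      # e.g. a research-only licence
    collection_method: str            # how the images/records were gathered
    consent_obtained: bool            # whether subjects agreed to inclusion
    intended_uses: List[str] = field(default_factory=list)
    prohibited_uses: List[str] = field(default_factory=list)
    removal_contact: str = ""         # where subjects can request deletion

    def permits(self, proposed_use: str) -> bool:
        """Allow a use only if it is explicitly intended and not prohibited."""
        return (proposed_use in self.intended_uses
                and proposed_use not in self.prohibited_uses)

# Hypothetical example: a face data set documented for one narrow purpose.
sheet = DatasetDatasheet(
    name="example-faces",
    version="1.0",
    license="research-only, non-commercial",
    collection_method="scraped from the public web",
    consent_obtained=False,
    intended_uses=["celebrity face recognition benchmarking"],
    prohibited_uses=["surveillance", "demographic profiling"],
    removal_contact="privacy@example.org",
)

print(sheet.permits("celebrity face recognition benchmarking"))  # True
print(sheet.permits("demographic profiling"))                    # False
```

The point of the `permits` check is that downstream uses the creators never intended default to “no”—the opposite of how MS-Celeb-1M ended up being repurposed.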
Margaret Mitchell, an AI ethics researcher and a leader in responsible data practices, is exploring creating data set stewardship organisations—teams of people that not only handle the curation, maintenance, and use of the data but also work with lawyers, activists, and the general public to make sure it complies with legal standards, is collected only with consent, and can be removed if someone chooses to withdraw personal information.
Is robot therapy the future?
The use of telehealth and telemedicine has exploded during the pandemic. With mental health increasingly affected, several companies have jumped into providing therapy through apps that use various digital services such as texting, chatbots, and even AI therapists to help people [Eva Wiseman, The Guardian]. While some see it as a positive, others are not very enthusiastic about how these apps work, as the metrics and KPIs used to optimise them don’t necessarily align with good therapy.
Wiseman reached out to several users of these apps for the article.
I got in touch with a number of users of automated therapy apps and platforms, some of whom found them useful (they appreciated the ease and the affordability – one man told me he felt more comfortable talking online to a therapist and that it had helped him open up) but, inevitably, it was the people who’d had bad experiences who were most keen to talk to me.
Wiseman also spoke to Elizabeth Cotton, a former psychotherapist who worked in the NHS and is currently an academic at Cardiff Metropolitan University. Her recent research is focused on the “Uberisation of mental health”.
One concern she has about online therapy platforms is the introduction of what she calls “Therapeutic Tinder” and how that changes a person’s relationship with their therapist. “They’re sold on the basis that you can see a therapist at any point in time and a text conversation becomes an appropriate way of having therapy. And if you don’t like your therapist, you just change them next week. What does that do to the therapeutic alliance, if you’re constantly at risk of being swiped, does that affect your practice?” She has no doubt that it does. “Good therapy means you can absolutely loathe the sight of your therapist. Now the impulse is to be more attractive as a therapist. The pressure is always to go light, rather than go deep.” Whenever innovation promises to provide cheaper access to something millions of people want, big businesses enter and monopolise the market. Which is dangerous when the product is better mental health. If a therapeutic relationship is based on trust and communication, then putting that relationship in the hands of tech companies, an industry rarely applauded for its trustworthiness or safety, threatens its very foundations. The more we talk, the more terrifying her vision of the future of therapy seems.
Cotton brings up an excellent point. If the apps allow people to change therapists at will, based on whether they like them or not, it changes how therapists behave in order to retain their clients. They may not push their clients to face difficult emotions or answer hard questions that could lead to breakthroughs in their treatment. Instead, they may appease the client in an effort to keep them.
Other interesting articles from around the web
✍️ Analog pens, Apple’s pencil & talking machines: writing & its future [Om Malik, OM.co]
Another gem of an article from Om Malik about the difference in feeling between writing with a pen on paper and with a stylus on a device. He also discusses the pivot to voice as an input method that may eventually relegate keyboards to the background.
As you know, I am a big advocate of writing on paper with a pen. Many studies have shown that we learn and retain more information when we write with our hands. Sure typing can let us capture more information, but writing gives more cognitive context. Today, we mostly type on our keyboards. Some of us have started to use Apple iPad and the Apple Pencil. Jon Callaghan, my partner at True Ventures, is an unabashed fan of Remarkable.
The challenge with these digital writing devices is that they are a “one size fits all” solution. In the analog world, writing instruments are highly personal, and each one fits our unique writing styles, and where we fall in the demographic spectrum — age, gender, and geographic locations define what we use to write. Writing surfaces, aka the paper we like, too are highly personal.
Just as Om prefers pen and paper, I still prefer reading an actual book to reading on a device. Yes, devices have many advantages, but it’s the feeling of holding an actual book, flipping back quickly to refer to something, and the ease on the eyes that I don’t get from reading on a device.
👨⚕️ The pain was unbearable. So why did doctors turn her away? [Maia Szalavitz, WIRED]
NarxCare is a system from a company called Appriss that doctors, pharmacies, and hospitals in the US use to identify a patient’s risk of misusing opioids. It runs machine learning algorithms over state prescription databases, analysing signals such as the number of pharmacies a patient has visited, the distances travelled to receive health care, and the combinations of prescriptions received.
Kathryn, a patient with endometriosis, a condition that causes agonising pain, managed it with oral opioids. At one point, her pain became particularly severe and, with fears of a life-threatening growth in her ovary, she was admitted to the hospital for observation. On the fourth day, the hospital staff informed her that she would no longer receive opioids. Two weeks after she was discharged, her gynaecologist terminated their relationship based on a report from the NarxCare database. After spending days researching why the system flagged her, she discovered it was because of her sick pets.
At the time of her hospitalization, Kathryn owned two flat-coated retrievers, Bear and Moose. Both were the kind of dog she preferred to adopt: older rescues with significant medical problems that other prospective owners might avoid. Moose had epilepsy and had required surgery on both his hind legs. He had also been abused as a puppy and had severe anxiety. Bear, too, suffered from anxiety.
The two canines had been prescribed opioids, benzodiazepines, and even barbiturates by their veterinarians. Prescriptions for animals are put under their owner's name. So to NarxCare, it apparently looked like Kathryn was seeing many doctors for different drugs, some at extremely high dosages. (Dogs can require large amounts of benzodiazepines due to metabolic factors.) Appriss says that it is “very rare” for pets’ prescriptions to drive up a patient’s NarxCare scores.
Machine learning and AI have still not matured enough to be trusted implicitly. Denying patients treatment based on a flawed system, without manually verifying its decisions, is irresponsible.
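Appriss does not publish how NarxCare actually computes its scores, so the following is a deliberately naive, hypothetical sketch rather than the real algorithm: a toy score that simply counts distinct prescribers, pharmacies, and controlled drug classes recorded under one name. Even this crude version is enough to show how prescriptions written for Kathryn’s dogs but filed under her name could inflate her apparent risk.

```python
from collections import namedtuple

# Hypothetical, deliberately naive risk score. NarxCare's real model is
# proprietary; this toy only illustrates how counting prescribers, pharmacies,
# and drug classes under one name can misfire when pets' prescriptions are
# recorded under their owner.
Prescription = namedtuple("Prescription", "patient prescriber pharmacy drug_class")

def naive_risk_score(prescriptions):
    """Score rises with the number of distinct prescribers, pharmacies,
    and controlled drug classes seen under a single name."""
    prescribers = {p.prescriber for p in prescriptions}
    pharmacies = {p.pharmacy for p in prescriptions}
    drug_classes = {p.drug_class for p in prescriptions}
    return 10 * len(prescribers) + 10 * len(pharmacies) + 20 * len(drug_classes)

# Kathryn's own care: one gynaecologist, one pharmacy, one drug class.
own = [Prescription("Kathryn", "gynaecologist", "pharmacy A", "opioid")]

# Her dogs' prescriptions, filed under her name by the veterinarians.
pets = [
    Prescription("Kathryn", "vet clinic 1", "vet pharmacy", "opioid"),
    Prescription("Kathryn", "vet clinic 1", "vet pharmacy", "benzodiazepine"),
    Prescription("Kathryn", "vet clinic 2", "vet pharmacy", "barbiturate"),
]

print(naive_risk_score(own))         # 40
print(naive_risk_score(own + pets))  # 110 -- the same person now looks high-risk
```

The numbers are arbitrary; the point is that a score built from aggregate counts has no way of knowing which prescriptions belong to a pet, which is exactly why a human needs to check before anyone is denied care.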
Quote of the week
“They’re sold on the basis that you can see a therapist at any point in time and a text conversation becomes an appropriate way of having therapy. And if you don’t like your therapist, you just change them next week. What does that do to the therapeutic alliance, if you’re constantly at risk of being swiped, does that affect your practice? Good therapy means you can absolutely loathe the sight of your therapist. Now the impulse is to be more attractive as a therapist. The pressure is always to go light, rather than go deep.”
—Elizabeth Cotton, an academic based at Cardiff Metropolitan University whose recent research is focused on the “Uberisation of mental health”, talking about online therapy platforms in the article, “Is robot therapy the future?” [The Guardian]
I wish you a brilliant day ahead :)
Neeraj