Using AI to help people make babies / Humans + Tech - #65

+ Your TV is watching what you watch + Apple vs. Facebook + New algorithms could reduce racial disparities in health care + Other interesting articles

Hi,

I hope you had a great week.

How an Israeli startup is using AI to help people make babies

Since IVF began in 1978, its success rate has remained at only 22-30%. The quality of the embryo is a big part of whether IVF succeeds. This determination is currently made by doctors who examine the embryos under a microscope and choose the one they feel is the healthiest and most viable. But even with the help of a microscope, the human eye is limited, and many doctors base their decisions on gut feelings. Embryonics, a female-founded startup from Israel, uses artificial intelligence to screen embryos instead [Vanessa Bates Ramirez, Singularity Hub].

This is where Embryonics’ technology comes in. They used 8,789 time-lapse videos of developing embryos to train an algorithm that predicts the likelihood of successful embryo implantation. A little less than half of the embryos from the dataset were graded by embryologists, and implantation data was integrated when it was available (as a binary “successful” or “failed” metric).

The algorithm uses geometric deep learning, a technique that takes a traditional convolutional neural network—which filters input data to create maps of its features, and is most commonly used for image recognition—and applies it to more complex data like 3D objects and graphs. Within days after fertilization, the embryo is still at the blastocyst stage, essentially a microscopic clump of just 200-300 cells; the algorithm uses this deep learning technique to spot and identify patterns in embryo development that human embryologists either wouldn’t see at all, or would require massive collation of data to validate.
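To make the filtering idea concrete, here is a minimal, self-contained sketch of the convolution step at the heart of any CNN. Nothing here is Embryonics' actual model; the toy image, the kernel, and all the sizes are invented purely for illustration:

```python
def conv2d(image, kernel):
    """Slide a kernel over an image to produce a feature map ('valid' mode)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1       # output height
    ow = len(image[0]) - kw + 1    # output width
    return [
        [
            sum(image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw))
            for j in range(ow)
        ]
        for i in range(oh)
    ]

# Toy 5x5 grayscale "image": a bright vertical band on a dark background
image = [[1.0 if col == 2 else 0.0 for col in range(5)] for _ in range(5)]

# A 3x3 vertical-edge filter: responds where intensity rises left to right
kernel = [[-1.0, 0.0, 1.0] for _ in range(3)]

feature_map = conv2d(image, kernel)
# Each row of the map reads [3.0, 0.0, -3.0]: strong responses flank the band
```

A real model stacks many such learned filters and, in geometric deep learning, generalizes this sliding-window idea to graphs and 3D shapes, then feeds the resulting feature maps into further layers that output a probability of successful implantation.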

Embryonics says that, compared with embryologists' predictions, its algorithm delivers a 12% increase in identifying healthy embryos and a 29% increase in identifying embryos that will not result in a successful pregnancy.

TechCrunch reported last week that in a pilot of 11 women who used Embryonics’ algorithm to select their embryos, 6 are enjoying successful pregnancies, while 5 are still awaiting results.

IVF is very costly and stressful for the parents, especially the mother, both physically and emotionally. If Embryonics’ technology can improve pregnancy success rates for hopeful parents, it will reduce their emotional, physical, and financial stress.


Your TV is watching what you watch

Making money from hardware is hard. Making electronics efficiently at scale, at competitive price points that people will still pay, is extremely challenging. So hardware makers resort to making up the profits in other ways, via data harvesting and selling personal information [Shira Ovide, OnTech, The New York Times].

And my bigger worry is that the difficulties of making it in hardware are nudging gadget sellers to do yucky things to us.

Popular brands of TV sets keep track of what we’re watching and report it to companies that want to sell us new cars or credit cards. (Yeah, it’s gross.) One reason they do it is that selling personal information is pure profit, whereas selling you a TV set is definitely not. Roku also makes its real money not from selling its gizmos that connect our TVs to streaming apps, but from its side gigs including its troves of information about what we watch that it uses to sell ads.

You can think of these consumer electronics companies as basically Facebook that happens to sell us the screens, too. I don’t know about you, but that makes me feel less affectionate about my marathon sessions of “Cobra Kai.”

The saying goes that if you don’t pay for it, then you’re the product. But it looks like even if we pay for it, we are the product.

I would happily pay a little extra for a product if the company agreed not to sell my data. I wish these companies were forthright in telling us how they collect and use our data, instead of hiding it in privacy statements written in tiny fonts and legalese. From their perspective, they probably make a lot more money through data harvesting over a TV’s lifetime than they would from charging a little extra upfront, so they have little incentive to change.

Interestingly, privacy is also the subject of the brewing feud between Apple and Facebook. Read further below.


Apple vs. Facebook

Apple has introduced many privacy features on its devices in the last few months, and in the coming months it plans to release a new feature that seriously threatens companies that make their money from advertising, like Facebook and Google.

In its last update, Apple released privacy nutrition labels that display all the data that apps link to you and the data used to track you. In its next update, it will begin enforcing App Tracking Transparency, which requires apps to get the user’s permission before tracking their data across apps and websites owned by other companies. The IDFA (Identifier for Advertisers) is what apps use to track users for targeted advertising. Users will be able to see which apps are tracking them and withdraw that permission.

Apple also released a white paper—“A Day in the Life of Your Data” [Apple]—that illustrates how companies track user data across websites and apps, using the example of a typical father and daughter doing everyday things in the current digital ecosystem.

Although Google has been publicly quiet, Facebook has lashed out at Apple [Samuel Axon, Ars Technica]. There are rumours that Facebook may even file an antitrust lawsuit against Apple, claiming that Apple’s control of the iOS App Store, and the rules it imposes there, is anti-competitive.

Apple will move ahead with the IDFA change—first in the next beta release of iOS, and then publicly in the spring. It will do so as it faces a potential lawsuit from Facebook and ongoing antitrust lawsuits, investigations, and scrutiny.

But amidst all that, the battle Apple and Cook are waging with Facebook and Zuckerberg is a battle for public opinion and sentiment.

It's unclear at this stage which side of this issue has an advantage on that front in the long run—Facebook is consistently named as one of the most hated companies in the United States, but investigations into actual user preference and behavior suggest that Apple may be overestimating users' sensitivities around privacy.

Studies and surveys have found that, while a significant majority of users say they care about privacy, most of those users say they would rather give it up than pay more upfront for software and services, for example. Users don't want to be the product being sold—but they don't want to buy a product either, and most seem to see becoming a product as the lesser evil compared to footing any part of the bill from their own pockets.

I’m definitely #TeamApple :)

🎩 Thanks to Beatrice for sending this in. Beatrice makes eco-friendly charcoal briquettes in Kenya.


New algorithms could reduce racial disparities in health care

In Issue #32 - Hue-mans: Differentiation vs Discrimination, I linked to a study from the journal Nature stating that millions of Black people had been affected by racial bias in health-care algorithms widely used in US hospitals.

Most health-care algorithms are trained on judgements from doctors and medical professionals. Researchers from Stanford, Harvard, UC Berkeley, and the University of Chicago have published a paper that tries a different approach: training the algorithms on patients’ own responses instead [Tom Simonite, WIRED]. They found that the algorithms caught problems that doctors miss—especially in Black people.

A study published this month took a different approach—training algorithms to read knee x-rays for arthritis by using patients as the AI arbiters of truth instead of doctors. The results revealed that radiologists may have literal blind spots when it comes to reading Black patients’ x-rays.

[…]

The study is notable not just for showing what happens when AI is trained by patient feedback instead of expert opinions, but because medical algorithms have more often been seen as a cause of bias, not a cure. In 2019, Obermeyer and collaborators showed that an algorithm guiding care for millions of US patients gave white people priority over Black people for assistance with complex conditions such as diabetes.

There’s a catch, though: algorithmic explainability. Neither the algorithms nor the researchers can explain what the algorithms notice in the x-rays that the doctors have been missing. Judy Gichoya, a radiologist and assistant professor at Emory University, is working on solving this.

I’ve highlighted AI’s explainability problem in previous newsletters. Although in this case the algorithms are covering a blind spot that the doctors have, the inability to reverse engineer the algorithms’ decisions means that we cannot learn what we are missing. This becomes far more dangerous when algorithms make decisions in areas such as law and insurance, handing out judgements without explanation.

We are training algorithms that can’t teach us back. This can’t be good in the long run.


Other interesting articles from around the web

😷 Signs of Unusual Symptoms Spread on Twitter Well Before Official COVID-19 Reports [Tessa Koumoundouros, Science Alert]

A group of researchers has published a study providing evidence that social media can be a useful epidemiological surveillance tool.

"They can help intercept the first signs of a new disease, before it proliferates undetected, and also track its spread."

🤳 The scary future of Instagram [growth.design]

A beautifully illustrated and animated case study by growth.design on the scary future of Instagram as it increasingly focusses on monetization.

☀️ 2020 Is The Year Europe Created More Energy From Renewables Than Fossil Fuels [Carly Casella, Science Alert]

A newly published report confirms that in 2020, wind, solar, hydropower and biomass shouldered 38 percent of the EU's electricity demands, while fossil fuels trailed at 37 percent.

A rare positive from 2020.

🤗 Technology promises hugs at a distance. Beware what you wish for [Andrew Wold and Rebecca Böhme, Psyche]

Missing hugs from your grandma? Well, you might both be the target market for a trademarked ‘Hug Shirt’ that vibrates in the areas where someone has ‘saved’ a hug: just ‘record’ a hug in your shirt and send it to your grandma’s receiving shirt.

Andrew and Rebecca warn that changing the norms of our social interactions via technology could have unintended consequences.

Don’t underestimate this problem. We’ll need to rediscover closeness and the idea that touching other people isn’t necessarily dangerous from a health perspective. We must be conscious of any associations we’ve made between touch and viruses, and actively overcome them. If the associations remain, we’ll probably continue to avoid touch, which won’t help us start to heal, but could in fact lead us to feel more isolated in the future. The time after the pandemic might be critical: are we going to get over our fears and remember how wonderful it is to hug our loved ones? Or are we going to stick with the new normal and let new haptic technologies creep into our lives to fulfil our touch desires?

Hugs are what I miss the most out of all human interactions.


Quote (Twitter thread) of the week

This week’s quote of the week is actually a Twitter thread from Karen Hao of MIT Technology Review. I came across this thread earlier this week and wholeheartedly agreed with Karen, so I thought I would share it with you.

With that, I wish you a brilliant day ahead :)

Neeraj