Humans + Tech

Stop spying on kids / Humans + Tech - #79

+ A simple address unlocks new life for Indian slum dwellers
+ Apple’s AirTag trackers made it frighteningly easy to ‘stalk’ me in a test
+ Other interesting articles from around the web

Neeraj Kamdar
May 9, 2021

Hi,

I hope you had a great week.

If you have kids, you should read the first article below.

Humans + Tech - Issue #79

Stop spying on kids

Big Tech companies are not sparing kids when it comes to harvesting data and profiting from it. Even more dangerous than the data harvesting, though, are the manipulative practices designed to get children addicted. There are serious ethical questions about these techniques and their impact on children's mental health, safety, and privacy [Stop Spying On Kids].

Similar to candy cigarettes, the rollout of YouTube Kids, Messenger Kids, and Instagram for Kids is based on the early recruitment of users into “wastelands of vapid” content, compounding harms by addicting kids early. Products like these are deeply inadequate to the task of protecting children, given that they are still foundationally based on a business model that demands spying on, manipulating, and profiting from young users.

The ad market for children is worth around $1.7 billion. Companies have no incentive to stop these practices since they answer to their shareholders. It’s high time governments stepped in and regulated the tech industry, especially when it comes to profiting off children.

Human rights, tech, and parent groups are calling for four major changes to protect kids online:

  1. Ban addictive and manipulative app features.

  2. Ban micro-targeted advertising.

  3. Limit data collection itself.

  4. Ban biometric stalkerware and data collection.

You can sign up your organisation to endorse their cause at the link. I signed up Humans + Tech.

Related Articles

+ TikTok paid $92 million to settle dozens of lawsuits in the US [Bobby Allyn, NPR]

The cases were primarily filed on behalf of minors, alleging that TikTok harvested personal data from users without consent and shared the data with third parties.

+ TikTok faces privacy lawsuit on behalf of millions of children in the UK and the EU [Ellen Milligan, Bloomberg]

The suit seeks to stop TikTok from “illegally processing millions of children’s information” and demands that TikTok delete all personal information.

+ Two children sue Google for allegedly collecting students' biometric data [Richard Nieva, CNET]

Two children from Illinois are suing Google for allegedly collecting biometric data, including face scans, of millions of students through the search giant's software tools for classrooms.


A simple address unlocks new life for Indian slum dwellers

In India, 37 million households live in informal housing, commonly known as slums. The lack of an address hampers their lives: without one, they cannot open bank accounts, receive mail, or access social benefits [Rina Chandran, Reuters].

Addressing the Unaddressed, a Dublin-based non-profit organisation, has developed technology to issue geo postal codes to each house in the slums. Known as the GO Code, it is a nine-digit unique ID printed on blue laminated strips outside each house.

Alex Pigot, the organisation's founder, traveled to India in 2012 with Tina Roche, chief of the philanthropic Community Foundation of Ireland, to assess the need for addresses. With the Hope Foundation, which works to protect children, they then created a system for Kolkata’s Chetla slum, giving each home a nine-digit unique ID.

They convinced bank officials to recognize the codes as a legitimate postal address, trained postmen to deliver mail to a code, and won the right to use it to access benefits.

About 250 homes can be coded every week, at a cost of 150 rupees ($2.25) each, said Pigot, who is working with Google on adding the slum lanes to its maps.

“At first, many residents were concerned that getting a code would make them more vulnerable to eviction since authorities could now find them easily,” said Geeta Venkadakrishnan, director of the Hope Foundation in Kolkata.

“But once they saw how it would help them get identification documents and open bank accounts, they were convinced of its benefits,” she told the Thomson Reuters Foundation.

The foundations are also working with authorities to get slums officially registered. Dwellers can then use their addresses as proof of ownership or legal residency to help prevent evictions.

I love stories like this that show how technology is being developed to benefit humans.


Apple’s AirTag trackers made it frighteningly easy to ‘stalk’ me in a test

Apple released their AirTags earlier this month. They are primarily designed to track and find lost items such as your car keys, but people are concerned about how they can be misused to track others, especially in domestic abuse cases. Apple has anticipated some of these possibilities and designed safeguards into the technology. Still, the safeguards don’t go far enough, are easily bypassed in some cases, and leave AirTags an inexpensive and effective means of stalking [Geoffrey A. Fowler, The Washington Post].

To put Apple’s personal security protections to the test, my colleague Jonathan Baran paired an AirTag with his iPhone, slipped his tag in my backpack (with my permission), and then tracked me for a week from across San Francisco Bay.

I got multiple alerts: from the hidden AirTag and on my iPhone. But it wasn’t hard to find ways an abusive partner could circumvent Apple’s systems. To name one: The audible alarm only rang after three days — and then it turned out to be just 15 seconds of light chirping. And another: While an iPhone alerted me that an unknown AirTag was moving with me, similar warnings aren’t available for the roughly half of Americans who use Android phones.

The concerns raised in the article are particularly scary when it comes to domestic abuse. The audible alarm that gives away a hidden AirTag only sounds after the tag has been away from its owner’s device for three days, and that three-day counter resets every time the tag comes back within range of the owner’s device. So if an abusive partner decided to stalk their spouse, they would only need to be near them once every three days, and the stalked spouse would likely never know.
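
To make the loophole concrete, here’s a toy simulation of the alarm logic as the article describes it (a hypothetical Python sketch, not Apple’s actual implementation): as long as the tag comes near its owner at least once every three days, the alarm never fires.

```python
# Toy model of the AirTag separation alarm as described in the article.
# This is a hypothetical simplification, not Apple's actual implementation.

ALARM_DELAY_DAYS = 3  # the alarm only sounds after 3 days away from the owner

def alarm_day(owner_contact_days, horizon):
    """Return the first day the alarm would sound within `horizon` days,
    or None if it never does. `owner_contact_days` are the days on which
    the tag comes within range of its owner's device."""
    last_contact = 0
    for day in range(1, horizon + 1):
        if day in owner_contact_days:
            last_contact = day  # proximity to the owner resets the timer
        elif day - last_contact >= ALARM_DELAY_DAYS:
            return day  # three days without the owner: the alarm chirps
    return None

# A stalker who passes near their victim every two days never trips the alarm:
print(alarm_day({2, 4, 6, 8, 10, 12, 14}, horizon=14))  # -> None
# Stop visiting after day 4, and it finally sounds on day 7:
print(alarm_day({2, 4}, horizon=14))  # -> 7
```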

There are various other concerns raised in the article. To Apple’s credit, they have thought a lot more about privacy than some of their competitors, like Tile, who have no safeguards in place, and Apple says it can tweak many of these behaviours with a software update. But some issues lie in the hardware and can’t be fixed easily; for example, the speaker can be muffled so that a hidden tag stays undetected.


Other interesting stories from around the web

+ How to stop AI from recognising your face in selfies [Will Douglas Heaven, MIT Technology Review]

Emily Wenger at the University of Chicago and colleagues have developed a tool called Fawkes that deliberately alters photos to fool AI facial recognition systems.

Give Fawkes a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos. Unlike previous ways of doing this, such as wearing AI-spoofing face paint, it leaves the images apparently unchanged to humans.
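
If you’re curious how cloaking works in principle, here’s a toy Python sketch. To be clear, this is not the Fawkes algorithm: Fawkes optimises its perturbations against real face recognition feature extractors, whereas this sketch uses a made-up linear “extractor” purely to show the core idea of moving an image’s features toward a different identity while keeping each pixel change tiny.

```python
import numpy as np

# Toy illustration of image "cloaking" (NOT the actual Fawkes algorithm).
# We nudge the input, within a small per-pixel budget, so that a stand-in
# feature extractor maps it closer to a different identity's features.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # made-up linear "feature extractor"

def cloak(x, target_features, steps=200, lr=0.05, eps=0.05):
    """Perturb x within an L-infinity budget `eps` so that W @ x moves
    toward `target_features` (the features of some other identity)."""
    x0, xp = x.copy(), x.copy()
    for _ in range(steps):
        grad = W.T @ (W @ xp - target_features)  # grad of 0.5*||Wx - t||^2
        xp = np.clip(xp - lr * grad, x0 - eps, x0 + eps)  # stay imperceptible
    return xp

x = rng.normal(size=8)       # stand-in for a selfie's pixel values
target = rng.normal(size=4)  # another identity's feature vector
xp = cloak(x, target)
print(np.abs(xp - x).max())            # <= 0.05: a tiny per-pixel change...
print(np.linalg.norm(W @ x - target))  # ...yet the features have moved
print(np.linalg.norm(W @ xp - target)) # noticeably closer to the target
```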

+ An A.I. called Dr. Fill won an elite crossword tournament [Oliver Roeder, Slate]

A computer program topped the leaderboard for the first time at the American Crossword Puzzle Tournament.

Dr. Fill is the algorithmic creation of Matt Ginsberg, an Oxford-trained astrophysicist—and computer scientist, stunt pilot, bridge player, novelist, and magician—who lives in Oregon. When he began the project a decade ago, his motivation was simple: “I sucked at crosswords, and it just pissed me off.” Ginsberg hoped one day to walk into the tournament hall, wave his laptop above his head, and show the humans who’s boss. Now, if only virtually, he has.

+ Microsoft scientist: Emotion-reading AI is doomed to fail [Dan Robitzski, Futurism]

Scientists have long tried to codify facial expressions into emotions, but they have failed because the expressions that correspond to a given emotion vary across cultures.

Take, for instance, the Transportation Security Administration’s facial expression screening algorithm, SPOT, which Crawford wrote was meant to automatically spot terrorists after 9/11 by pinpointing travelers expressing stress, fear, or deception. Despite spending $900 million on the algorithm, there’s no evidence suggesting it ever worked.


Quote of the week

“This technology can be used as a key by an individual to lock their data. It’s a new frontline defense for protecting people’s digital rights in the age of AI.”

—Daniel Ma, Deakin University, Australia, from the article “How to stop AI from recognising your face in selfies” [MIT Technology Review]

I wish you a brilliant day ahead :)

Neeraj
