Why we should end the data economy / Humans + Tech - #84

+ Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’ + McDonald's replaces drive-thru human workers with Siri-like AI + Other interesting articles from around the web

Hi,

I hope you enjoy the tweet posted at the end of the “Other interesting articles” section.

Why we should end the data economy

Any time we use the internet, we are under constant surveillance by corporations and governments alike. Our phones are the biggest snitches. They broadcast our location, measure how fast we are moving, reveal our activities, and track our habits. In most cases, the apps and services that collect this data know more about us than we may know ourselves. And they use this information against us [Carissa Véliz, The Reboot].

Privacy is important because it protects you from the influence of others. The more companies know about you, the more power they have over you. If they know you are desperate for money, they will take advantage of your situation and show you ads for abusive payday loans. If they know your race, they may not show you ads for certain exclusive places or services, and you would never know that you were discriminated against. If they know what tempts you, they will design products to keep you hooked, even if that can damage your health, hurt your work, or take time away from your family or from basic needs like sleep. If they know what your fears are, they will use them to lie to you about politics and manipulate you into voting for their preferred candidate. Foreign countries use data about our personalities to polarize us in an effort to undermine public trust and cooperation. The list goes on and on.

It’s a long road to ending the data economy. But you can do your part and protect yourself as much as possible by using privacy-respecting services. I mentioned DuckDuckGo and Startpage in Issue #82 as alternatives to Google Search. There are also privacy-respecting email services such as ProtonMail and Hey. And for messaging, you can use services like Signal or iMessage instead of WhatsApp.


Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’

Zoë Corbyn interviewed Kate Crawford, who studies the social and political implications of artificial intelligence. Her new book Atlas of AI looks at what it takes to make AI and what’s at stake as it reshapes our world. She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research.

Here is one question from the interview [Zoë Corbyn, The Guardian]:

What’s the aim of the book?
We are commonly presented with this vision of AI that is abstract and immaterial. I wanted to show how AI is made in a wider sense – its natural resource costs, its labour processes, and its classificatory logics. To observe that in action I went to locations including mines to see the extraction necessary from the Earth’s crust and an Amazon fulfilment centre to see the physical and psychological toll on workers of being under an algorithmic management system. My hope is that, by showing how AI systems work – by laying bare the structures of production and the material realities – we will have a more accurate account of the impacts, and it will invite more people into the conversation. These systems are being rolled out across a multitude of sectors without strong regulation, consent or democratic debate.

It’s a very informative interview, and I highly recommend clicking on the link and reading it in full. With all her experience, Crawford sees a need for strong regulation and more rigour and responsibility around AI systems. She is optimistic about the regulations and guidelines for AI proposed in both the EU and Australia.


McDonald's replaces drive-thru human workers with Siri-like AI

McDonald’s is testing a voice-recognition system to process customer orders at its drive-thru [Dan Robitzski, Futurism].

The fast food giant has been testing out a Siri-like voice-recognition system at ten drive-thru locations in Chicago, CEO Chris Kempczinski revealed during a Wednesday investor conference attended by Nation’s Restaurant News. The system can handle about 80 percent of the orders that come its way and fills them with about 85 percent accuracy — probably annoying for the customers who just want to drive off with their burger — but Kempczinski says a national rollout could happen in as soon as five years.

This part made me smile.

Part of the challenge in automating the drive-thru, Kempczinski said, is that human workers have been too eager to help out while supervising the technology that might one day replace them, preventing it from accruing the real-world data crucial for further improving the system.

It seems like McDonald’s employees are doing their best to preserve their job security 😄.


Other interesting articles from around the web


👞 "Seeing-eye shoes" for the blind to be enhanced with onboard cameras [Ben Coxworth, New Atlas]

Austrian startup Tec-Innovation, in collaboration with the Graz University of Technology, has created the InnoMake shoe which uses ultrasound sensors to warn blind users of obstacles in their path. A proximity sensing module on the toe of each shoe emits ultrasound pulses, then receives the echoes of those pulses off of objects lying ahead.

In this way, it can detect potential obstacles located up to 4 meters (13 ft) in front of the user. That person is warned via a haptic feedback system that causes the shoe to buzz their foot, along with an audible alert sounded on a Bluetooth-linked smartphone.
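The echo-ranging idea behind this is simple: measure how long a pulse takes to bounce back, convert that round-trip time into a distance using the speed of sound, and trigger the haptic alert when an obstacle falls within range. Here is a back-of-the-envelope sketch of that calculation — not Tec-Innovation's actual firmware; the constants and function names are illustrative assumptions:

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 °C
DETECTION_RANGE_M = 4.0     # InnoMake's stated detection range

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to an obstacle, given the ultrasound pulse's
    round-trip time in seconds (halved, since the pulse travels
    there and back)."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2

def should_buzz(round_trip_s: float) -> bool:
    """Trigger the haptic alert if an obstacle is within range."""
    return echo_distance_m(round_trip_s) <= DETECTION_RANGE_M
```

So an echo returning after 20 milliseconds would put an obstacle about 3.4 metres ahead — inside the shoe's 4-metre range, and worth a buzz.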

They are now working on a shoe outfitted with a camera-based AI image recognition system that can learn constantly and provide more specific information.


📱 TikTok just gave itself permission to collect biometric data on US users, including ‘faceprints and voiceprints’ [Sarah Perez, TechCrunch]

A change to TikTok’s U.S. privacy policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained.

TikTok says it will ask for consent where required by law, but only Illinois, Washington, California, Texas and New York have biometric privacy laws. The wording of its privacy policy means that, technically, it may not ask for permission in other states.

The Hacker News reports that these changes may be the result of a lawsuit settlement. In February 2021, TikTok paid $92 million to settle a class-action lawsuit alleging the app violated Illinois’ Biometric Information Privacy Act (BIPA) by capturing biometric data of users without meeting the consent requirements of the state law.


👨🏽‍💼 Artificial Intelligence is taking over job hiring, but can it be racist? [Thomson Reuters Foundation, Deccan Herald]

Many companies are using AI to review job applicants. The AI filter is often the first step, and a machine rejects most applicants before a human ever reviews them.

"It feels like shooting in the dark while being blindfolded - there's just no way for me to tell my full story when a machine is assessing me," Carballo, who hoped to get work experience at a law firm before applying to law school, told the Thomson Reuters Foundation by phone.

[…]

"I worry these algorithms aren't designed by people like me, and they aren't designed to pick people like me," he said, adding that he has undergone a plethora of different AI assessments - from video analytics to custom logic games.

AI hiring systems are notorious for bias, with some associating white-sounding names with being more qualified and others disqualifying applicants outright if they attended a women’s college.


🔎 Do you ever feel sorry for machine learning algorithms? [Jordan Hall, @DivineOmega]

Since most of the stories in this issue were not very positive, here is a tweet that made me laugh. I hope it lightens the mood, and you enjoy it too :)


Quote of the week

Companies that accumulate data about you can also end up determining what counts as knowledge about you. They get to categorize and define you, and then treat you accordingly.

—Carissa Véliz, associate professor at the Institute for Ethics in AI at the University of Oxford, from the article, “Why we should end the data economy” [The Reboot]

I wish you a brilliant day ahead :)

Neeraj