An internet that never forgets / Humans + Tech - #14
+ The secret market for your web browsing data, Vaporfly shoes and technology doping, Content moderators and PTSD, and Brain organoids and ethics.
Hello,
Hope you enjoyed your week.
I spent a good amount of time thinking about Article #1 below (Why an internet that never forgets is especially bad for young people).
I am glad I was able to experience childhood without the internet, mobile phones, and cameras all around us. We were able to be kids, make mistakes, learn from them, and not worry that our actions as children would follow us for the rest of our lives.
That is not a luxury that kids have these days.
Their actions, antics, and unfiltered thoughts are on social media, embedded in the fabric of the internet almost permanently. They are forever judged on their past mistakes, as the internet never forgets.
Kate Eichhorn, the author, even suggests that in a world where your past defines your future, young people may never change their behavior or their minds, because the negative perceptions formed from their past will never change. This, she argues, will increase partisan politics and extremism, as young people become less likely to revise their identities and perspectives.
On to this week’s curated articles:
🌐Why an internet that never forgets is especially bad for young people [MIT Technology Review]
💻Leaked documents expose the secretive market for your web browsing data [Vice]
👟Vaporfly shoes will help me reach my marathon dream. Should I use them? [The Guardian]
📺The terror queue [The Verge]
🧠An ethical future for brain organoids takes shape [Quanta Magazine]
… and links to 10 more articles further below.
1. Why an internet that never forgets is especially bad for young people
Kate Eichhorn, writing for MIT Technology Review:
Until the end of the 20th century, most young people could take one thing for granted: their embarrassing behavior would eventually be forgotten. It might be a bad haircut, or it might be getting drunk and throwing up at a party, but in an analog era, even if the faux pas were documented in a photograph, the likelihood of its being reproduced and widely circulated for years was minimal. The same held true for stupid or offensive remarks. Once you went off to college, there was no reason to assume that embarrassing moments from your high school years would ever resurface.
Not anymore. Today, people enter adulthood with much of their childhood and adolescence still up for scrutiny. But as past identities and mistakes become stickier, it’s not just individuals who might suffer. Something much larger—the potential for social change and transformation—may also be at risk.
In this article, Kate discusses how schools constantly monitor students, how past social media posts affect children’s job and internship prospects, and the pressure on children to maintain a perfect public identity.
Youth is a time to make mistakes and learn from them, but the internet is changing that, since it’s almost impossible to delete anything once it has been posted online. Young people are judged by their past mistakes even after they’ve learned from them and changed.
Kate warns that in a world where your past defines your future, young people may stop changing their behavior and their minds altogether, since the negative perceptions based on that past will never fade. The result, she argues, would be more partisan politics and extremism, as people become less willing to revise their identities and perspectives.
2. Leaked documents expose the secretive market for your web browsing data
Joseph Cox, writing for Vice:
An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show the sale of this data is both highly sensitive and is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it.
The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.
After this investigation, Avast announced that it would stop the Jumpshot data collection and wind down Jumpshot with immediate effect.
But had it not been exposed, Avast would have continued harvesting and selling this user data. Most people are completely unaware that their browsing habits are being tracked and sold for millions of dollars by the very software that is meant to protect them from harm on the internet.
3. Vaporfly shoes will help me reach my marathon dream. Should I use them?
Jamie Doward, writing for The Guardian:
For once, the hype matched reality. Using data taken from users of the running app Strava, the New York Times reported that, yes, it was just as Nike claimed: the shoes could shave about 4% off a marathon time for certain runners. For someone who runs marathons in around three hours and eight minutes, this sort of saving was the equivalent of blood-doping.
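To put that 4% in perspective: a 3:08:00 marathon is 188 minutes, so a 4% saving works out to roughly seven and a half minutes, enough to bring that same runner home in just over three hours.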
The Vaporfly achieves this by combining a carbon plate with compressed foam, giving runners an exceptional return on the energy they expend.
The greatest marathon runner of all time, Eliud Kipchoge, used a variant of the Vaporfly when he ran his sub-two-hour marathon last year. The next day, Brigid Kosgei ran 2:14:04 at the Chicago Marathon in a pair of modified Vaporflys, taking 81 seconds off Paula Radcliffe’s 16-year-old world record.
World Athletics, the sport’s governing body, is looking into this and may pass tighter regulations governing shoe technology.
But better technology has been aiding athletes in every sport for many years. From lighter shoes and clothing to advances in sports medicine, physiological monitoring, nutrition, and equipment, technology has steadily helped humans become better and faster.
Where is that fine line between acceptable technological assistance and technology doping?
4. The terror queue
Casey Newton, writing for The Verge:
Google and YouTube approach content moderation the same way all of the other tech giants do: paying a handful of other companies to do most of the work. One of those companies, Accenture, operates Google’s largest content moderation site in the United States: an office in Austin, Texas, where content moderators work around the clock cleaning up YouTube.
Peter is one of hundreds of moderators at the Austin site. YouTube sorts the work for him and his colleagues into various queues, which the company says allows moderators to build expertise around its policies. There’s a copyright queue, a hate and harassment queue, and an “adult” queue for porn.
Peter works what is known internally as the “VE queue,” which stands for violent extremism. It is some of the grimmest work to be done at Alphabet. And like all content moderation jobs that involve daily exposure to violence and abuse, it has had serious and long-lasting consequences for the people doing the work.
In the past year, Peter has seen one of his co-workers collapse at work in distress, so burdened by the videos he had seen that he took two months of unpaid leave from work. Another co-worker, wracked with anxiety and depression caused by the job, neglected his diet so badly that he had to be hospitalized for an acute vitamin deficiency.
Casey’s investigation uncovers the trauma that content moderators endure, with little help from Google in managing the long-term health effects. Contractors are paid poorly, immigrants among them are exploited, and their medical leave policies are not as good as those offered to full-time employees. The hiring process gives little clarity about how much disturbing content they will have to view, experiments are conducted on them in an attempt to reduce the emotional harm, and even with the best medical care, they cannot avoid developing PTSD, anxiety, and other long-term mental health issues.
Content moderation is necessary work, and digital platforms are under enormous pressure to remove harmful content as quickly as possible. But it is Google’s responsibility to take care of these workers and to minimize the time they are exposed to extreme content, and the company is not doing enough to reduce the long-term health effects of the job.
Last year, Casey also reported on Facebook’s content moderators, hired via Cognizant, who face similar issues and a similar lack of support from their employer.
And it’s not that Google is unaware. Casey writes:
Ultimately, I can’t say it any more clearly than Google’s own researchers: “There is … an increasing awareness and recognition that beyond mere unpleasantness, long-term or extensive viewing of such disturbing content can incur significant health consequences for those engaged in such tasks.”
And yet, at Google, as at Facebook, workers are discouraged from even discussing those consequences. Managers who warn them that they can be easily replaced, coupled with the nondisclosure agreements that they are forced to sign upon taking the job, continue to obscure their work.
Content moderators are critical to keeping these gruesome videos from reaching the general public, and they are needed in large numbers because of the sheer volume of content generated. It’s clear that something needs to change to eliminate the long-term effects of the job. Whether that change will come from the platform owners themselves, from pressure by their employees, or through legislation remains to be seen.
5. An ethical future for brain organoids takes shape
Jordana Cepelewicz, writing for Quanta Magazine:
Part of the brain’s allure for scientists is that it is so deeply personal — arguably the core of who we are and what makes us human. But that fact also renders a large share of imaginable experiments on it monstrous, no matter how well intended. Neuroscientists have often had to swallow their frustration and settle for studying the brains of experimental animals or isolated human neurons kept alive in flat dishes — substitutes that come with their own ethical, practical and conceptual limitations.
A new world of possibilities opened in 2008, however, when researchers learned how to create cerebral organoids — tiny blobs grown from human stem cells that self-organize into brainlike structures with electrically active neurons. Though no bigger than a pea, organoids hold enormous promise for improving our understanding of the brain: They can replicate aspects of human development and disease once thought impossible to observe in the laboratory. Scientists have already used organoids to make discoveries about schizophrenia, autism spectrum disorders and the microcephaly caused by the Zika virus.
Yet the study of brain organoids can also be fraught with ethical dilemmas. “In order for it to be a good model, you want it to be as human as possible,” said Hank Greely, a law professor at Stanford University who specializes in ethical and legal issues in the biosciences. “But the more human it gets, the more you’re backing into the same sorts of ethics questions that are the reasons why you can’t just use living humans.”
I’m particularly interested in organoid research because one of the dilemmas researchers are trying to resolve is what signals the presence of consciousness.
10 more awesome articles
Moscow rolls out live facial recognition system with an app to alert police [The Verge]
Facebook will now show you exactly how it stalks you — even when you’re not using Facebook [The Washington Post]
Met Police could deploy facial recognition against protesters [Computer Weekly]
Amazon Engineer: ‘Ring should be shut down immediately and not brought back’ [The Next Web]
How phishing attacks trick our brains [MIT Technology Review]
You’ve got snail mail: Targeted online ads are now literally following you home [The Washington Post]
Fingerprints can now be dated to within a day of when they were made [The Economist]
AI-designed drug to enter human clinical trial for first time [Financial Times]
A squishy robot hand can sweat to cool itself down [MIT Technology Review]
Comedy written for the machines - even TikTok’s creators can’t explain how its algorithm works [The New York Times Magazine]
Quote of the week
This week’s quote is inspired by Article #1 and the recent rise in the deployment of facial recognition technology worldwide.
“The question of the right to privacy must be one of the defining issues of our time.”
― Ziad K. Abdelnour
Wishing you a brilliant day ahead :)
Neeraj