He predicted the dark side of the Internet 30 years ago. Why did no one listen? / Humans + Tech - Issue #94

+ Deepfakes are now making business pitches + Using AI-based software to monitor online exams is a really bad idea, says uni panel + Other interesting articles from around the web


I was fascinated to learn the story of Philip Agre. Around 30 years ago, he accurately predicted many of the problems that the internet and AI would go on to cause for society and the people in it. I urge you to read the article as well - it’s an insightful read.

He predicted the dark side of the Internet 30 years ago. Why did no one listen?

Philip Agre foresaw with astonishing accuracy what the internet would become and how it would affect society. This was in the ’90s, before most people had an email address or even a personal computer. Unfortunately, no one heeded his advice then, and only now is his work being restudied [Reed Albergotti, The Washington Post].

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Agre went underground in 2009. He resurfaces from time to time, but only when he chooses. He is otherwise unreachable, and no one knows his whereabouts. Twenty-five years ago, he was deeply frustrated that no one understood what he was saying. Presciently, he warned that self-criticism should be part of the process of building AI. Today, we are already suffering the consequences of that missing self-criticism, and of allowing AI to proliferate without any moral, ethical, or legal standards in place.

Today’s AI, which has largely abandoned the type of work Agre and others were doing in the ’80s and ’90s, is focused on ingesting massive amounts of data and analyzing it with the world’s most powerful computers. But as the new form of AI has progressed, it has created problems — ranging from discrimination to filter bubbles to the spread of disinformation — and some academics say that is in part because it suffers from the same lack of self-criticism that Agre identified 30 years ago.

Hopefully, now that Agre’s work is being studied in more detail, the technology industry will finally pay heed and chart a better path for AI and for other technologies such as facial recognition.

Deepfakes are now making business pitches

EY (formerly Ernst & Young), the accounting giant, is now using deepfakes to deliver client presentations [Tom Simonite, WIRED].

Some partners at EY, the accounting giant formerly known as Ernst & Young, are now testing a new workplace gimmick for the era of artificial intelligence. They spice up client presentations or routine emails with synthetic talking-head-style video clips starring virtual body doubles of themselves made with AI software—a corporate spin on a technology commonly known as deepfakes.

The firm’s exploration of the technology, provided by UK startup Synthesia, comes as the pandemic has quashed more traditional ways to cement business relationships. Golf and long lunches are tricky or impossible, Zoom calls and PDFs all too routine.

EY partners have used their doubles in emails, and to enhance presentations. One partner who does not speak Japanese used the translation function built into Synthesia’s technology to display his AI avatar speaking the native language of a client in Japan, to apparently good effect.

The tech is not perfect, but it’s only going to get better. EY says it carefully controls access to the technology so that it’s not misused. A time will come when businesspeople can use this technology to create presentations for multiple clients with little work beyond writing the scripts.

Using AI-based software like Proctorio and ProctorU to monitor online exams is a really bad idea, says uni panel

A committee at the University of Texas at Austin advises against using AI software to proctor students’ online tests [Thomas Claburn, The Register]. The psychological impact on students is significant, owing to the invasive nature of the tools and the anxiety caused by the warnings sent during exams.

But the software that’s been deployed has been widely criticized by students and privacy advocates. The concern centers on the inability to audit the software’s source code and the possibility that these systems rely on flawed algorithms and biased or arbitrary signals to label students as cheaters.

Critics also worry that the software can't account for varied student living conditions and is vulnerable to racial bias – eg, motion tracking that produces different results with different skin tones – and cognitive bias such as gaze tracking that flags ADHD behaviors as suspicious.

Such criticism last year led UC Berkeley [PDF] and Baruch College in New York to stop using remote proctoring products. In February, the University of Illinois at Urbana-Champaign said it will drop Proctorio after this summer due to "significant accessibility concerns."

I related my personal experience of taking an online proctored exam in Humans + Tech - Issue #64. I can confirm that the experience was horrible, although my exam was proctored by a human rather than AI.

Other interesting articles from around the web

👨‍💼 A CGI replica of Nvidia’s CEO delivered his keynote and no one knew [Tom Maxwell, Input]

It took a lot of hardware and technology to achieve, but the results were very realistic.

Most people hate giving presentations, but thankfully someday you might be able to give one without actually delivering it yourself at all. Nvidia, the maker of popular graphics cards, revealed yesterday that parts of a keynote speech made by its CEO were actually computer-generated animation — an entire virtual replica of Jensen Huang and his kitchen in the background.

SLEIGHT OF HAND — The speech happened in April, and only about 14 seconds of the nearly two-hour presentation were animated. But part of the presentation showed Huang magically disappear and his kitchen explode, which made viewers wonder what exactly was real or rendered. It’s hard to actually identify the fake portion, however, which is the most impressive part.

🩸 AI blood test can spot lung cancers with 90 percent accuracy [Victor Tangermann, Futurism]

Delfi Diagnostics in Baltimore has developed a machine learning-based blood test that detects early-stage lung cancer with remarkable accuracy.

In their study, published in the journal Nature Communications, the team outlines how the diagnostic tool can analyze genome-wide fragmentation profiles of cell-free DNA (cfDNA), nucleic acid fragments present in the bloodstream that can indicate the presence of tumor cells, with astonishing accuracy.

The tool detected roughly 90 percent of cancer cases among the 800 individuals who were screened for lung cancer.
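To make that 90 percent figure concrete: it is a sensitivity, the share of actual cancers the test catches. Here is a minimal sketch of how sensitivity (and its companion metric, specificity) fall out of a screening confusion matrix. The counts below are hypothetical round numbers chosen for illustration, not figures from the Delfi study.

```python
# Hypothetical screening counts -- illustrative only, not data from the study.
true_positives = 81   # cancers the test correctly flagged
false_negatives = 9   # cancers the test missed
false_positives = 35  # healthy individuals incorrectly flagged
true_negatives = 675  # healthy individuals correctly cleared

# Sensitivity: of the people who actually have cancer, what share does the test detect?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of the healthy people, what share does the test correctly clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # 90%
print(f"specificity = {specificity:.0%}")  # 95%
```

A screening test needs both numbers to be high: sensitivity determines how many cancers slip through, while specificity determines how many healthy people are sent for unnecessary follow-up.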

The researchers are hoping that improved screening and detection technologies could allow more cases of lung cancer to be spotted earlier, which could greatly improve outcomes.

Early detection could save thousands of lives.

😇 Can tech ethics be learned—or is society doomed? [Lauren Goode, WIRED]

Rob Reich, a philosopher and philanthropy scholar; Mehran Sahami, a computer scientist; and Jeremy M. Weinstein, a social scientist, co-authored a book called System Error with the subtitle, “Where Big Tech Went Wrong, and How We Can Reboot It.” They believe the tech industry needs a hard reset, not just a reboot. Lauren Goode interviewed them on some of the issues raised in the book. Here is a small excerpt from the interview.

Lauren Goode: I guess I’ll start by asking, why did you decide to write this book? And why now, why at this point in time?

Weinstein: I think part of what's exciting about the three of us coming together to write this book is that we each bring some unique and distinct motivations that helped to answer that question. As the social scientist and the policymaker of the triad, my motivation is really twofold: Number one, that the social and societal consequences of Big Tech are unbelievably salient, in a way that they haven't been for people in the past. And I'm constantly struck by the degree to which industry is not attempting to anticipate those consequences and mitigate the harms, and the extent to which the government has fallen short of recognizing those harms and addressing them.

Quote of the week

“I also think one of the transformations we’ve seen in the last few decades is that the social impact of technology is so much larger now. It’s so much greater than: Someone buys an application, installs it on their computer, and what does it mean if their spreadsheet crashes? Now it’s: What is happening to our democracy, what is happening to jobs because of AI, and what is happening in automated decision making?”

—Mehran Sahami, Professor at Stanford University, from the article, “Can tech ethics be learned—or is society doomed?” [WIRED]

I wish you a brilliant day ahead :)