Humans + Tech

The end of privacy? / Humans + Tech #13

+ The way we write history has changed; It's time for AI ethics to grow up; New nerve-growing method could help the injured; IBM’s debating AI

Neeraj Kamdar
Jan 26, 2020

Hi,

As far back as 2011, then Google chairman Eric Schmidt warned against facial recognition software, saying it was too creepy even for Google. A few days ago, Sundar Pichai, the current CEO of Google parent Alphabet, and the Financial Times both called for a moratorium on facial recognition while regulations are developed around its use.

Kate Crawford (@katecrawford), Jan 22, 2020:

“In the last two days, Google and @FinancialTimes editorial board have called for a temporary moratorium on facial recognition - following in the footsteps of many researchers and civil society orgs.” https://t.co/p6UHIh7XSe

Quoting the Financial Times (@FinancialTimes):

“The FT View: A new technology with a considerable risk of unintended harms should not be rushed out” https://t.co/CEdRP2fXxK

When a company like Google, whose business is built on knowing more about you than you know about yourself, calls for facial recognition tech to be regulated, and not by itself, we should pay attention.

To rub it in, this week also brought news that Clearview AI, a company that created a facial recognition app by scraping 3 billion photos off the internet, has sold its app to various law enforcement departments and private companies (article #3 below). The Metropolitan Police also announced on Jan 24th that they have turned on live facial recognition in London after trialing the technology for two years.

If there is a way to push your government or your leaders to create regulations around facial recognition and AI, do it immediately. Every country is currently far behind, while the technology advances at a breakneck pace.

There is hope, though. The EU is considering a 5-year ban on facial recognition in public spaces to give itself time to figure out how to prevent abuse of the technology. And, in his article for Fast Company, the ACLU’s Abdullah Hasan writes:

Last year, communities banded together to prove that they can—and will—defend their privacy rights. As part of ACLU-led campaigns, three California cities—San Francisco, Berkeley, and Oakland—as well as three Massachusetts municipalities—Somerville, Northampton, and Brookline—banned the government’s use of face recognition from their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego’s police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord’s efforts to install face surveillance.

Here are this week’s articles:

  1. ⏳The way we write history has changed [The Atlantic]

  2. 👉It's time for AI ethics to grow up [Wired]

  3. 🕵️‍♂️The secretive company that might end privacy as we know it [The New York Times]

  4. 🧪New nerve-growing method could help injured soldiers and others [Scientific American]

  5. 🤖IBM’s debating AI just got a lot closer to being a useful tool [MIT Technology Review]

… and links to 10 more incredible articles further below.


1. The way we write history has changed

Alexis C. Madrigal, writing for The Atlantic:

While libraries have become central actors in the digitization of knowledge, archives have generally resisted this trend. They are still almost overwhelmingly paper. Traditionally, you’d go to a place like this and sit there, day after day, “turning every page,” as the master biographer Robert Caro put it. You might spend weeks, months, or, like Caro, years working through all the boxes, taking extensive notes and making some (relatively expensive) photocopies. Fewer and fewer people have the time, money, or patience to do that. (If they ever did.)

Enter the smartphone, and cheap digital photography. Instead of reading papers during an archival visit, historians can snap pictures of the documents and then look at them later. Ian Milligan, a historian at the University of Waterloo, noticed the trend among his colleagues and surveyed 250 historians, about half of them tenured or tenure-track, and half in other positions, about their work in the archives. The results quantified the new normal. While a subset of researchers (about 23 percent) took few (fewer than 200) photos, the plurality (about 40 percent) took more than 2,000 photographs for their “last substantive project.”

The driving force here is simple enough. Digital photos drive down the cost of archival research, allowing an individual to capture far more documents per hour. So an archival visit becomes a process of standing over documents, snapping pictures as quickly as possible.

Alexis writes: “Different histories will be written because the tools of the discipline are changing.”

He argues that as historians depend more on digital records, they spend less time in the localities they write about, losing context and an appreciation of locally produced expertise. That loss of context will change the perspectives from which history is documented.

Go to article


2. It's time for AI ethics to grow up

Stephanie Hare, writing for Wired:

The ethical challenges of artificial intelligence are well known. In 2020 we will realise that AI ethics will need to be codified in a realistic and enforceable way. Not doing so will present an existential threat to individuals, companies and society.

Despite being a highly experimental and often flawed technology, AI is already in widespread use. It is often used when we apply for a loan or a job. It is used to police our neighbourhoods, to scan our faces to check us against watchlists when we shop and walk around in public, to sentence us when we are brought before a judge and to conduct aspects of warfare. All this is happening without a legal framework to ensure that AI use is transparent, accountable and responsible. In 2020 we will realise that this must change.

Concerns about AI are not confined to civil-liberties and human-rights activists. London’s Metropolitan Police commissioner Cressida Dick has warned that the UK risks becoming a “ghastly, Orwellian, omniscient police state” and has called for law-enforcement agencies to engage with ethical dilemmas posed by AI and other technologies. Companies that make and sell facial-recognition technology, such as Microsoft, Google and Amazon, have repeatedly asked governments to pass laws governing its use – so far to little avail.

We are already late here, considering how widespread the use of AI is. And it’s a difficult problem to solve: ethics always involve gray areas. But better late than never, and we need to start somewhere. It’s vital that governments and institutions around the world address this immediately.

Go to article


3. The secretive company that might end privacy as we know it

Kashmir Hill, writing for The New York Times:

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
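
For the technically curious, the core of a system like this is conceptually simple: each face image is reduced to a numeric “embedding” vector, and a query photo is matched by finding the most similar vectors in the database. Below is a minimal sketch of that lookup step; the URLs, vectors, and tiny dimensions are all made up for illustration, and nothing here reflects Clearview’s actual code:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical database mapping a source URL to a face embedding.
# Real systems use neural networks that output hundreds of dimensions.
database = {
    "https://example.com/photo1": [0.90, 0.10, 0.30, 0.40],
    "https://example.com/photo2": [0.20, 0.80, 0.50, 0.10],
    "https://example.com/photo3": [0.85, 0.15, 0.35, 0.45],
}

def search(query_embedding, top_k=2):
    """Return the top_k database entries most similar to the query face."""
    ranked = sorted(
        database.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return ranked[:top_k]

# Embedding of the uploaded photo (made up for this example).
query = [0.88, 0.12, 0.32, 0.42]
for url, _ in search(query):
    print(url)  # links to where the matching photos appeared
```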

Using this technology to find criminals is an obvious benefit, but it can also be abused in terrible ways. What happens when authoritarian governments use it to identify activists?

Clearview has also sold its services to private companies. What if employees of those companies use it to stalk their exes, or rogue governments use it to dig up secrets to blackmail people?

If this isn’t enough to concern you, one of the investors in Clearview AI is Peter Thiel, who is also behind Palantir, a data-mining company that built a Minority Report-style predictive policing system.

Clearview has mined over 3 billion photos from social media sites and various public sources to build its system. Mining photos from sites like Facebook and Twitter is generally against their terms of service, but those terms are rarely enforced. And with Peter Thiel serving on the board of Facebook, it’s unlikely any action will be taken.

This is why we need strong guidelines and laws for AI ethics and facial recognition in place, as soon as possible.

Go to article


4. New nerve-growing method could help injured soldiers and others

Karen Weintraub, writing for Scientific American:

A small injury to a nerve outside the brain and spinal cord is relatively easy to repair just by stretching it, but a major gap in such a peripheral nerve poses problems. Usually, another nerve is taken from elsewhere in the body, and it causes an extra injury and returns only limited movement.

Now researchers at the University of Pittsburgh have found an effective way to bridge such a gap—at least in mice and monkeys—by inserting a biodegradable tube that releases a protein called a growth factor for several months. In a study published Wednesday in Science Translational Medicine, the team showed that the tube works as a guide for the nerve to grow along the proper path, and the naturally occurring protein induces the nerve to grow faster.

This technique can potentially restore about 80% of nerve function. It still has to undergo clinical trials, but it looks promising. And since it doesn’t use stem cells, it should also be easier to get federal approval.

Go to article


5. IBM’s debating AI just got a lot closer to being a useful tool

Douglas Heaven, writing for MIT Technology Review:

We make decisions by weighing pros and cons. Artificial intelligence has the potential to help us with that by sifting through ever-increasing mounds of data. But to be truly useful, it needs to reason more like a human. “We make use of persuasive language and all sorts of background knowledge that is very difficult to model in AI,” says Jacky Visser of the Center for Argument Technology at the University of Dundee, UK. “This has been one of the holy grails since people started thinking about AI.”

A core technique used to help machines reason, known as argument mining, involves building software to analyze written documents and extract key sentences that provide evidence for or against a given claim. These can then be assembled into an argument. As well as helping us make better decisions, such tools could be used to catch fake news—undermining dodgy claims and backing up factual ones—or to filter online search results, returning relevant statements rather than whole documents.

IBM has just taken a big step in that direction. The company’s Project Debater team has spent several years developing an AI that can build arguments. Last year IBM demonstrated its work-in-progress technology in a live debate against a world-champion human debater, the equivalent of Watson’s Jeopardy! showdown. Such stunts are fun, and it provided a proof of concept. Now IBM is turning its toy into a genuinely useful tool.

IBM tested this in Lugano, Switzerland, collecting 3,500 opinions from citizens on whether the city should invest in autonomous vehicles. The AI then assessed the arguments for and against the proposal. The results could help local officials make an informed decision.
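
To make the idea of argument mining concrete, here is a minimal sketch: given a pool of opinion sentences, it sorts them into evidence for and against a claim. The cue-word lexicon and sentences are invented for illustration; IBM’s actual system uses far more sophisticated language models:

```python
# Toy argument miner: score each sentence's stance with a tiny,
# hypothetical cue-word lexicon, then bucket it as pro or con.
PRO_CUES = {"safer", "efficient", "benefit", "reduce congestion", "improve"}
CON_CUES = {"risk", "cost", "danger", "accident", "unreliable"}

def stance_score(sentence):
    """Crude stance score: +1 per pro cue, -1 per con cue found."""
    text = sentence.lower()
    return (sum(cue in text for cue in PRO_CUES)
            - sum(cue in text for cue in CON_CUES))

def mine_arguments(sentences):
    """Split sentences into pro and con evidence, strongest first."""
    scored = [(stance_score(s), s) for s in sentences]
    pros = [s for score, s in sorted(scored, reverse=True) if score > 0]
    cons = [s for score, s in sorted(scored) if score < 0]
    return pros, cons

opinions = [
    "Autonomous vehicles could make our streets safer and more efficient.",
    "The upfront cost poses a serious risk to the city budget.",
    "Shared autonomous fleets would reduce congestion and benefit commuters.",
]
pros, cons = mine_arguments(opinions)
print("For:", pros)
print("Against:", cons)
```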

Go to article


🗒 10 more incredible articles from around the web

  1. These scientists think we could all live in gigantic mushroom buildings [Science Alert]

  2. Jeff Bezos’s phone may have been hacked by the Saudi crown prince [The Guardian]

  3. Apple dropped a plan to encrypt backups after the FBI complained [Reuters]

  4. Report: India will force WhatsApp, Telegram to trace sensitive messages back to their originators [The Next Web]

  5. Deepfakes: A threat to democracy or just a bit of fun? [BBC]

  6. Apple and Google’s tough new location privacy controls are working [Fast Company]

  7. Internet use reduces study skills in university students [ScienceDaily]

  8. Apple’s anti-tracker feature in Safari did just the opposite, say Google researchers [Fast Company]

  9. I have nothing to hide, so why should I care about privacy? [IoT For All]

  10. Your fitness tracker may be cheating you out of credit you deserve [Fast Company]


💬 Quote of the week

In light of all this talk of facial recognition and ethics in AI, this quote by Larry Niven, an American science fiction author, seemed particularly apt:

Ethics change with technology.
—Larry Niven

I wish you a brilliant day as always :)

Neeraj
