Humans + Tech - Issue #4
Direct brain-to-brain communication, Scientists place humans in suspended animation, Gender bias in AI, Apps that are helping drug users with recovery, and Teaching kids about online privacy.
Scientists demonstrate direct brain-to-brain communication in humans
Robert Martone, writing for Scientific American:
In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains. Electrical activity from the brains of a pair of human subjects was transmitted to the brain of a third individual in the form of magnetic signals, which conveyed an instruction to perform a task in a particular manner.
Overall, five groups of individuals were tested using this network, called the “BrainNet,” and, on average, they achieved greater than 80 percent accuracy in completing the task.
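As a back-of-the-envelope illustration (my own, not from the study), here's how two noisy one-bit brain-to-brain links can still produce accuracy in the range the researchers report. The link fidelity and the receiver's "follow the senders when they agree" strategy are assumptions for the sketch, not details from the paper:

```python
# Toy simulation of a BrainNet-style relay: two "senders" each transmit
# one binary instruction over a noisy link; a "receiver" acts on the
# combined signals. Hypothetical sketch, not the study's actual setup.

import random

def noisy_link(bit, fidelity=0.9):
    """Deliver a bit correctly with probability `fidelity` (assumed value)."""
    return bit if random.random() < fidelity else 1 - bit

def trial(fidelity=0.9):
    """One round: both senders know the correct action; the receiver
    follows them when they agree, and guesses when they conflict."""
    correct_action = random.randint(0, 1)
    s1 = noisy_link(correct_action, fidelity)
    s2 = noisy_link(correct_action, fidelity)
    decision = s1 if s1 == s2 else random.randint(0, 1)
    return decision == correct_action

runs = 10_000
accuracy = sum(trial() for _ in range(runs)) / runs
print(f"simulated task accuracy: {accuracy:.0%}")  # ~90% with 0.9-fidelity links
```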
Sounds intriguing and very futuristic so far. But there are other aspects to consider.
… the technology still raises ethical concerns, particularly because the associated technologies are advancing so rapidly. For example, could some future embodiment of a brain-to-brain network enable a sender to have a coercive effect on a receiver, altering the latter’s sense of agency? Could a brain recording from a sender contain information that might someday be extracted and infringe on that person’s privacy? Could these efforts, at some point, compromise an individual’s sense of personhood?
Being able to control the brains of others raises more serious privacy issues than perhaps any other technology.
Would you be comfortable giving access to your brain to someone else?
These experiments use EEG electrodes attached to each sender's head to read brain activity, and magnetic stimulation to deliver the signal to the receiver. Is it possible that this technology could advance enough to be done wirelessly in the future?
If so, how would we prevent others from accessing our brains?
Leave a comment and let me know your thoughts—I can’t read them until this technology has matured further 😉😳😱.
Scientists place humans in "suspended animation" for the first time
Victor Tangermann, writing for Futurism:
A team of doctors at the University of Maryland School of Medicine have placed humans in “suspended animation” for the first time as part of a trial that could enable health professionals to fix traumatic injuries such as a gunshot or stab wound that would otherwise end in death, according to a New Scientist exclusive.
Suspended animation — or “emergency preservation resuscitation,” in medical parlance — involves rapidly cooling a patient’s body down to ten to 15 degrees Celsius (50 to 59 Fahrenheit) by replacing their blood with an ice-cold salt solution.
With permission from the U.S. FDA, these trials are only being done on people who have no other hope of survival.
This procedure slows brain activity down enough to give surgeons an extra couple of hours to save the patient.
The results of the trial are still unknown.
4 ways to address gender bias in AI
Josh Feast, writing for the Harvard Business Review:
There have been several high profile cases of gender bias, including computer vision systems for gender recognition that reported higher error rates for recognizing women, specifically those with darker skin tones. In order to produce technology that is more fair, there must be a concerted effort from researchers and machine learning teams across the industry to correct this imbalance. Fortunately, we are starting to see new work that looks at exactly how that can be accomplished.
Last week, we linked to an article in The New York Times that talked about how AI systems learn our biases through the data we feed them. Gender bias is just one of those biases.
Josh believes that we can overcome this by addressing the three causes of bias, and he suggests four best practices for machine learning teams to avoid gender bias. I also like his closing statements from the article:
We have an obligation to create technology that is effective and fair for everyone. I believe the benefits of AI will outweigh the risks if we can address them collectively. It’s up to all practitioners and leaders in the field to collaborate, research, and develop solutions that reduce bias in AI for all.
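To make this concrete, here's a minimal sketch (my own illustration, not from Josh's article) of one of the most basic checks a machine learning team might run: comparing a model's error rates across gender groups, which is exactly the kind of disparity the computer vision studies above uncovered. The function name and the made-up data are mine:

```python
# A minimal, hypothetical per-group error-rate audit. Assumes you have
# model predictions and a held-out test set labelled with a
# (self-reported) gender attribute.

from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the error rate for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Example with made-up data: a large gap between groups is the red flag.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]
groups = ["woman", "man", "woman", "man", "woman", "woman", "man", "man"]

print(error_rates_by_group(y_true, y_pred, groups))
# {'woman': 0.5, 'man': 0.25} -> the model fails women twice as often
```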
How healthcare apps are helping drug users transition from addiction to recovery
Dan Matthews, writing for The Next Web:
One of the difficulties facing recovering addicts is that once they’ve completed a course of structured rehabilitation, they often do not receive the vital comprehensive aftercare that is required to continue their healthy recovery. This is an issue that is made more difficult by already over-stretched resources in the medical community. But it shouldn’t be ignored that recovery is often a time of significant stress and vulnerability for patients. As a result, we have seen software geared toward supporting patients through recovery being introduced to the addiction treatment landscape.
One of the more comprehensive community-based approaches in the field is Sober Grid — a free app that was developed by addicts who had been searching for sober communities for support in their local area. The app connects those in recovery with other sober people in their community using geosocial networking to ensure that whether users are at home or traveling, they have access to addiction support resources. It also provides users with daily quests to reinforce their sober habits, a newsfeed space to share addiction-related information with the community, and 24/7 access to certified peer coaches.
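For the curious, geosocial matching like this usually boils down to a distance calculation over user coordinates. Here's a rough sketch of finding peers within a given radius using the haversine great-circle formula. It's purely hypothetical, not Sober Grid's actual code, and the names, radius, and example coordinates are all mine:

```python
# Hypothetical sketch of geosocial peer matching: find users within a
# radius of the requester, using the haversine great-circle distance.

import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def nearby_peers(me, peers, radius_km=25.0):
    """Return the names of peers within radius_km, nearest first."""
    lat, lon = me
    in_range = sorted(
        (haversine_km(lat, lon, p_lat, p_lon), name)
        for name, (p_lat, p_lon) in peers.items()
        if haversine_km(lat, lon, p_lat, p_lon) <= radius_km
    )
    return [name for _, name in in_range]

# Made-up example: a traveling user looks for support near downtown Boston.
peers = {"alex": (42.36, -71.06), "sam": (40.71, -74.01), "jo": (42.34, -71.10)}
print(nearby_peers((42.355, -71.065), peers))  # -> ['alex', 'jo']
```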
It always makes me happy when technology can be employed in useful ways to help people. Doctors are now also using deep brain stimulation, as a last resort, to treat opioid addiction.
We street-proof our kids. Why aren’t we data-proofing them?
Siobhan O’Flynn, writing for The Conversation:
Google recently agreed to pay a US$170 million fine for illegally gathering children’s personal data on YouTube without parental consent, which is a violation under the Children’s Online Privacy Protection Act (COPPA).
The United States Federal Trade Commission and the New York State Attorney General — who together brought the case against Google — now require YouTube to obtain consent from parents before collecting or sharing personal information. In addition, creators of child-directed content must self-identify to restrict the delivery of targeted ads.
If you’re a parent, this article is a must-read!
Google and other big tech companies are using various dark patterns to bypass privacy regulations. Although this article talks about US and Canadian legislation, its lessons could easily apply to almost any other country, since most of these tech companies operate globally.
We are talking about vast fields of aggregate data, the scale of which is difficult to comprehend; this data can be parsed by the artificial intelligence recommendation algorithms that Google has pioneered, and that now steer everything from employment application processes to dating apps.
Children in the United States and Canada have another significant, persistent arena where information is being produced by them and collected by Google. Google entered the educational sphere in 2012 and now dominates the educational technology market in the U.S., giving Google unprecedented parent-sanctioned access to children’s data from kindergarten through Grade 12.
Let’s hope Oasis Labs’ platform can help combat this.
In the meantime, here are some tips for parents on raising privacy-savvy kids and teaching them about internet privacy.
Here’s a quote I’ve been thinking about:
“You can take all the pictures you want, but you can never relive the moment the same way.”
― Audrey Regan, editor of Business Statistics
With high-resolution cameras on most phones these days, people spend more time trying to capture the moment than actually enjoying it. Next time you notice something beautiful or funny, leave your phone in your pocket and just enjoy the moment. Your soul will thank you!
Wish you a brilliant day,
Neeraj
Psst … Hit the ♥️ below if you enjoyed this issue.