My first online proctored exam / Humans + Tech - #64
+ This person does not exist + Leaked location data shows Muslim prayer app tracking users + This artificial heart will soon be on the market in Europe
Hi,
I did my first online proctored exam this week. I was not fond of the experience at all. The exam was 3 hours long. I had to submit a photo of myself, my ID, and four pictures of my desk, one from each side, before being allowed to start. That part was OK.
I was then informed that once the exam started, I would not be able to leave the camera frame. I was not to speak. No one was to enter my room, and no voices should be heard in the room. I was also not allowed to take any photos of the exam. If any of these rules were violated, I would be disqualified immediately.
I am normally a very restless person. I rarely sit for more than 20-30 minutes at a time. I also avoid being in photos and videos as much as possible. Sitting down for 3 hours and making sure I was always in the camera frame while it was recording me was extremely difficult and stressful. I was even afraid to stretch or shift in my chair to adjust myself for fear that it would take me out of the camera frame.
Halfway through the test, the testing software shut down on its own and restarted. Luckily, it restored my session at the question I was on. Phew!
Suddenly a chat window came up, and a proctor told me in ALL CAPS not to leave the camera frame, talk, or take pictures of the screen. I replied that I was doing none of those things. He answered, “not yet,” then thanked me for following the rules and wished me good luck. What? It completely threw off my concentration. They had already explained all the rules at the beginning. Why disrupt me in the middle of the exam to remind me of them?
All in all, it was a very stressful way to take an exam. In the future, I would choose an in-person option over this. I feel sorry for the school and university students forced to take their exams online in this manner.
In an article from April 2020, many students said they found the whole online proctoring process very creepy [William Joel, The Verge]. Many of their experiences were worse than mine. Unfortunately, this practice started before the pandemic, which has only accelerated its adoption.
On the bright side, I did pass :)
On to this week’s articles.
This person does not exist
I enjoy reading Synapse, a newsletter by Clayton Mansel on neuroscience research. If you enjoy learning about the brain, I highly recommend subscribing to his newsletter.
This week he wrote about how easily our brains are fooled. In the article, he brings up various examples of how we miss the obvious, how we are blind to change, and how our eyes are easily tricked. But the part I wanted to highlight from his article is the part about Synthetic Media and Deep Fakes [Clayton Mansel, Synapse].
Recently… very recently… there have been advancements in Artificial Intelligence technologies that have gotten good enough to fool our senses. If you navigate over to thispersondoesnotexist.com and refresh, each time you will be presented with what looks like a portrait photograph of a random person.
Except that every picture is not actually of a real person and was instead synthetically generated by AI with perfect precision.
[…]
Soon, ‘fake news’ will mean more than biased or flawed journalism—it could mean the propagation of false video and audio that sounds indistinguishable from the real thing.
If you haven’t been convinced yet, here is an AI-generated video of former president Barack Obama:
We already have a huge problem with fake news and misinformation in all forms of media, online and offline. In most cases where deep fakes are not involved, a little research can verify the authenticity of the information. Once deep fakes are used to propagate false messages and fabricated proof, how will that affect our societies?
Verifying whether a video is genuine or a deep fake can be very time-consuming. Most regular folks will not have the skill or the time to check at the speed at which news travels via social media.
What will that do to our trust in news and information? Will we end up distrusting everything because we can’t trust anything? What happens when these deep fakes call for violence and people react without thinking about their authenticity? Whose responsibility is it going to be? And what happens to satire and comedy? There are some serious repercussions to consider.
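For the technically curious, here is a minimal, untrained sketch of the idea behind sites like thispersondoesnotexist.com: a generator network turns a vector of random noise into an image, so every page refresh is simply a new random vector. The real site uses NVIDIA’s StyleGAN2 trained on a large dataset of face photographs; the tiny PyTorch generator below is illustrative only and would produce noise, not faces, without training.

```python
# Toy illustration of a GAN-style generator: random latent vector -> image.
# Untrained and far smaller than StyleGAN2; it only demonstrates the mechanism.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            # 16x16 -> 32x32 RGB image with values in [-1, 1]
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Every "refresh" is just a new random latent vector pushed through the generator.
generator = TinyGenerator()
z = torch.randn(1, 100, 1, 1)   # random noise
fake_image = generator(z)        # shape: (1, 3, 32, 32)
print(fake_image.shape)
```

Training such a generator against a discriminator on real photographs is what eventually makes its outputs photorealistic.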
Leaked location data shows Muslim prayer app tracking users
This week, Vice reported on Muslim prayer apps that sell their users’ location data to third-party companies [Joseph Cox, Motherboard by Vice].
One user travelled through a park a few blocks south of an Islamic cultural center. Roughly every two minutes, their phone reported their physical location. Another was next to a bank two streets over from a different mosque. A third person was at a train station, again near a mosque.
Perhaps unbeknownst to these people, Salaat First (Prayer Times), an app that reminds Muslims when to pray, was recording and selling their granular location information to a data broker, which in turn sells location data to other clients. Motherboard has obtained a large dataset of those raw, precise movements of users of the app from a source. The source who provided the dataset was concerned that such sensitive information, which could potentially track Muslims going about their day including visiting places of worship, could be abused by those who buy and make use of the data. The company collecting the location data, a French firm called Predicio, has previously been linked to a supply chain of data involving a U.S. government contractor that worked with ICE, Customs and Border Protection, and the FBI.
This type of data harvesting and profiling is detrimental to society and can lead to false targeting, ethnic and religious profiling, and harassment of people.
Other apps have been found using Predicio’s software embedded in their applications to track their users’ locations, including Fu*** Weather and Weawow, both weather apps.
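To make the mechanism concrete, here is a hypothetical sketch of what a location-harvesting SDK embedded in an otherwise innocuous app might do. The function names, endpoint, and payload are all invented for illustration; this is not Predicio’s code, and the URL deliberately will not resolve.

```python
# Hypothetical sketch of an embedded tracking SDK. Everything here is invented
# for illustration; it does not describe any real company's software.
import time
import uuid
import requests  # assumed installed; standard HTTP client

BROKER_ENDPOINT = "https://example-data-broker.invalid/ingest"  # placeholder URL
DEVICE_ID = str(uuid.uuid4())  # real SDKs typically use a persistent advertising ID

def get_current_location() -> tuple[float, float]:
    """Stand-in for the platform location API (GPS / network location)."""
    return (40.7128, -74.0060)  # dummy coordinates

def report_location_forever(interval_seconds: int = 120) -> None:
    # "Roughly every two minutes, their phone reported their physical location."
    while True:
        lat, lon = get_current_location()
        payload = {
            "device_id": DEVICE_ID,  # lets a buyer stitch points into a movement trace
            "lat": lat,
            "lon": lon,
            "timestamp": int(time.time()),
        }
        requests.post(BROKER_ENDPOINT, json=payload, timeout=10)
        time.sleep(interval_seconds)

# report_location_forever()  # in a real app this would run in a background service
```

Because every report carries the same device identifier, whoever buys the data can stitch the points back into an individual’s daily movements, which is exactly what made the leaked dataset so sensitive.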
Senator Ron Wyden, whose office has been conducting its own investigation into the broader location data industry, told Motherboard in a statement that "Google and Apple took a good first step protecting Americans’ privacy when they banned the data broker X-Mode Social last year. But banning one company at a time will be an endless game of whack-a-mole. Google and Apple need to ban every one of these shady, deceptive data brokers from their app stores."
Free apps have to make money somehow. But harvesting and selling users’ data without their knowledge or explicit consent (consent buried deep in a privacy policy doesn’t count) should be banned. Apple has taken a step in the right direction by including privacy labels in its new iOS update, which reveal what data each app has access to in an easy-to-understand format. Google’s Play Store still doesn’t have such a feature. Tellingly, Google itself had not updated its own iOS apps weeks after Apple introduced the privacy labels.
This artificial heart will soon be on the market in Europe
A French company called Carmat has developed an artificial heart that works like a real heart to regulate blood flow [Vanessa Bates Ramirez, Singularity Hub].
The artificial heart is made by a French company called Carmat, and is designed for people with end-stage biventricular heart failure. That’s when both of the heart’s ventricles—chambers near the bottom of the heart that pull in and push out blood between the lungs and the rest of the body—are too weak to carry out their function.
Like a real heart, the artificial heart has two ventricles. One is for hydraulic fluid and the other for blood, and a membrane separates the two. The blood-facing side of the membrane is made of tissue from a cow’s heart. A motorized pump moves hydraulic fluid in and out of the ventricles, and that fluid moves the membrane to let blood flow through. There are four “biological” valves, thus called because they’re also made from cow heart tissue.
Embedded electronics, microprocessors, and sensors automatically regulate responses to the patient’s activity; if, for example, they’re exercising, blood flow will increase, just as it would with a real heart.
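To give a flavour of what “automatically regulate responses to the patient’s activity” might involve, here is a purely hypothetical feedback-loop sketch. It has nothing to do with Carmat’s actual firmware; the sensor fields, target range, and gain are all invented for illustration.

```python
# Hypothetical closed-loop control sketch: estimate activity, pick a target
# pump rate, and nudge the current rate toward it each cycle. Illustrative only.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    acceleration: float   # rough proxy for physical activity (arbitrary units)
    pressure: float       # hydraulic/blood pressure reading (arbitrary units)

def target_beats_per_minute(readings: SensorReadings) -> float:
    """Map activity level to a desired pump rate, clamped to a safe range."""
    resting_bpm, max_bpm = 60.0, 120.0
    activity = min(max(readings.acceleration, 0.0), 1.0)  # normalise to [0, 1]
    return resting_bpm + activity * (max_bpm - resting_bpm)

def adjust_pump(current_bpm: float, readings: SensorReadings, gain: float = 0.2) -> float:
    """Move the pump rate a fraction of the way toward the target each cycle."""
    target = target_beats_per_minute(readings)
    return current_bpm + gain * (target - current_bpm)

# Example: the patient goes from rest to moderate exercise.
bpm = 60.0
for activity in (0.0, 0.3, 0.8):
    bpm = adjust_pump(bpm, SensorReadings(acceleration=activity, pressure=1.0))
    print(round(bpm, 1))
```

The core idea is simply closed-loop control: measure activity, compute a target flow, and nudge the pump toward it on every cycle, just as the quoted passage describes blood flow rising with exertion.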
One patient has lived with Carmat’s heart for two years. It’s not yet a permanent solution; for now, it serves as a bridge while patients wait for a matching donor heart for transplant. Carmat’s eventual goal is to create a device that can replace heart transplants altogether.
Carmat received health and safety approval from the EU last year and plans to launch commercially this year.
For patients awaiting a heart transplant, this can provide a few additional years of life. And if Carmat realizes its goal of replacing heart transplants entirely, many more lives could be saved, since patients would no longer need to wait for a matching donor.
Other interesting articles from around the web
🇬🇧 Watch London’s cool, quirky augmented reality art exhibit at home [Singularity Hub]
Unreal City is an augmented reality art exhibit presented by Acute Art and Dazed Media. It took place along the Southbank of the River Thames, featuring 36 different “sculptures” that visitors could only see through the Acute Art app. Red buoys placed along the river walk marked the locations of the digital artworks; pointing a phone at the area around a buoy revealed the sculpture there. You can also experience it from home by projecting the sculptures into your living room or garden.
🧠 Personalized brain stimulation could relieve a common mental health disorder [Sarah Wells, Inverse]
Brain stimulation to treat OCD (obsessive-compulsive disorder)? A team of neuroscientists has published a paper in Nature Medicine explaining their findings: they studied the treatment in 124 people and found causal evidence of its effectiveness.
💨 AI is helping forecast the wind and manage wind farms [John P. Desmond, AI Trends]
Google and its DeepMind AI subsidiary have combined weather data with power data from the 700 megawatts of wind energy that Google sources in the central US. Machine learning has helped them predict wind power output well in advance, raising the value of that energy by roughly 20%.
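As a rough illustration of the general approach (and emphatically not Google’s or DeepMind’s actual model or data), here is a toy forecasting sketch: train a regressor on weather features to predict power output, then use its predictions to commit energy to the grid in advance. All numbers below are synthetic, invented only so the example runs.

```python
# Toy wind-power forecasting sketch with synthetic data. Illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic features: forecast wind speed (m/s), wind direction (deg), hour of day.
X = np.column_stack([
    rng.uniform(0, 25, 1000),
    rng.uniform(0, 360, 1000),
    rng.integers(0, 24, 1000),
])
# Synthetic power output (MW) that mostly tracks wind speed, plus noise.
y = 0.5 * X[:, 0] ** 1.5 + rng.normal(0, 2, 1000)

# Train on the first 800 samples, evaluate on the rest.
model = GradientBoostingRegressor().fit(X[:800], y[:800])
predictions = model.predict(X[800:])
print("mean absolute error (MW):", round(float(np.mean(np.abs(predictions - y[800:]))), 2))
```

Better advance forecasts are valuable because grid operators pay more for power that can be scheduled ahead of time than for power that shows up unannounced.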
💰 Shame, suicide and the dodgy loan apps plaguing Google’s Play Store [Varsha Bansal, WIRED]
The 28-year-old, who lived on the outskirts of the Indian city Hyderabad, had taken out Rs 70,000 ($956) in loans from at least 35 instant loan apps over six months after losing his job during the pandemic. As the date to repay each of his microloans inched closer, he started borrowing from one app to repay the others, but kept falling short. Each deadline piled on more stress. Things got so desperate that he secretly started using his wife Sri’s phone to borrow money.
Two days before his death, Sunil had landed a new job. He thought he would be able to pay off his debts and move onto the next phase of his life. But debt collectors from one of the apps installed on his wife’s phone accessed her contacts list, created a WhatsApp group, added her family members to it and started shaming her. Sri’s photo was posted in the group chat along with voice notes in the local language, Telugu, calling her a fraud who had failed to make loan payments. The aim was to humiliate her into paying. Sunil couldn’t stand his wife being shamed, family members say. “He couldn’t handle it, losing [his in-laws’] respect,” says Ganesh Kumar, Sunil’s brother-in-law. “It wasn’t his inability to pay that killed him. It was shame.”
Quote of the week
"Being tracked all day provides a lot of information, and it shouldn't be usable against you, especially if you are unaware of it."
—Anonymous, from the article, Leaked location data shows another Muslim prayer app tracking users [Motherboard by Vice]
I wish you a brilliant day ahead :)
Neeraj