A Tesla driver can now unlock his car without using his smartphone. Thanks to a chip implanted in his hand, he will never lose his keys again.
Smartphones, tablets, computer screens: all digital media have detrimental effects on your brain. That is the position defended by Professor Manfred Spitzer, a neuroscientist and author of several books. You might like what you hear, you might not, but don't say you haven't been warned, especially if you have kids running around with smartphones all day long.
Over the past decade, digital cameras have been widely adopted across society and are now used extensively in mobile phones, security surveillance, autonomous vehicles, and facial recognition. These cameras generate enormous amounts of image data, raising growing concerns about privacy protection.
Some existing methods address these concerns by applying algorithms, such as image blurring or encryption, to conceal sensitive information in the acquired images. However, such methods still risk exposing sensitive data, because the raw images are captured before any digital processing hides or encrypts them, and running these algorithms consumes additional power. Other efforts use customized cameras that degrade image quality so that identifiable information is concealed, but these approaches sacrifice image quality for all objects of interest, which is undesirable, and they remain vulnerable to adversarial attacks that can recover the recorded sensitive information.
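To make the limitation concrete, here is a minimal sketch of the conventional post-capture approach described above (not the UCLA team's method): detect faces in a frame that has already been captured and blur them. The full-resolution raw image sits in memory before any concealment happens, which is exactly the exposure risk and extra computation the paragraph points out. The input filename is a hypothetical placeholder; the sketch assumes OpenCV (opencv-python) is installed.

```python
import cv2

def blur_faces(frame):
    """Detect faces in an already-captured frame and blur those regions."""
    # Haar cascade bundled with opencv-python
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        # Heavy Gaussian blur on the face region only; the raw sensitive
        # pixels have still passed through memory before this step.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

if __name__ == "__main__":
    image = cv2.imread("street_scene.jpg")  # hypothetical input file
    cv2.imwrite("street_scene_blurred.jpg", blur_faces(image))
```

The point of the new imager described below is to avoid this pattern entirely: unwanted object types are never recorded in the first place, so there is no raw frame to leak and no post-processing step to power.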
A new research paper published in eLight demonstrated a new paradigm to achieve privacy-preserving imaging by building a fundamentally new type of imager designed by AI. In their paper, UCLA researchers, led by Professor Aydogan Ozcan, presented a smart camera design that images only certain types of desired objects, while instantaneously erasing other types of objects from its images without requiring any digital processing.
Apple is set to expand ads to new areas of your iPhone and iPad in search of its next big revenue driver. Also: The company slows its pace of acquiring startups, and Peloton embarks on a major overhaul.
Last week in Power On: Apple’s delay of iPadOS 16 and Stage Manager keeps the focus on the iPhone 14.
When Carnegie Mellon University doctoral candidates I-Hsuan Kao and Ryan Muzzio started working together, a switch flicked on. Then off.
Working in the Department of Physics' Lab for Investigating Quantum Materials, Interfaces and Devices (LIQUID) Group, Kao, Muzzio and other research partners showed proof of concept that running an electrical current through a novel two-dimensional material could control the magnetic state of a neighboring magnetic material without applying an external magnetic field.
The groundbreaking work, which was published in Nature Materials in June and has a related patent pending, has potential applications for data storage in consumer products such as digital cameras, smartphones and laptops.
Researchers have observed the formation of 2D ice on gold surfaces that were thought to be too hydrophilic and too rough to support this type of ice.
Mobile devices use facial recognition technology to help users quickly and securely unlock their phones, make a financial transaction or access medical records. But facial recognition technologies that employ a specific user-detection method are highly vulnerable to deepfake-based attacks that could lead to significant security concerns for users and applications, according to new research involving the Penn State College of Information Sciences and Technology.
The researchers found that most application programming interfaces that use facial liveness verification—a feature of facial recognition technology that uses computer vision to confirm the presence of a live user—don't always detect digitally altered photos or videos of individuals made to look like a live version of someone else, also known as deepfakes. Applications that do use these detection measures are also significantly less effective at identifying deepfakes than their providers claim.
“In recent years we have observed significant development of facial authentication and verification technologies, which have been deployed in many security-critical applications,” said Ting Wang, associate professor of information sciences and technology and one principal investigator on the project. “Meanwhile, we have also seen substantial advances in deepfake technologies, making it fairly easy to synthesize live-looking facial images and video at little cost. We thus ask the interesting question: Is it possible for malicious attackers to misuse deepfakes to fool the facial verification systems?”
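A rough sketch of the kind of probe such a study implies: submit both a genuine selfie video and a deepfake of the same subject to a liveness-verification service and compare the verdicts. The endpoint URL, request fields, and response format below are hypothetical placeholders, not any vendor's real API or the researchers' actual test harness.

```python
import requests

LIVENESS_ENDPOINT = "https://api.example.com/v1/liveness-check"  # hypothetical

def check_liveness(video_path: str, api_key: str) -> bool:
    """Upload a video and return the service's live/not-live verdict."""
    with open(video_path, "rb") as f:
        response = requests.post(
            LIVENESS_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"video": f},
            timeout=60,
        )
    response.raise_for_status()
    return response.json().get("is_live", False)  # assumed response field

if __name__ == "__main__":
    key = "YOUR_API_KEY"
    genuine = check_liveness("genuine_selfie.mp4", key)    # expected: True
    deepfake = check_liveness("deepfake_selfie.mp4", key)  # should be False
    print(f"genuine accepted: {genuine}, deepfake accepted: {deepfake}")
    # If both come back True, the service cannot distinguish the deepfake,
    # which is the failure mode the researchers report for most APIs tested.
```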
Samsung unveiled the Galaxy Z Fold 4 and Z Flip 4 alongside new Galaxy Buds and the Galaxy Watch 5.