Archive for the ‘privacy’ category: Page 5

Nov 10, 2022

AI Researchers At Mayo Clinic Introduce A Machine Learning-Based Method For Leveraging Diffusion Models To Construct A Multitask Brain Tumor Inpainting Algorithm

Posted by in categories: biotech/medical, information science, privacy, robotics/AI

The number of AI and, in particular, machine learning (ML) publications related to medical imaging has increased dramatically in recent years. A current PubMed search using the MeSH keywords “artificial intelligence” and “radiology” yields 5,369 papers for 2021, more than five times the number found for 2011. ML models are constantly being developed to improve healthcare efficiency and outcomes, spanning tasks from classification to semantic segmentation, object detection, and image generation. Numerous published reports in diagnostic radiology, for example, indicate that ML models can perform as well as, or even better than, medical experts in specific tasks, such as anomaly detection and pathology screening.

It is thus undeniable that, when used correctly, AI can assist radiologists and drastically reduce their labor. Despite the growing interest in developing ML models for medical imaging, significant challenges can limit such models’ practical applications or even predispose them to substantial bias. Data scarcity and data imbalance are two of these challenges. On the one hand, medical imaging datasets are frequently much smaller than natural-image datasets such as ImageNet, and pooling institutional datasets or making them public may be impossible due to patient privacy concerns. On the other hand, even the medical imaging datasets that data scientists do have access to are often imbalanced.

In other words, the volume of medical imaging data for patients with specific rare pathologies is significantly lower than for patients with common pathologies or healthy people. Training or evaluating an ML model on an insufficiently large or imbalanced dataset can produce systematic biases in model performance. Synthetic image generation is one of the primary strategies for combating data scarcity and data imbalance, alongside the public release of deidentified medical imaging datasets and approaches such as federated learning, which enables ML model development on multi-institutional datasets without data sharing.
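
To make the imbalance problem concrete, here is a minimal sketch of naive random oversampling, the crudest relative of synthetic image generation (which creates genuinely new samples rather than duplicating existing ones). The labels and counts are invented for illustration:

```python
from collections import Counter
import random

def oversample(labels, items, seed=0):
    """Duplicate minority-class items until every class matches the
    largest one. A crude stand-in for synthetic image generation,
    which creates new samples instead of repeating existing ones."""
    rng = random.Random(seed)
    by_class = {}
    for lbl, item in zip(labels, items):
        by_class.setdefault(lbl, []).append(item)
    target = max(len(group) for group in by_class.values())
    balanced = []
    for lbl, group in by_class.items():
        padding = [rng.choice(group) for _ in range(target - len(group))]
        balanced.extend((lbl, item) for item in group + padding)
    return balanced

# Imbalanced toy dataset: 8 "healthy" scans vs. 2 "tumor" scans.
labels = ["healthy"] * 8 + ["tumor"] * 2
balanced = oversample(labels, list(range(10)))
print(Counter(lbl for lbl, _ in balanced))  # both classes now have 8 samples
```

A diffusion-based inpainting model plays the same role far more usefully, because the "padded" tumor images are new, anatomically plausible samples rather than exact copies.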

Nov 7, 2022

Quantum Cryptography Is Unbreakable. So Is Human Ingenuity

Posted by in categories: business, computing, encryption, government, internet, mathematics, privacy, quantum physics, security

Circa 2016.


Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighborhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
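
As a toy illustration of the symmetric-key idea, here is a one-time-pad-style XOR cipher: Alice and Bob share a random key in advance, and the same operation both encrypts and decrypts. This is a sketch of the concept, not a production cipher:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

# Alice and Bob agree on a random key in advance (in person, per the text).
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)    # what Eve might overhear
recovered = xor_bytes(ciphertext, key)  # Bob undoes it with the shared key
print(recovered)  # b'meet at noon'
```

Without the key, the ciphertext alone tells Eve nothing; the whole difficulty is distributing that key securely in the first place, which is exactly the problem public-key cryptography addresses.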

The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
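
The multiply-easy, factor-hard asymmetry can be demonstrated directly. Trial division, sketched below, works for small numbers but becomes utterly infeasible at the hundreds-of-digits scale used in real public-key systems:

```python
def multiply(p: int, q: int) -> int:
    return p * q  # easy, even for enormous numbers

def factor(n: int):
    """Recover p and q by trial division -- fine for small n, hopeless
    for the hundreds-of-digits moduli used in practice."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = multiply(100003, 100019)  # two six-digit primes
print(factor(n))  # (100003, 100019) -- already ~100,000 loop iterations
```

Doubling the number of digits in the primes squares the work for this approach; that exponential gap between multiplying and factoring is what schemes like RSA lean on.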


Oct 24, 2022

The Future of Voice Assistants

Posted by in categories: privacy, robotics/AI

The latest breakthroughs in AI, voice biometrics, and speech recognition are paving the way to an exciting future of intuitive voice interaction with smart technologies!

Oct 5, 2022

FBI, CISA, and NSA Reveal How Hackers Targeted a Defense Industrial Base Organization

Posted by in category: privacy

FBI, CISA, and NSA have disclosed details of how multiple nation-state hacking groups targeted the network of a Defense Industrial Base organization.

Sep 26, 2022

New report offers blueprint for regulation of facial recognition technology

Posted by in categories: law, privacy, robotics/AI, surveillance

A new report from the University of Technology Sydney (UTS) Human Technology Institute outlines a model law for facial recognition technology to protect against harmful use of this technology, but also foster innovation for public benefit.

Australian law was not drafted with widespread use of facial recognition in mind. Led by UTS Industry Professors Edward Santow and Nicholas Davis, the report recommends reform to modernize Australian law, especially to address threats to privacy and other human rights.

Facial recognition and other remote biometric technologies have grown exponentially in recent years, raising concerns about privacy, mass surveillance, and the unfairness experienced, especially by people of color and women, when the technology makes mistakes.

Sep 25, 2022

Brain-Computer Interfaces Could Raise Privacy Concerns

Posted by in categories: computing, neuroscience, privacy

Brain-computer interfaces may have a profound effect on people with limited mobility or other disabilities, but experts say they also introduce privacy issues that must be mitigated.

Aug 24, 2022

Algorithms can prevent online abuse

Posted by in categories: finance, information science, privacy, robotics/AI

Millions of children log into chat rooms every day to talk with other children. One of these “children” could well be a man pretending to be a 12-year-old girl with far more sinister intentions than having a chat about “My Little Pony” episodes.

Inventor and NTNU professor Patrick Bours at AiBA is working to prevent just this type of predatory behavior. AiBA, an AI-digital moderator that Bours helped found, can offer a tool based on behavioral biometrics and algorithms that detect sexual abusers in online chats with children.
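
AiBA’s actual model is not public, but the general idea of behavioral biometrics in chat can be sketched with toy features such as message length and typing cadence. The function and values below are purely hypothetical illustrations, not AiBA’s method:

```python
from statistics import mean

def chat_features(messages, timestamps):
    """Toy behavioral features for one chat participant: average message
    length and average gap between messages (seconds). Real systems use
    far richer signals (keystroke dynamics, vocabulary, phrasing, etc.)."""
    lengths = [len(m) for m in messages]
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "avg_len": mean(lengths),
        "avg_gap_s": mean(gaps) if gaps else 0.0,
    }

# An adult typing quickly and fluently may stand out statistically
# from the children they are trying to imitate.
print(chat_features(["hi", "how r u"], [0.0, 4.0]))
```

A classifier trained on such features from known adult and child accounts could then flag sessions whose behavioral profile does not match the claimed age, alerting a human moderator.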

And now, as recently reported by Dagens Næringsliv, a national financial newspaper, the company has raised capital of NOK 7.5 million, with investors including Firda and Wiski Capital, two Norway-based firms.

Aug 2, 2022

Metaverse Headsets and Smart Glasses are the Next-gen Data Stealers

Posted by in categories: augmented reality, biotech/medical, internet, media & arts, privacy, robotics/AI, security, virtual reality


In a paper distributed via ArXiv, titled “Exploring the Unprecedented Privacy Risks of the Metaverse,” boffins at UC Berkeley in the US and the Technical University of Munich in Germany play-tested an “escape room” virtual reality (VR) game to better understand just how much data a potential attacker could access. Through a 30-person study of VR usage, the researchers – Vivek Nair (UCB), Gonzalo Munilla Garrido (TUM), and Dawn Song (UCB) – created a framework for assessing and analyzing potential privacy threats. They identified more than 25 examples of private data attributes available to potential attackers, some of which would be difficult or impossible to obtain from traditional mobile or web applications. The metaverse now rapidly becoming part of our world has long had a precursor in the gaming community: interaction-based games like Second Life, Pokémon Go, and Minecraft have served as virtual social platforms for years. The founder of Second Life, Philip Rosedale, and many other security experts have lately been vocal about Meta’s impact on data privacy. Since Meta’s metaverse rests on the same core concepts, similar data privacy issues can reasonably be expected there.
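
The kind of attribute leakage the researchers describe can be illustrated with a deliberately simplified sketch: a user’s standing height can be estimated from headset telemetry alone. The 93% eye-height ratio and the telemetry values below are illustrative assumptions, not numbers or code from the paper:

```python
def estimate_height_m(headset_y_samples):
    """Estimate a user's standing height from headset y-positions
    (meters above the floor). Eye height is taken as ~93% of stature,
    an illustrative anthropometric assumption."""
    eye_height = max(headset_y_samples)  # tallest observed pose
    return eye_height / 0.93

telemetry = [1.58, 1.60, 1.59, 1.61]  # invented headset samples
print(round(estimate_height_m(telemetry), 2))  # ~1.73 m
```

Height is just one attribute; similar one-liners over controller and headset streams can hint at handedness, reaction time, and even physical fitness, none of which a web page could normally observe.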

There has been a buzz in the tech market that by the end of 2022 the metaverse could revive AR/VR device shipments, lifting them to 14.19 million units from 9.86 million in 2021, a year-over-year increase of roughly 44%. The AR/VR device market is expected to boom despite component shortages and the difficulty of developing new technologies. The growth momentum will also be driven by the increased demand for remote interactivity stemming from the pandemic. But what will happen when these VR or metaverse headsets start stealing your precious data? Not just headsets but smart glasses too are prime suspects when it comes to privacy concerns.


Jul 30, 2022

Detecting Deepfake Video Calls Through Monitor Illumination

Posted by in categories: privacy, security

A new collaboration between a researcher from the United States’ National Security Agency (NSA) and the University of California at Berkeley offers a novel method for detecting deepfake content in a live video context – by observing the effect of monitor lighting on the appearance of the person at the other end of the video call.
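
The published detector is more sophisticated, but the core intuition can be sketched: flash a known brightness pattern on the caller’s screen and check whether the observed brightness of their face tracks it. A live face reflects the monitor almost instantly, while a deepfake pipeline typically fails to re-light the synthetic face. All numbers and the threshold below are illustrative assumptions, not values from the paper:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Displayed screen brightness per frame, and measured face brightness.
pattern   = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9]
live_face = [0.32, 0.61, 0.30, 0.63, 0.31, 0.60]  # tracks the pattern
fake_face = [0.45, 0.44, 0.46, 0.44, 0.45, 0.45]  # flat: no re-lighting

print(pearson(pattern, live_face) > 0.9)  # True  -> plausibly live
print(pearson(pattern, fake_face) > 0.9)  # False -> suspicious
```

In practice the probe pattern would need to be unpredictable (so an attacker cannot pre-render the response) and the measurement robust to ambient lighting, which is where the real research effort lies.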

Jul 20, 2022

The FBI Forced A Suspect To Unlock Amazon’s Encrypted App Wickr With Their Face

Posted by in categories: encryption, government, law enforcement, mobile phones, privacy

In November last year, an undercover agent with the FBI was inside a group on Amazon-owned messaging app Wickr, with a name referencing young girls. The group was devoted to sharing child sexual abuse material (CSAM) within the protection of the encrypted app, which is also used by the U.S. government, journalists and activists for private communications. Encryption makes it almost impossible for law enforcement to intercept messages sent over Wickr, but this agent had found a way to infiltrate the chat, where they could start piecing together who was sharing the material.

As part of the investigation into the members of this Wickr group, the FBI used a previously unreported search warrant method to force one member to unlock the encrypted messaging app using his face. The FBI has previously forced users to unlock an iPhone with Face ID, but this search warrant, obtained by Forbes, represents the first known public record of a U.S. law enforcement agency getting a judge’s permission to unlock an encrypted messaging app with someone’s biometrics.

According to the warrant, the FBI first tracked down the suspect by sending a request for information, via an unnamed foreign law enforcement partner, to the cloud storage provider hosting the illegal images. That gave them the Gmail address the FBI said belonged to Christopher Terry, a 53-year-old Knoxville, Tennessee resident, who had prior convictions for possession of child exploitation material. It also provided IP addresses used to create the links to the CSAM. From there, investigators asked Google and Comcast via administrative subpoenas (data requests that don’t have the same level of legal requirements as search warrants) for more identifying information that helped them track down Terry and raid his home.
