Researchers used electromagnetic signals to steal and replicate AI models from a Google Edge TPU with 99.91% accuracy, exposing significant vulnerabilities in AI systems and calling for urgent protective measures.

Researchers have shown that it’s possible to steal an artificial intelligence (AI) model without directly hacking the device it runs on. This innovative technique requires no prior knowledge of the software or architecture supporting the AI, making it a significant advancement in model extraction methods.

“AI models are valuable, we don’t want people to steal them,” says Aydin Aysu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Building a model is expensive and requires significant computing resources. But just as importantly, when a model is leaked or stolen, the model also becomes more vulnerable to attacks – because third parties can study the model and identify any weaknesses.”
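Extraction attacks of this kind typically work by matching captured electromagnetic traces against signatures of candidate layer configurations. The following is a toy sketch of that matching step only, with made-up signal data and candidate names; it is not the researchers' actual method or tooling.

```python
# Toy sketch of template matching in EM side-channel model extraction:
# compare a captured trace against per-layer candidate signatures and
# pick the closest match. All names and data here are illustrative.

def normalized(sig):
    """Return a zero-mean, unit-norm copy of a signal."""
    mean = sum(sig) / len(sig)
    centered = [x - mean for x in sig]
    norm = sum(x * x for x in centered) ** 0.5 or 1.0
    return [x / norm for x in centered]

def correlation(a, b):
    """Cosine similarity between two equal-length signals."""
    return sum(x * y for x, y in zip(normalized(a), normalized(b)))

def best_match(trace, templates):
    """Return the candidate layer config whose template best matches."""
    return max(templates, key=lambda name: correlation(trace, templates[name]))

# Two hypothetical candidate layer configurations and a noisy capture.
templates = {
    "conv3x3_stride1": [0.1, 0.9, 0.8, 0.2, 0.1],
    "conv5x5_stride2": [0.9, 0.1, 0.2, 0.8, 0.9],
}
captured = [0.12, 0.85, 0.79, 0.25, 0.08]
print(best_match(captured, templates))  # → conv3x3_stride1
```

Repeating this comparison layer by layer is what lets an attacker reconstruct an architecture without any insight into the software stack.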

By November 2024, 15 U.S. states had established regulations on ghost guns, though exact requirements vary. The rules typically require a serial number, background checks for firearm component purchases and reporting to authorities that a person is producing 3D-printed guns.

For instance, in New Jersey, a 2019 law mandates that all ghost guns have a serial number and be registered. Under current New York law, possession or distribution of a 3D-printed gun is classified as a misdemeanor. However, a proposed law seeks to elevate the manufacturing of firearms using 3D-printing technology to a felony offense.

As technology advances and rules evolve, criminals who use 3D-printed firearms will continue to pose threats to public safety and security, and governments will continue playing catch-up to effectively regulate these weapons.

The research team, led by physics professor Nuh Gedik, concentrated on a material called FePS₃, a type of antiferromagnet that transitions to a non-magnetic state at around −247°F. They hypothesized that precisely exciting the vibrations of FePS₃’s atoms with lasers could disrupt its typical antiferromagnetic alignment and induce a new magnetic state.

In conventional magnets (ferromagnets), all atomic spins align in the same direction, making their magnetic field easy to control. In contrast, antiferromagnets have a more complex up-down-up-down spin pattern that cancels out, resulting in zero net magnetization. While this property makes antiferromagnets highly resistant to stray magnetic influences – an advantage for secure data storage – it also creates challenges in intentionally switching them between “0” and “1” states for computing.

Gedik’s innovative laser-driven approach seeks to overcome this obstacle, potentially unlocking antiferromagnets for future high-performance memory and computational technologies.

Run by the team at orchestration, AI, and automation platform Tines, the Tines library contains pre-built workflows shared by real security practitioners from across the community, all of which are free to import and deploy via the Community Edition of the platform.

Their twice-yearly “You Did What with Tines?!” competition highlights some of the most interesting workflows submitted by their users, many of which demonstrate practical applications of large language models (LLMs) to address complex challenges in security operations.

One recent winner is a workflow designed to automate CrowdStrike RFM reporting. Developed by Tom Power, a security analyst at The University of British Columbia, it uses orchestration, AI and automation to reduce the time spent on manual reporting.
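The reporting step such a workflow automates can be illustrated with a short sketch: summarizing which hosts are in CrowdStrike's Reduced Functionality Mode (RFM), grouped by platform. The host-record fields below are assumptions for illustration, not the actual Falcon API schema or the winning workflow's logic.

```python
# Hedged sketch of automated RFM reporting: count hosts flagged as
# being in Reduced Functionality Mode and group them by platform.
# Field names ("reduced_functionality_mode", "platform_name") are
# illustrative stand-ins for whatever the real data source returns.
from collections import Counter

def rfm_report(hosts):
    """Summarize hosts in RFM, grouped by platform."""
    in_rfm = [h for h in hosts if h.get("reduced_functionality_mode") == "yes"]
    by_platform = Counter(h.get("platform_name", "unknown") for h in in_rfm)
    lines = [f"{len(in_rfm)} of {len(hosts)} hosts in RFM"]
    lines += [f"  {plat}: {n}" for plat, n in sorted(by_platform.items())]
    return "\n".join(lines)

hosts = [
    {"hostname": "lab-01", "platform_name": "Windows",
     "reduced_functionality_mode": "yes"},
    {"hostname": "lab-02", "platform_name": "Linux",
     "reduced_functionality_mode": "no"},
    {"hostname": "lab-03", "platform_name": "Windows",
     "reduced_functionality_mode": "yes"},
]
print(rfm_report(hosts))
```

An orchestration platform would feed real host records into a step like this and forward the summary to a ticketing or chat channel, eliminating the manual collation.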

The outgoing head of the US Department of Homeland Security believes Europe’s “adversarial” relationship with tech companies is hampering a global approach to regulating artificial intelligence that could result in security vulnerabilities.

Alejandro Mayorkas told the Financial Times the US — home of the world’s top artificial intelligence groups, including OpenAI and Google — and Europe are not on a “strong footing” because of a difference in regulatory approach.

He stressed the need for “harmonisation across the Atlantic”, expressing concern that relationships between governments and the tech industry are “more adversarial” in Europe than in the US.

As many as 296,000 Prometheus Node Exporter instances and 40,300 Prometheus servers have been estimated to be publicly accessible over the internet, making them a huge attack surface that could put data and services at risk.

The fact that sensitive information, such as credentials, passwords, authentication tokens, and API keys, could be leaked through internet-exposed Prometheus servers has been documented previously by JFrog in 2021 and Sysdig in 2022.

“Unauthenticated Prometheus servers enable direct querying of internal data, potentially exposing secrets that attackers can exploit to gain an initial foothold in various organizations,” the researchers said.
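To make the leak concrete: an unauthenticated Prometheus server will return its full scrape configuration from the `/api/v1/status/config` endpoint, and that configuration can embed credentials. The helper below scans such a config dump offline for credential-like keys; the key list is a rough heuristic of my own, not an exhaustive or official one.

```python
# Offline sketch: flag credential-like keys in a Prometheus config dump
# (e.g. the YAML returned by an exposed /api/v1/status/config endpoint).
# The keyword list is illustrative, not exhaustive.
import re

SECRET_KEYS = re.compile(
    r"^\s*(password|bearer_token|basic_auth|credentials|secret|api_key)\b",
    re.IGNORECASE,
)

def find_secret_lines(config_yaml):
    """Return (line_number, line) pairs that look like embedded secrets."""
    return [
        (i, line.strip())
        for i, line in enumerate(config_yaml.splitlines(), start=1)
        if SECRET_KEYS.match(line)
    ]

sample = """\
scrape_configs:
  - job_name: app
    basic_auth:
      username: monitor
      password: hunter2
"""
for lineno, line in find_secret_lines(sample):
    print(f"line {lineno}: {line}")
```

Anything this kind of scan would flag on your own servers is exactly what an anonymous visitor to an exposed endpoint could read, which is why the researchers recommend putting authentication in front of Prometheus.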

A team of Rice University scientists has solved a long-standing problem in thermal imaging, making it possible to capture clear images of objects through hot windows. Imaging applications in a range of fields—such as security, surveillance, industrial research and diagnostics—could benefit from the research findings, which were reported in the journal Communications Engineering.

“Say you want to use a thermal camera to monitor processes inside a high-temperature reactor chamber,” said Gururaj Naik, an associate professor of electrical and computer engineering at Rice and corresponding author on the study. “The problem you’d be facing is that the thermal radiation emitted by the window itself overwhelms the camera, obscuring the view of objects on the other side.”

A possible solution could involve coating the window in a material that suppresses thermal light emission toward the camera, but this would also render the window opaque. To get around this issue, the researchers developed a coating that relies on an engineered asymmetry to filter out the thermal noise of a hot window, doubling the contrast of thermal imaging compared to conventional methods.

A vulnerability in WPForms, a WordPress plugin used in over 6 million websites, could allow subscriber-level users to issue arbitrary Stripe refunds or cancel subscriptions.

Tracked under CVE-2024-11205, the flaw was categorized as a high-severity problem despite the authentication prerequisite. Given that membership systems are enabled on many sites, exploitation may be fairly easy in practice.

The issue impacts WPForms versions 1.8.4 through 1.9.2.1; a patch was pushed in version 1.9.2.2, released last month.
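The bug class behind CVE-2024-11205 is a missing authorization check: an authenticated but low-privilege user reaching a privileged action. Below is a language-neutral sketch of the guard that prevents it (in Python for consistency with the other examples here; WPForms itself is a PHP plugin, where the equivalent is a `current_user_can()` capability check). Role names and the function shape are illustrative.

```python
# Sketch of the missing-authorization bug class: a refund action must
# verify the caller's role, not just that the caller is logged in.
# Role names are illustrative, not WPForms internals.
ALLOWED_REFUND_ROLES = {"administrator", "shop_manager"}

def process_refund(user_role, payment_id, amount):
    """Refuse refund requests from users lacking an allowed role."""
    if user_role not in ALLOWED_REFUND_ROLES:
        raise PermissionError(f"role '{user_role}' may not issue refunds")
    # ...only now call the payment provider's refund API...
    return {"payment_id": payment_id, "refunded": amount}

print(process_refund("administrator", "pi_123", 25.00))
try:
    process_refund("subscriber", "pi_123", 25.00)
except PermissionError as exc:
    print("blocked:", exc)
```

The vulnerable code path effectively skipped the role check, so any subscriber-level account could trigger the privileged refund and subscription-cancellation actions.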

Proprietary LLMs, in contrast to self-hosted open-source models, typically offer robust security features but still pose data privacy and control risks. Using these models involves sharing sensitive data with a third-party provider, which could lead to regulatory penalties if a breach occurs.

LLMs also lack transparency regarding their training data and how datasets are formed. Be mindful of potential bias and fairness issues and consider a human-in-the-loop approach, where specialists review and manage the model’s output.
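The human-in-the-loop approach mentioned above can be sketched as a simple triage gate: outputs below a confidence threshold go to a review queue instead of being released automatically. The threshold, fields, and confidence scores here are illustrative assumptions.

```python
# Hedged sketch of a human-in-the-loop gate: auto-approve only
# high-confidence LLM outputs, route the rest to human reviewers.
# Threshold and record fields are illustrative.
def triage(outputs, threshold=0.8):
    """Split model outputs into auto-approved and needs-human-review."""
    approved, review = [], []
    for item in outputs:
        (approved if item["confidence"] >= threshold else review).append(item)
    return approved, review

outputs = [
    {"text": "Close ticket: duplicate", "confidence": 0.95},
    {"text": "Grant admin access",      "confidence": 0.55},
]
approved, review = triage(outputs)
print(len(approved), "auto-approved;", len(review), "sent to reviewers")
```

In regulated settings, the review queue, not the threshold, is the real control: specialists see every borderline output before it acts on anything.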

LLMs are most effective when used to streamline complex processes and drive innovation. To leverage these models responsibly, prioritize data governance—especially in highly regulated industries.