Toggle light / dark theme

A critical vulnerability found in a remote terminal unit (RTU) made by Slovenia-based industrial automation company Inea can expose industrial organizations to remote hacker attacks.

The existence of the vulnerability came to light last week, when the US Cybersecurity and Infrastructure Security Agency (CISA) published an advisory to inform organizations. The vendor has released a firmware update that patches the issue.

The security hole, tracked as CVE-2023–2131 with a CVSS score of 10, impacts Inea ME RTUs running firmware versions prior to 3.36. This OS command injection bug could allow remote code execution, CISA said.

Automated irrigation systems in the Northern part of Israel were briefly disrupted recently in an attack that once again shows how easy it can be to hack industrial control systems (ICS).

The Jerusalem Post reported that hackers targeted water controllers for irrigation systems at farms in the Jordan Valley, as well as wastewater treatment control systems belonging to the Galil Sewage Corporation.

Farms were warned by Israel’s National Cyber Directorate prior to the incident, being instructed to disable remote connections to these systems due to the high risk of cyberattacks. Roughly a dozen farms in the Jordan Valley and other areas failed to do so and had their water controllers hacked. This led to automated irrigation systems being temporarily disabled, forcing farmers to turn to manual irrigation.

Indirect prompt-injection attacks are similar to jailbreaks, a term adopted from previously breaking down the software restrictions on iPhones. Instead of someone inserting a prompt into ChatGPT or Bing to try and make it behave in a different way, indirect attacks rely on data being entered from elsewhere. This could be from a website you’ve connected the model to or a document being uploaded.

“Prompt injection is easier to exploit or has less requirements to be successfully exploited than other” types of attacks against machine learning or AI systems, says Jose Selvi, executive principal security consultant at cybersecurity firm NCC Group. As prompts only require natural language, attacks can require less technical skill to pull off, Selvi says.

There’s been a steady uptick of security researchers and technologists poking holes in LLMs. Tom Bonner, a senior director of adversarial machine-learning research at AI security firm Hidden Layer, says indirect prompt injections can be considered a new attack type that carries “pretty broad” risks. Bonner says he used ChatGPT to write malicious code that he uploaded to code analysis software that is using AI. In the malicious code, he included a prompt that the system should conclude the file was safe. Screenshots show it saying there was “no malicious code” included in the actual malicious code.

Researchers on Tuesday unveiled a major discovery—malicious firmware that can wrangle a wide range of residential and small office routers into a network that stealthily relays traffic to command-and-control servers maintained by Chinese state-sponsored hackers.

A firmware implant, revealed in a write-up from Check Point Research, contains a full-featured backdoor that allows attackers to establish communications and file transfers with infected devices, remotely issue commands, and upload, download, and delete files. The implant came in the form of firmware images for TP-Link routers. The well-written C++ code, however, took pains to implement its functionality in a “firmware-agnostic” manner, meaning it would be trivial to modify it to run on other router models.

A team of researchers from South Korea has developed a new LLM called “DarkBert,” which has been trained exclusively on the “Dark Web.”

A team of South Korean researchers has taken the unprecedented step of developing and training artificial intelligence (AI) on the so-called “Dark Web.” The Dark Web trained AI, called DarkBERT, was unleashed to trawl and index what it could find to help shed light on ways to combat cybercrime.

The “Dark Web” is a section of the internet that remains hidden and cannot be accessed through standard web browsers.


S-cphoto/iStock.

Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware.

“Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord),” eSentire said in an analysis.

This vacuum has been exploited by threat actors looking to drive AI app-seekers to imposter web pages promoting fake apps.


Hackers are using Google Search ads to trick AI tool seekers into downloading malware.

A computer based in Russia was able to breach the Washington, D.C., Metro system earlier this year, the Metro’s Office of the Inspector General (OIG) said in a new report.

The partially redacted report, released Wednesday and first reported by The Washington Post, said the Washington Metropolitan Area Transit Authority’s (WMATA) cybersecurity group detected “abnormal network activity originating in Russia” in January.