
It sounds like a scene from a spy thriller. An attacker gets through the IT defenses of a nuclear power plant and feeds it fake, realistic data, tricking its computer systems and personnel into thinking operations are normal. The attacker then disrupts the function of key plant machinery, causing it to misperform or break down. By the time system operators realize they’ve been duped, it’s too late, with catastrophic results.

The scenario isn’t fictional; it happened in 2010, when the Stuxnet worm was used to damage nuclear centrifuges in Iran. And as ransomware and other cyberattacks around the world increase, system operators worry more about these sophisticated “false data injection” strikes. In the wrong hands, the computer models and data analytics—based on artificial intelligence—that ensure smooth operation of today’s electric grids, manufacturing facilities, and power plants could be turned against themselves.

Purdue University’s Hany Abdel-Khalik has come up with a powerful response: make the computer models that run these cyberphysical systems both self-aware and self-healing. Using the background noise within these systems’ data streams, Abdel-Khalik and his students embed invisible, ever-changing, one-time-use signals that turn passive components into active watchers. Even if an attacker is armed with a perfect duplicate of a system’s model, any attempt to introduce falsified data will be immediately detected and rejected by the system itself, requiring no human response.
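The cryptographic details of Abdel-Khalik's technique aren't given here, but the core idea—a one-time-use signal that an attacker cannot forge even with a perfect copy of the system model—can be illustrated with a keyed, counter-bound tag on each reading. This is an illustrative sketch, not the Purdue implementation; the key provisioning and packet format are assumptions.

```python
import hmac, hashlib, secrets

# Hypothetical sketch: each data packet carries a one-time keyed tag
# derived from a shared secret and a monotonically increasing counter.
# An attacker who fabricates or replays readings -- even with a perfect
# duplicate of the plant model -- cannot produce a valid tag without the key.

KEY = secrets.token_bytes(32)  # in practice, provisioned out of band

def tag_reading(counter: int, reading: float, key: bytes = KEY) -> bytes:
    """Bind a sensor reading to a one-time counter with a keyed MAC."""
    msg = f"{counter}:{reading:.6f}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_reading(counter: int, reading: float, tag: bytes,
                   key: bytes = KEY) -> bool:
    """Accept the reading only if its tag matches for this exact counter."""
    expected = hmac.new(key, f"{counter}:{reading:.6f}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

t = tag_reading(1, 512.301)
assert verify_reading(1, 512.301, t)        # legitimate packet passes
assert not verify_reading(1, 499.999, t)    # injected false data rejected
assert not verify_reading(2, 512.301, t)    # replay at a later counter rejected
```

Because the counter never repeats, a captured tag is useless for any later packet, which is the "one-time-use" property the article describes.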

There are 40.3 million victims of human trafficking globally, according to the International Labor Organization. Marinus Analytics, a startup based in Pittsburgh, Pennsylvania, hopes to make a dent in that number. The company’s mission is to “serve those working on the frontlines of public safety by developing technology for them to disrupt human trafficking, child abuse, and cyber fraud.” For its achievements, Marinus won $500,000 as part of its third-place ranking in the 2021 IBM Watson AI XPRIZE competition. The startup is the brainchild of three co-founders: Cara Jones, Emily Kennedy, and Artur Dubrawski, who launched it out of the Robotics Institute at Carnegie Mellon University in 2014.

Marinus implements its mission primarily through its set of AI-based tools called Traffic Jam, whose goal is “to find missing persons, stop human trafficking and fight organized crime.”

Traditionally, finding a missing person would involve taping a picture of the person to the computer monitor and then manually combing through thousands, if not millions, of online ads on adult services websites to see if any of the posted pictures match. Such a process is time-consuming and tiring, and a human detective’s attention can start to flag after long hours at the computer doing the same task endlessly.
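This is exactly the kind of search that automated image matching accelerates. As an illustrative sketch (not Marinus's actual algorithm), a tool can reduce each photo to a compact perceptual "average hash" and compare hashes by Hamming distance, so near-duplicates still match after compression or resizing; images here are stand-in grayscale thumbnails.

```python
# Illustrative perceptual matching: hash each image, then compare hashes
# instead of eyeballing millions of photos.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255), e.g. an 8x8 thumbnail."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the image's mean brightness or not.
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; small distance means likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))

query = [[200, 50], [60, 210]]        # tiny stand-in for the missing person's photo
candidate = [[198, 55], [58, 205]]    # near-duplicate from an online ad
unrelated = [[10, 240], [250, 20]]    # a different image

assert hamming(average_hash(query), average_hash(candidate)) == 0
assert hamming(average_hash(query), average_hash(unrelated)) > 0
```

Production systems use far richer features (face embeddings rather than brightness bits), but the workflow is the same: hash once, then search by distance.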

Cloud-based content management provider Box has announced a new “deep scan” functionality that checks files as they are uploaded to identify sophisticated malware and avert attacks.

The new capabilities constitute part of Box Shield, which uses machine learning to prevent data leaks, detect threats, and spot any kind of abnormal behavior. In April of last year, Box added a slew of automated malware detection features to the mix, allowing Box Shield customers to spot malicious content that may already have been uploaded to a Box account. However, so far this has leaned heavily on “known” threats from external intelligence databases. Moving forward, Box said it will mesh deep learning technology with external threat intelligence capabilities to analyze files for malicious scripts, macros, and executables to protect companies from zero-day (unknown) vulnerabilities.

When a user uploads an infected file, Box will quarantine it for inspection but will still allow the user to view a preview of the file and continue working.

These attacks were perpetrated by a newly discovered Iranian state-sponsored threat group — dubbed MalKamak — that has been operating under the radar since at least 2018.

This operation has been ongoing for years, continuously evolving its malware while successfully evading most security tools. The authors of ShellClient invested considerable effort into evading detection by antivirus and other security products, leveraging multiple obfuscation techniques and, more recently, a Dropbox client for command and control (C2), making it very hard to detect. By studying the ShellClient development cycles, Cybereason researchers were able to observe how ShellClient has morphed over time from a rather simple reverse shell into a sophisticated RAT used to facilitate cyber espionage operations.

The most recent ShellClient versions observed in Operation GhostShell follow the trend of abusing cloud-based storage services — in this case, the popular Dropbox service. The ShellClient authors used Dropbox to exfiltrate the stolen data and send commands to the malware. Threat actors have increasingly adopted this tactic due to its simplicity and the ability to effectively blend in with legitimate network traffic. Ultimately, this discovery tells researchers a lot about the tactics that advanced attackers are using to defeat security solutions.
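One reason cloud-storage C2 still leaves a trace is that malware tends to poll its dead drop on a regular schedule, while human traffic is bursty. As a hedged, defender-side sketch (the log format and thresholds are assumptions, not anything from the Cybereason report), a hunt query can flag hosts whose requests to a storage domain arrive at near-constant intervals:

```python
from statistics import pstdev

def looks_like_beaconing(timestamps, max_jitter=2.0, min_requests=5):
    """timestamps: sorted request times (seconds) from one host to one domain.
    Near-constant gaps between requests suggest automated polling (beaconing)."""
    if len(timestamps) < min_requests:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) < max_jitter  # low jitter -> suspicious

# A process polling Dropbox every ~60 s vs. a human browsing irregularly.
bot = [0, 60, 120.5, 180, 240.2, 300.1]
human = [0, 12, 95, 110, 340, 360]

assert looks_like_beaconing(bot)
assert not looks_like_beaconing(human)
```

Real implants add jitter to defeat exactly this check, so production detections combine interval analysis with volume, timing-of-day, and payload-size features.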

The cybersecurity world is evolving rapidly — perhaps more quickly than at any other time in its history. It would be easy to attribute the cyber hiccups that many businesses face to the fact that they are simply unable to keep up with bad actors.

The facts are more complicated. While it’s true that new threats are emerging every day, more often than not, breaches result from long-standing organizational issues, not a sudden upturn in the ingenuity of cybercriminals.


The transaction-based communications system ensures robot teams achieve their goal even if some robots are hacked.

Imagine a team of autonomous drones equipped with advanced sensing equipment, searching for smoke as they fly high above the Sierra Nevada mountains. Once they spot a wildfire, these leader robots relay directions to a swarm of firefighting drones that speed to the site of the blaze.

But what would happen if one or more leader robots were hacked by a malicious agent and began sending incorrect directions? As follower robots are led farther from the fire, how would they know they had been duped?

The use of blockchain technology as a communication tool for a team of robots could provide security and safeguard against deception, according to a study by researchers at MIT and Polytechnic University of Madrid, which was published today in IEEE Transactions on Robotics. The research may also have applications in cities where multirobot systems of self-driving cars are delivering goods and moving people across town.
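The mechanics can be illustrated with a minimal hash chain; this is a sketch of the general idea, not the paper's protocol (the block fields and validation rules here are assumptions). Each direction message commits to the hash of the previous one, so a compromised leader that rewrites an earlier direction breaks every later link, and followers detect the tampering by re-verifying the chain.

```python
import hashlib, json

def make_block(prev_hash, leader_id, direction):
    """Append-only record: a direction message bound to the previous block's hash."""
    body = json.dumps({"prev": prev_hash, "leader": leader_id,
                       "direction": direction}, sort_keys=True)
    return {"body": body, "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_is_valid(chain):
    """Followers re-derive every hash and check each link to the prior block."""
    prev = "genesis"
    for block in chain:
        body = json.loads(block["body"])
        if body["prev"] != prev:
            return False
        if hashlib.sha256(block["body"].encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "genesis"
for heading in ["NE", "NE", "E"]:           # leader broadcasts headings
    block = make_block(prev, leader_id=1, direction=heading)
    chain.append(block)
    prev = block["hash"]

assert chain_is_valid(chain)

# A hacked leader rewrites an old direction; the chain no longer verifies.
tampered = json.loads(chain[1]["body"])
tampered["direction"] = "SW"
chain[1]["body"] = json.dumps(tampered, sort_keys=True)
assert not chain_is_valid(chain)
```

The MIT/Madrid work adds the pieces this sketch omits—transactions, distributed consensus among followers, and tolerance of a bounded number of malicious leaders—but the tamper-evidence property comes from the same chained-hash structure.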