
(2021). Nuclear Technology, Vol. 207, No. 8, pp. 1163–1181.


In nuclear engineering applications, the nation’s leading cybersecurity programs are developing digital solutions to support reactor control for both on-site and remote operation. Many of the advanced reactor technologies currently under development by the nuclear industry, such as small modular reactors and microreactors, require secure architectures for instrumentation, control, modeling, and simulation in order to meet their goals. 1 Thus, there is a strong need to develop communication solutions that enable the secure functioning of advanced control strategies and allow expanded use of data for operational decision making. This is important not only to thwart malicious attacks aimed at inflicting physical damage but also to counter covert attacks designed to introduce minor process manipulation for economic gain. 2

These high-level goals necessitate many important functionalities, e.g., developing measures of trustworthiness of the code and simulation results against unauthorized access; developing measures of scientific confidence in the simulation results by carefully propagating and identifying dominant sources of uncertainties and by early detection of software crashes; and developing strategies to minimize computational resources in terms of memory usage, storage requirements, and CPU time. With these functionalities, however, the computer remains subservient to the programmer. The existing predictive modeling philosophy has generally relied on the programmer to supply explicit instructions: telling the computer how to detect intrusion, keeping log files to track code changes, limiting access via perimeter defenses to prevent unauthorized access, and so on.
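One of those listed functionalities, propagating input uncertainties to build scientific confidence in simulation results, can be illustrated with a minimal Monte Carlo sketch. The model function and parameter distributions below are hypothetical placeholders chosen for illustration only; they are not drawn from the cited work.

```python
import numpy as np

def model(k_inf: np.ndarray, leakage: np.ndarray) -> np.ndarray:
    """Hypothetical response: effective multiplication factor under neutron leakage."""
    return k_inf * (1.0 - leakage)

rng = np.random.default_rng(seed=1)
n = 100_000

# Assumed input uncertainties (illustrative values only)
k_inf = rng.normal(loc=1.05, scale=0.01, size=n)
leakage = rng.normal(loc=0.03, scale=0.005, size=n)

k_eff = model(k_inf, leakage)
print(f"mean k_eff = {k_eff.mean():.4f}, std = {k_eff.std():.4f}")

# Rank the dominant uncertainty sources by their correlation with the output
for name, x in (("k_inf", k_inf), ("leakage", leakage)):
    print(f"corr({name}, k_eff) = {np.corrcoef(x, k_eff)[0, 1]:+.3f}")
```

Even in this toy setting, the sampled output spread and the input-output correlations identify which uncertainty source dominates, which is the kind of confidence measure the paragraph above describes.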

The last decade has witnessed a huge and impressive development of artificial intelligence (AI) algorithms in many scientific disciplines, which has prompted many computational scientists to explore how they can be embedded into predictive modeling applications. The reality, however, is that AI, premised since its inception on emulating human intelligence, is still very far from realizing its goal. Any human-emulating intelligence must be able to achieve two key tasks: the ability to store experiences and the ability to recall and process these experiences at will. Many of the existing AI advances have primarily focused on the latter goal and have accomplished efficient and intelligent data processing. Researchers in adversarial AI have shown over the past decade that any AI technique can be misled if presented with the wrong data. 3 Hence, this paper focuses on introducing a novel predictive paradigm, referred to as covert cognizance, or C2 for short, designed to enable predictive models to develop a secure, incorruptible memory of their execution, representing the first key requirement for a human-emulating intelligence. This memory, or self-cognizance, is key for a predictive model to be effective and resilient in both adversarial and nonadversarial settings. In our context, “memory” does not imply the dynamic or static memory allocated for a software execution; instead, it is a collective record of all its execution characteristics, including run-time information, the output generated in each run, the local variables rendered by each subroutine, etc.
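To make the notion of an execution “memory” concrete, the following sketch, our own simplified illustration and not the C2 algorithm from the paper, accumulates per-run characteristics (run time, outputs, selected local variables) into a hash-chained record, so that any later alteration of an earlier entry breaks the chain and is detectable.

```python
import hashlib
import json
from typing import Any

class ExecutionMemory:
    """Hash-chained record of a model's execution characteristics (illustrative only)."""

    def __init__(self) -> None:
        self.entries: list[dict[str, Any]] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, run_id: int, runtime_s: float,
               outputs: dict, locals_snapshot: dict) -> None:
        """Append one run's characteristics, chained to the previous entry's hash."""
        payload = {
            "run_id": run_id,
            "runtime_s": runtime_s,
            "outputs": outputs,
            "locals": locals_snapshot,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        payload["hash"] = digest
        self.entries.append(payload)
        self._last_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; tampering with any past entry invalidates it."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            body["prev"] = prev
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

This is only a tamper-evident log; the C2 paradigm discussed above goes further by hiding such self-knowledge covertly within the model's own behavior rather than in an external record.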

Cybereason, a Tel Aviv- and Boston-based cybersecurity company providing endpoint prevention, detection, and response, has secured a $50 million investment from Google Cloud, VentureBeat has learned. It extends the series F round that Cybereason announced in July from $275 million to $325 million, making Cybereason one of the best-funded startups in the cybersecurity industry, with over $713 million in capital.

We reached out to a Google Cloud spokesperson, but they didn’t respond by press time.

The infusion of cash comes after Cybereason and Google Cloud entered into a strategic partnership to bring to market a platform — Cybereason XDR, powered by Chronicle — that can ingest and analyze “petabyte-scale” telemetry from endpoints, networks, containers, apps, profiles, and cloud infrastructure. Combining technology from Cybereason, Google Cloud, and Chronicle, the platform scans more than 23 trillion security-related events per week and applies AI to help reveal, mitigate, and predict cyberattacks correlated across devices, users, apps, and cloud deployments.

As many as 130 different ransomware families were found to be active in 2020 and the first half of 2021, with Israel, South Korea, Vietnam, China, Singapore, India, Kazakhstan, the Philippines, Iran, and the U.K. emerging as the most affected territories, a comprehensive analysis of 80 million ransomware-related samples has revealed.

Google’s cybersecurity arm VirusTotal attributed a significant chunk of the activity to the GandCrab ransomware-as-a-service (RaaS) group (78.5%), followed by Babuk (7.61%), Cerber (3.11%), Matsnu (2.63%), Wannacry (2.41%), Congur (1.52%), Locky (1.29%), Teslacrypt (1.12%), Rkor (1.11%), and Reveon (0.70%).

The U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) has identified roughly $5.2 billion worth of outgoing Bitcoin transactions likely tied to the top 10 most commonly reported ransomware variants.

FinCEN identified 177 CVC (convertible virtual currency) wallet addresses used for ransomware-related payments after analyzing 2,184 SARs (Suspicious Activity Reports) filed between January 1, 2011, and June 30, 2021, reflecting $1.56 billion in suspicious activity.

Based on blockchain analysis of transactions tied to the 177 CVC wallets, FinCEN identified roughly $5.2 billion in outgoing BTC transactions potentially tied to ransomware payments.

It sounds like a scene from a spy thriller. An attacker gets through the IT defenses of a nuclear power plant and feeds it fake, realistic data, tricking its computer systems and personnel into thinking operations are normal. The attacker then disrupts the function of key plant machinery, causing it to misperform or break down. By the time system operators realize they’ve been duped, it’s too late, with catastrophic results.

The scenario isn’t fictional; it happened in 2010, when the Stuxnet virus was used to damage nuclear centrifuges in Iran. And as ransomware and other cyberattacks around the world increase, system operators worry more about these sophisticated “false data injection” strikes. In the wrong hands, the computer models and data analytics, based on artificial intelligence, that ensure smooth operation of today’s electric grids, manufacturing facilities, and power plants could be turned against themselves.

Purdue University’s Hany Abdel-Khalik has come up with a powerful response: make the computer models that run these cyberphysical systems both self-aware and self-healing. Using the background noise within these systems’ data streams, Abdel-Khalik and his students embed invisible, ever-changing, one-time-use signals that turn passive components into active watchers. Even if an attacker is armed with a perfect duplicate of a system’s model, any attempt to introduce falsified data will be immediately detected and rejected by the system itself, requiring no human response.
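A drastically simplified sketch of that idea, not Abdel-Khalik’s actual implementation, is shown below: a shared secret derives a fresh low-amplitude pattern for every transmission, the sender hides it inside the measurement noise, and the receiver checks for its presence by correlation. Data injected without knowledge of the current one-time pattern fails the check. All parameter values here are assumptions for illustration.

```python
import numpy as np

NOISE_STD = 0.01   # assumed sensor noise level
MARK_STD = 0.003   # watermark amplitude kept well below the noise floor

def one_time_pattern(secret: int, counter: int, n: int) -> np.ndarray:
    """Derive a fresh zero-mean pattern for each transmission from a shared secret."""
    rng = np.random.default_rng(hash((secret, counter)) & 0xFFFFFFFF)
    return MARK_STD * rng.standard_normal(n)

def send(true_signal: np.ndarray, secret: int, counter: int) -> np.ndarray:
    """Transmit the signal with sensor noise plus the hidden one-time pattern."""
    noise = NOISE_STD * np.random.default_rng().standard_normal(true_signal.size)
    return true_signal + noise + one_time_pattern(secret, counter, true_signal.size)

def looks_authentic(received: np.ndarray, expected_signal: np.ndarray,
                    secret: int, counter: int, threshold: float = 0.15) -> bool:
    """Correlate the measurement residual against the expected one-time pattern."""
    residual = received - expected_signal
    pattern = one_time_pattern(secret, counter, received.size)
    return np.corrcoef(residual, pattern)[0, 1] > threshold

# Demo: genuine data passes, while a forged stream lacking the pattern fails
secret, counter = 42, 7
truth = np.sin(np.linspace(0, 2 * np.pi, 2000))
genuine = send(truth, secret, counter)
forged = truth + NOISE_STD * np.random.default_rng().standard_normal(truth.size)

print("genuine accepted:", looks_authentic(genuine, truth, secret, counter))
print("forged accepted: ", looks_authentic(forged, truth, secret, counter))
```

Because the pattern changes with every transmission and sits beneath the noise floor, an attacker replaying or synthesizing plausible readings cannot reproduce it, which is the intuition behind turning passive data streams into active watchers.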

There are 40.3 million victims of human trafficking globally, according to the International Labor Organization. Marinus Analytics, a startup based in Pittsburgh, Pennsylvania, hopes to make a dent in that number. The company’s mission is to “serve those working on the frontlines of public safety by developing technology for them to disrupt human trafficking, child abuse, and cyber fraud.” For its achievements, Marinus won $500,000 as part of its third-place ranking in the 2021 IBM Watson AI XPRIZE competition. The startup is the brainchild of three co-founders: Cara Jones, Emily Kennedy, and Artur Dubrawski, who launched it out of the Robotics Institute at Carnegie Mellon University in 2014.

Marinus implements its mission primarily through its set of AI-based tools called Traffic Jam, whose goal is “to find missing persons, stop human trafficking and fight organized crime.”

Traditionally, finding a missing person would involve taping a picture of the person to the computer monitor and then manually combing through thousands, if not millions, of online ads on adult services websites to see whether any of the posted pictures match. Such a process is time-consuming and tiring, and a human detective’s attention can start to flag after long hours at the computer doing the same task endlessly.
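To give a sense of what automating that search involves, here is a minimal sketch, our own illustration rather than Marinus Analytics’ Traffic Jam technology, that compares a reference photo against a folder of scraped ad images using a simple average-hash similarity measure. The file names and threshold are hypothetical.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """Downscale to grayscale and threshold against the mean: a crude perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float64)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of fingerprint bits that differ between two images."""
    return int(np.count_nonzero(a != b))

def find_matches(reference: str, ad_folder: str,
                 max_distance: int = 10) -> list[tuple[str, int]]:
    """Return ad images whose fingerprint is within max_distance bits of the reference photo."""
    ref_hash = average_hash(reference)
    matches = []
    for ad in Path(ad_folder).glob("*.jpg"):
        d = hamming_distance(ref_hash, average_hash(str(ad)))
        if d <= max_distance:
            matches.append((str(ad), d))
    return sorted(matches, key=lambda m: m[1])

# Example usage (hypothetical paths):
# print(find_matches("missing_person.jpg", "scraped_ads/"))
```

A production system would rely on far more robust face recognition and large-scale indexing, but even this toy filter shows how a machine can triage millions of images in the time a human would need for a handful.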