
Every time a human or machine learns to get better at a task, a trail of evidence is left behind. A sequence of physical changes, to cells in a brain or to numerical values in an algorithm, underlies the improved performance. But how the system figures out exactly which changes to make is no small feat. This is the credit assignment problem: a brain or artificial intelligence system must pinpoint which pieces of its pipeline are responsible for its errors and then make the necessary corrections. Put more simply, it's a blame game to find out who's at fault.

AI engineers solved the credit assignment problem for machines with a powerful algorithm called backpropagation, popularized in 1986 with the work of Geoffrey Hinton, David Rumelhart and Ronald Williams. It’s now the workhorse that powers learning in the most successful AI systems, known as deep neural networks, which have hidden layers of artificial “neurons” between their input and output layers. And now, in a paper published in Nature Neuroscience in May, scientists may finally have found an equivalent for living brains that could work in real time.
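The mechanics of backpropagation can be sketched in a few lines. Below is a minimal, illustrative example: a tiny two-layer network learning XOR, a task that is impossible without a hidden layer. All names and hyperparameters here are invented for illustration and are not taken from any system discussed in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which cannot be learned without a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units between input and output.
W1 = rng.normal(0.0, 1.0, (2, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(10000):
    # Forward pass: compute the network's current answer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: the chain rule assigns each weight its share
    # of the blame for the output error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * (h.T @ d_out)
    W1 -= 1.0 * (X.T @ d_h)
```

The backward pass is the credit assignment step: the error at the output is propagated through `W2` to compute how much each hidden unit, and in turn each weight, contributed to the mistake.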

A team of researchers led by Richard Naud of the University of Ottawa and Blake Richards of McGill University and the Mila AI Institute in Quebec revealed a new model of the brain’s learning algorithm that can mimic the backpropagation process. It appears so realistic that experimental neuroscientists have taken notice and are now interested in studying real neurons to find out whether the brain is actually doing it.



After SpaceX signed a deal with Varda Space Industries, Elon Musk's next plan could be to revolutionize manufacturing in space.



It even grasps common-sense reasoning.

Nvidia and Microsoft revealed their largest and most powerful monolithic transformer language model trained to date: Megatron-Turing Natural Language Generation (MT-NLG), with a staggering 530 billion parameters, according to a press release.

MT-NLG outperforms the two companies' prior transformer-based systems: it is substantially larger and more complex than Microsoft's Turing-NLG and Nvidia's Megatron-LM, with three times as many parameters, spread across 105 layers.
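A back-of-envelope check on that parameter count: a decoder-only transformer carries roughly 12·d² weights per layer (attention projections plus a 4x feed-forward expansion). Taking the 105 layers from the release and assuming the widely cited hidden size of 20,480 (an assumption, not stated above):

```python
# Rough parameter count for a decoder-only transformer:
# ~4*d^2 attention weights plus ~8*d^2 feed-forward weights
# (with a 4x expansion) gives ~12*d^2 per layer.
def approx_params(layers: int, d_model: int) -> float:
    return 12 * layers * d_model ** 2

# 105 layers; hidden size 20480 is assumed here.
billions = approx_params(105, 20480) / 1e9
print(f"~{billions:.0f}B parameters")  # close to the reported 530B
```

The estimate lands within a few billion of the advertised figure; the remainder comes from embeddings and other terms the rough formula ignores.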

And this is not the only industry that AI is working its magic on.

Online shopping platforms and stores have long employed smart techniques to thrive. From aggressive advertising to strategic partnerships, they have been highly successful at attracting customers and growing their profits. It is no surprise, then, that many of these businesses already use artificial intelligence in various forms.

Based on numbers from Statista, the global AI market in the retail industry was valued at $3.9 billion in 2020 and is expected to grow to $5.06 billion in 2021 and $6.55 billion in 2022. It is projected to accelerate its growth to become a $23.32 billion market by 2027.
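Those figures imply steep compound growth. A quick check of the implied annual rates, computed from the Statista numbers quoted above:

```python
# Implied compound annual growth rate (CAGR) between the quoted
# Statista figures (all in billions of USD).
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

near_term = cagr(3.90, 5.06, 1)    # 2020 -> 2021
projected = cagr(6.55, 23.32, 5)   # 2022 -> 2027
print(f"{near_term:.0%} near term, {projected:.0%} projected")
```

The 2022-to-2027 projection works out to roughly 29% per year, in line with the near-term jump from 2020 to 2021.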

There are a lot of movies and TV shows that depict a mass control takeover of self-driving cars.

This seems to be on our minds, and for quite good reason.

If a malicious evildoer were somehow able to take command of autonomous vehicles (AVs) such as self-driving cars, the outcome could be disastrous; that almost goes without saying. The usual portrayal in films is that the villain has the cars crash into each other, and that's just for starters. The self-driving cars are rammed into anything that isn't nailed down, and, by gosh, they steer into and collide with plenty of things that are nailed down too.

Is artificial intelligence the next evolution for humans? Let's dive into the latest news and updates from Neuralink in 2021 and how they may blur the line between human and robot.

(2021). Nuclear Technology, Vol. 207, No. 8, pp. 1163–1181.


Focusing on nuclear engineering applications, the nation's leading cybersecurity programs are developing digital solutions to support reactor control for both on-site and remote operation. Many of the advanced reactor technologies currently under development by the nuclear industry, such as small modular reactors and microreactors, require secure architectures for instrumentation, control, modeling, and simulation in order to meet their goals.1 Thus, there is a strong need for communication solutions that enable the secure function of advanced control strategies and allow for an expanded use of data in operational decision making. This is important not only to avoid malicious attack scenarios focused on inflicting physical damage but also to thwart covert attacks designed to introduce minor process manipulation for economic gain.2

These high-level goals necessitate many important functionalities, e.g., developing measures of the trustworthiness of the code and simulation results against unauthorized access; developing measures of scientific confidence in the simulation results by propagating and identifying dominant sources of uncertainty and by detecting software crashes early; and developing strategies to minimize computational resources in terms of memory usage, storage requirements, and CPU time. Even with these functionalities, however, the computer remains subservient to the programmer: the existing predictive modeling philosophy relies on the programmer's ability to detect intrusion via specific instructions telling the computer how to detect intrusion, log files that track code changes, perimeter defenses that limit access to prevent unauthorized entry, and so on.

The last decade has witnessed a huge and impressive development of artificial intelligence (AI) algorithms across many scientific disciplines, which has prompted many computational scientists to explore how they can be embedded into predictive modeling applications. The reality, however, is that AI, premised since its inception on emulating human intelligence, is still very far from realizing that goal. Any human-emulating intelligence must master two key tasks: storing experiences, and recalling and processing those experiences at will. Most existing AI advances have focused on the latter and have accomplished efficient and intelligent data processing. Researchers on adversarial AI have shown over the past decade that any AI technique can be misled if presented with the wrong data.3 Hence, this paper introduces a novel predictive paradigm, referred to as covert cognizance, or C2 for short, designed to enable predictive models to develop a secure, incorruptible memory of their execution, representing the first key requirement for a human-emulating intelligence. This memory, or self-cognizance, is key for a predictive model to be effective and resilient in both adversarial and nonadversarial settings. In our context, "memory" does not mean the dynamic or static memory allocated for a software execution; rather, it is a collective record of all execution characteristics, including run-time information, the output generated in each run, the local variables rendered by each subroutine, etc.
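To make the notion of an incorruptible execution record concrete, here is a toy tamper-evident log built as a hash chain. This is a generic illustration of the idea only, not the paper's C2 mechanism (which hides information in a system's nonobservable space); all names are invented.

```python
import hashlib
import json

# Toy tamper-evident execution log: each entry's hash also covers the
# previous entry's hash, so altering any past record breaks the chain.
chain = []

def record(entry: dict) -> None:
    prev = chain[-1]["hash"] if chain else ""
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"entry": entry, "hash": digest})

def verify() -> bool:
    prev = ""
    for rec in chain:
        payload = json.dumps(rec["entry"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

record({"run": 1, "output": 0.97})   # hypothetical run characteristics
record({"run": 2, "output": 1.02})
assert verify()

chain[0]["entry"]["output"] = 9.99   # tamper with the past...
assert not verify()                  # ...and the chain no longer verifies
```

The point of the sketch is the property, not the mechanism: any retroactive edit to the record is detectable, which is the behavior a "secure incorruptible memory" of execution must provide.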

(2021). Nuclear Science and Engineering, Vol. 195, No. 9, pp. 977–989.


Earlier work has demonstrated the theoretical development of covert operational technology (OT) defenses and their application to representative control problems in a nuclear reactor. Given its ability to store information in the system's nonobservable space using one-time-pad randomization techniques, the new C2 modeling paradigm has emerged,6 allowing the system to build memory, or self-awareness, about its past and current state. The idea is to use randomized mathematical operators to store information about one system subcomponent, e.g., the reactor core inlet and exit temperature, in the nonobservable space of another subcomponent, e.g., the water level in a steam generator, creating an incorruptible record of the system state. If attackers try to falsify sensor data to send the system along an undesirable trajectory, they must first learn all the inserted signatures across the various system subcomponents as well as the C2 embedding process.

We posit that this is extremely unlikely given the huge size of the nonobservable space for most complex systems and the use of randomized techniques for signature insertion, rendering a level of security that matches the Vernam-cipher gold standard. The Vernam cipher, commonly known as a one-time pad, encrypts a message using a random key (pad) and can only be decrypted using that key. Its strength derives from Shannon's notion of perfect secrecy8 and requires the key to be truly random and nonreusable (one time). To demonstrate this, the paper validates the implementation of C2 using sophisticated AI tools, namely, long short-term memory (LSTM) neural networks9 and the generative adversarial network (GAN) framework,10 both in a supervised learning setting, i.e., by assuming that during training the AI can distinguish between original data and data containing the embedded signatures. While this scenario is unlikely in practice, it is assumed here to demonstrate the resilience of the C2 signatures to discovery by AI techniques.

The paper is organized as follows. Section II provides a brief summary of existing passive and active OT defenses against various types of data deception attacks, followed by an overview of the C2 modeling paradigm in Sec. III. Section IV formulates the problem statement of the C2 implementation in a generalized control system and identifies the key criteria of zero impact and zero observability. Section V implements a rendition of the C2 approach in a representative nuclear reactor model and highlights the goal of the paper, i.e., to validate the implementation using sophisticated AI tools. It also provides a rationale behind the chosen AI framework. Last, Sec. VI summarizes the validation results of the C2 implementation and discusses several extensions to the work.