
Deep Learning AI Specialization: https://imp.i384100.net/GET-STARTED
Nuclear fusion researchers have created a machine learning algorithm that detects and tracks the plasma blobs that build up inside a tokamak, supporting the prediction of plasma disruptions, the diagnosis of plasma via spectroscopy and tomography, and the tracking of turbulence inside the fusion reactor. Cerebras has built a new AI supercomputer with over 13.5 million processor cores and over 1 exaflop of compute power. And a new study presents an innovative neuro-computational model of the human brain that could lead to the creation of conscious AI or artificial general intelligence (AGI).

AI News Timestamps:
0:00 Breakthrough AI Runs A Nuclear Fusion Reactor.
3:07 New AI Supercomputer.
6:19 New Brain Model For Conscious AI.

#ai #ml #nuclear

This could even help achieve world peace ✌️ via John Nash's equilibrium theory.


Game theory mathematics is used to predict outcomes in conflict situations. Now it is being adapted through big data to resolve highly contentious issues between people and the environment.

Game theory is a mathematical concept that aims to predict outcomes and solutions to an issue in which parties with conflicting, overlapping or mixed interests interact.

In “theory,” the “game” will bring everyone towards an optimal solution or “equilibrium.” It promises a scientific approach to understanding how people make decisions and reach compromises in real-world situations.
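The idea of an equilibrium can be made concrete with a tiny worked example. The sketch below (payoffs and names are illustrative, not from the source) checks every outcome of a classic two-player game for a pure-strategy Nash equilibrium: an outcome where neither party can improve by unilaterally changing their own choice.

```python
# A minimal sketch: finding pure-strategy Nash equilibria of a 2x2 game
# (here the classic Prisoner's Dilemma; the payoff numbers are illustrative).
# An outcome is an equilibrium when neither player can do better by
# unilaterally switching their own strategy.

ACTIONS = ["cooperate", "defect"]

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash_equilibrium(row, col):
    row_payoff, col_payoff = payoffs[(row, col)]
    # Best payoff the row player could get by deviating, column fixed
    best_row = max(payoffs[(r, col)][0] for r in ACTIONS)
    # Best payoff the column player could get by deviating, row fixed
    best_col = max(payoffs[(row, c)][1] for c in ACTIONS)
    return row_payoff == best_row and col_payoff == best_col

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS
              if is_nash_equilibrium(r, c)]
print(equilibria)  # [('defect', 'defect')]
```

Note that the equilibrium here, mutual defection, is worse for both players than mutual cooperation, which is exactly why game theory is useful for analysing contentious real-world standoffs.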

It's why we should reverse engineer lab rat brains, crow brains, pig brains, and chimp brains, ending with fully reverse engineering the human brain, even if it's a hassle. I still think it could all be done by the end of 2025.


Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting the bloodline to our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

But these models become computationally expensive as their number of neurons and synapses increases, and they require clunky computer programs to solve their underlying, complicated math. And all of this math, like many physical systems, becomes harder to solve with size, meaning lots of small steps must be computed to arrive at a solution.

Now, the same team of scientists has discovered a way to alleviate this by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics as liquid neural nets—flexible, causal, robust, and explainable—but are orders of magnitude faster, and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as they're compact and adaptable even after training—while many traditional models are fixed.

Researchers solved a differential equation behind the interaction of two neurons through synapses, creating a faster AI algorithm.

Artificial intelligence uses a technique called artificial neural networks (ANN) to mimic the way a human brain works. A neural network uses input from datasets to “learn” and output its prediction based on the given information.
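The "learn from input, output a prediction" loop can be shown with the smallest possible neural network: a single artificial neuron (perceptron). Everything below is illustrative and not from any specific framework; the neuron is trained to reproduce the logical OR function by nudging its weights whenever its prediction is wrong.

```python
# A minimal sketch of "learning from data": a single artificial neuron
# (perceptron) trained on the logical OR function. Weights, learning
# rate, and epoch count are illustrative.

def step(x):
    # Threshold activation: fire (1) if the weighted sum is positive
    return 1 if x > 0 else 0

# Training data: inputs and desired outputs for OR
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), target in data:
        pred = step(w[0] * x1 + w[1] * x2 + b)
        error = target - pred
        # Nudge weights and bias in the direction that reduces the error
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # [0, 1, 1, 1]
```

Modern networks stack millions of such neurons and use smoother update rules (gradient descent through backpropagation), but the principle is the same: compare the output to the data, then adjust the weights.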


Last year, MIT developed an AI/ML algorithm capable of learning and adapting to new information while on the job, not just during its initial training phase. These "liquid" neural networks (in the Bruce Lee sense) literally play 4D chess — their models require time-series data to operate — which makes them ideal for use in time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting, or autonomous vehicle navigation. But the problem is that data throughput has become a bottleneck, and scaling these systems has become prohibitively expensive, computationally speaking.

On Tuesday, MIT researchers announced that they have devised a solution to that restriction, not by widening the data pipeline but by solving a differential equation that has stumped mathematicians since 1907. Specifically, the team solved, “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.”

“The new machine learning models we call ‘CfC’s’ [closed-form Continuous-time] replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL Director Daniela Rus said in a Tuesday press statement. “CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications.”

Physical dynamical processes can be modelled with differential equations that may be solved with numerical approaches, but this is computationally costly as the processes grow in complexity. In a new approach, dynamical processes are modelled with closed-form continuous-depth artificial neural networks. Improved efficiency in training and inference is demonstrated on various sequence modelling tasks including human action recognition and steering in autonomous driving.
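The efficiency gain from a closed-form solution can be illustrated on a toy dynamical system (this is an analogy for the CfC idea, not the actual CfC equations). A numerical solver must march through many small time steps to integrate a differential equation, while a closed-form solution evaluates the state at any time in a single expression.

```python
# Illustrative comparison (not the actual CfC model): the leaky decay
# dx/dt = -x / tau. A numerical solver steps through time; the
# closed-form solution x(t) = x0 * exp(-t / tau) needs no stepping.

import math

x0, tau, t_end = 1.0, 0.5, 2.0

# Numerical route: explicit Euler integration with many tiny steps
steps = 10_000
dt = t_end / steps
x = x0
for _ in range(steps):
    x += dt * (-x / tau)   # one small Euler step

# Closed-form route: a single expression, evaluated once
x_closed = x0 * math.exp(-t_end / tau)

print(round(x, 4), round(x_closed, 4))  # both ≈ 0.0183
```

The two answers agree, but the numerical route cost ten thousand updates where the closed form cost one. Replacing per-step integration with a closed-form approximation of the neuron's dynamics is, loosely, what makes CfC models so much faster to train and run.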

For more than half a century, researchers around the world have been struggling with an algorithmic problem known as “the single source shortest path problem.” The problem is essentially about how to devise a mathematical recipe that best finds the shortest route between a node and all other nodes in a network, where there may be connections with negative weights.

Sound complicated? Possibly. But in fact, this type of calculation is already used in a wide range of the apps and technologies that we depend upon for finding our way around—as Google Maps guides us across landscapes and through cities, for example.

Now, researchers from the University of Copenhagen's Department of Computer Science have succeeded in solving the single source shortest path problem, a riddle that has stumped researchers and experts for decades.
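To make the problem concrete, here is the textbook baseline that the Copenhagen work improves upon: the Bellman-Ford algorithm, which handles negative edge weights where Dijkstra's algorithm cannot. The graph below is illustrative; the sketch is the classic algorithm, not the new faster one.

```python
# A minimal sketch of single-source shortest paths with negative edge
# weights, using the classic Bellman-Ford algorithm. The example graph
# is illustrative.

import math

def bellman_ford(n, edges, source):
    """n nodes labelled 0..n-1; edges is a list of (u, v, weight)."""
    dist = [math.inf] * n
    dist[source] = 0
    # Relaxing every edge n-1 times is sufficient when no negative
    # cycle is reachable from the source.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 4), (0, 2, 2), (1, 2, -3), (2, 3, 5), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```

Note how the negative edge (1, 2, -3) makes the cheapest route to node 2 go *through* node 1 rather than directly from the source—exactly the kind of case that makes negative weights hard and that faster algorithms for this problem must still handle.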

Artificial Intelligence (AI) is rapidly changing the world. Daily advances in AI capabilities have led to a number of innovations, including autonomous vehicles, self-flying aircraft, robotics, and more. Some AI technologies can predict future events and support accurate decision-making. AI is the best friend of technology leaders who want to make the world a better place through unfolding inventions.

Whether humans agree or not, AI developments are slowly impacting all aspects of society, including the economy. However, some technologies may also bring challenges and risks to the working environment. To keep track of AI development, good leaders head the AI world to ensure trust, reliability, safety, and accuracy.

Intelligent behaviour has long been considered a uniquely human attribute. But as computer science and IT networks evolved, artificial intelligence, and the people who championed it, came into the spotlight. AI in today's world is both developing and under control. Without a transformation here, AI will never fully solve the problems and dilemmas of business with data and algorithms alone. Wise leaders do not only create and capture vital economic value; they also build more sustainable and legitimate organisations. Leaders in AI sectors have eyes to see AI decisions and ears to hear employees' perspectives.

“You can think of curiosity as a kind of reward which the agent generates internally on its own, so that it can go explore more about its world,” Agrawal said. This internally generated reward signal is known in cognitive psychology as “intrinsic motivation.” The feeling you may have vicariously experienced while reading the game-play description above — an urge to reveal more of whatever’s waiting just out of sight, or just beyond your reach, just to see what happens — that’s intrinsic motivation.

Humans also respond to extrinsic motivations, which originate in the environment. Examples of these include everything from the salary you receive at work to a demand delivered at gunpoint. Computer scientists apply a similar approach called reinforcement learning to train their algorithms: The software gets “points” when it performs a desired task, while penalties follow unwanted behavior.