
Dr. David Markowitz, PhD — IARPA — High-Risk, High-Payoff Research For National Security Challenges



Dr. David A. Markowitz (https://www.markowitz.bio/) is a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA — https://www.iarpa.gov/), an organization that invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges facing the agencies and disciplines of the U.S. Intelligence Community (IC).

IARPA’s mission is to push the boundaries of science to develop solutions that empower the U.S. IC to do its work better and more efficiently for national security. IARPA does not have an operational mission and does not deploy technologies directly to the field; instead, it facilitates the transition of research results to IC customers for operational application.

Currently, Dr. Markowitz leads three research programs at the intersection of biology, engineering, and computing. These programs are: FELIX, which is revolutionizing the field of bio-surveillance with new experimental and computational tools for detecting genetic engineering in complex biological samples; MIST, which is developing compact and inexpensive DNA data storage devices to address rapidly growing enterprise storage needs; and MICrONS, which is guiding the development of next-generation machine learning algorithms by reverse-engineering the computations performed by the mammalian neocortex.

Previously, as a researcher in neuroscience, Dr. Markowitz published first-author papers on neural computation, the neural circuit basis of cognition in primates, and neural decoding strategies for brain-machine interfaces.

Researchers at MIT Solve a Differential Equation Behind the Interaction of Two Neurons Through Synapses to Unlock a New Type of Fast and Efficient AI Algorithm

Continuous-time neural networks are one subset of machine learning systems capable of representation learning for spatiotemporal decision-making tasks. These models are frequently described by continuous differential equations (DEs). When run on computers, however, numerical DE solvers limit their expressive potential. This restriction has severely hampered the scaling and understanding of many natural physical processes, such as the dynamics of neural systems.
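To make the solver cost concrete, here is a minimal sketch (not the MIT model) of a continuous-time neuron modelled as an assumed leaky ODE, dx/dt = -x/tau + S(t), integrated with a fixed-step forward Euler solver. Note how many small steps the numerical solver must compute to advance the state just a few seconds:

```python
def euler_neuron(stimulus, tau=1.0, x0=0.0, t_end=5.0, dt=0.001):
    """Integrate dx/dt = -x/tau + stimulus(t) with forward Euler."""
    steps = int(t_end / dt)
    x = x0
    for k in range(steps):
        t = k * dt
        # each step advances the state by a tiny increment dt
        x += dt * (-x / tau + stimulus(t))
    return x, steps

# Constant input 2.0 drives the state toward tau * 2.0 = 2.0.
x_final, n_steps = euler_neuron(stimulus=lambda t: 2.0)
print(f"state after 5s: {x_final:.3f} ({n_steps} solver steps)")
```

Five simulated seconds already cost 5,000 solver steps per neuron; with thousands of neurons and synapses, this is the computational bottleneck the article describes.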

Inspired by the brains of microscopic creatures, MIT researchers have developed “liquid” neural networks, a fluid, robust ML model that can learn and adapt to changing situations. These methods can be used in safety-critical tasks such as driving and flying.

However, as the number of neurons and synapses in the model grows, the underlying mathematics becomes more difficult to solve, and the processing cost of the model rises.

Why This Breakthrough AI Now Runs A Nuclear Fusion Reactor | New AI Supercomputer

Nuclear fusion researchers have created a machine learning algorithm to detect and track plasma blobs that build up inside the tokamak, enabling prediction of plasma disruption, diagnosis of plasma via spectroscopy and tomography, and tracking of turbulence inside the fusion reactor. Cerebras has built a new AI supercomputer with over 13.5 million processor cores and over 1 exaflop of compute power. A new study reveals an innovative neuro-computational model of the human brain which could lead to the creation of conscious AI or artificial general intelligence (AGI).

AI News Timestamps:
0:00 Breakthrough AI Runs A Nuclear Fusion Reactor.
3:07 New AI Supercomputer.
6:19 New Brain Model For Conscious AI


Using game theory mathematics to resolve human conflicts

Some hope it could even help achieve world peace ✌️. The approach builds on John Nash’s theory of equilibrium.


Game theory mathematics is used to predict outcomes in conflict situations. Now it is being adapted through big data to resolve highly contentious issues between people and the environment.

Game theory is a mathematical concept that aims to predict outcomes and solutions to an issue in which parties with conflicting, overlapping or mixed interests interact.

In “theory,” the “game” will bring everyone towards an optimal solution or “equilibrium.” It promises a scientific approach to understanding how people make decisions and reach compromises in real-world situations.
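The equilibrium idea can be made concrete with a small sketch: in a two-player game, an outcome is a (pure-strategy) Nash equilibrium when neither party can gain by deviating unilaterally. The payoffs below are the classic Prisoner’s Dilemma, chosen purely as an illustration:

```python
# payoffs[(row, col)] = (row player's utility, col player's utility)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(row, col):
    """(row, col) is Nash if neither player gains by deviating unilaterally."""
    u_row, u_col = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= u_row for r in actions)
    col_ok = all(payoffs[(row, c)][1] <= u_col for c in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # the unique pure equilibrium: both defect
```

The mutual best-response check here is the "game" reaching "equilibrium" in miniature; the conflict-resolution work described above scales this reasoning up with big data over many parties and mixed interests.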

Solving brain dynamics gives rise to flexible machine-learning models

It’s why we should reverse-engineer lab rat brains, crow brains, pig brains, and chimp brains, ending with fully reverse-engineering the human brain. Even if it’s a hassle, I still think it could all be done by the end of 2025.


Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting the bloodline to our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

But these models become computationally expensive as their number of neurons and synapses increases, and they require clunky computer programs to solve their underlying, complicated math. And all of this math, like many physical phenomena, becomes harder to solve with size, meaning computing lots of small steps to arrive at a solution.

Now, the same team of scientists has discovered a way to alleviate this by solving the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics of liquid neural nets — flexible, causal, robust, and explainable — but are orders of magnitude faster and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as they’re compact and adaptable even after training, while many traditional models are fixed.

MIT reveals a new type of faster AI algorithm for solving a complex equation

Researchers solved a differential equation behind the interaction of two neurons through synapses, creating a faster AI algorithm.

Artificial intelligence uses a technique called artificial neural networks (ANN) to mimic the way a human brain works. A neural network uses input from datasets to “learn” and output its prediction based on the given information.
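The "learn from input, output a prediction" loop can be shown with a toy example (unrelated to the MIT model): a single sigmoid neuron trained by gradient descent to reproduce the AND function from a four-row dataset.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# dataset: ((input1, input2), target) -- the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.5

for _ in range(5000):
    for (x1, x2), target in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - target  # cross-entropy gradient for a sigmoid output
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # learned AND: [0, 0, 0, 1]
```

Real networks stack thousands of such neurons, but the principle is the same: weights are nudged until the outputs match the training data.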


Recently, researchers from the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) discovered a quicker way to solve an equation used in the algorithms for ‘liquid’ neural networks.

MIT solved a century-old differential equation to break ‘liquid’ AI’s computational bottleneck

Last year, MIT developed an AI/ML algorithm capable of learning and adapting to new information while on the job, not just during its initial training phase. These “liquid” neural networks (in the Bruce Lee sense) literally play 4D chess — their models requiring time-series data to operate — which makes them ideal for use in time-sensitive tasks like pacemaker monitoring, weather forecasting, investment forecasting, or autonomous vehicle navigation. But, the problem is that data throughput has become a bottleneck, and scaling these systems has become prohibitively expensive, computationally speaking.

On Tuesday, MIT researchers announced that they have devised a solution to that restriction, not by widening the data pipeline but by solving a differential equation that has stumped mathematicians since 1907. Specifically, the team solved, “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithms.”

“The new machine learning models we call ‘CfC’s’ [closed-form Continuous-time] replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” MIT professor and CSAIL Director Daniela Rus said in a Tuesday press statement. “CfC models are causal, compact, explainable, and efficient to train and predict. They open the way to trustworthy machine learning for safety-critical applications.”

Closed-form continuous-time neural networks

Physical dynamical processes can be modelled with differential equations that may be solved with numerical approaches, but this is computationally costly as the processes grow in complexity. In a new approach, dynamical processes are modelled with closed-form continuous-depth artificial neural networks. Improved efficiency in training and inference is demonstrated on various sequence modelling tasks including human action recognition and steering in autonomous driving.
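A simplified illustration of the closed-form idea (not the published CfC equations, which use learned gates): for a leaky neuron dx/dt = -x/tau + I with constant input I, the ODE has an exact solution, so the state at any time t is one formula instead of thousands of solver steps.

```python
import math

def euler_solve(x0, I, tau, t, dt=0.001):
    """Numerical route: step the ODE forward in tiny increments."""
    x = x0
    for _ in range(int(t / dt)):
        x += dt * (-x / tau + I)
    return x

def closed_form(x0, I, tau, t):
    """Closed-form route: x(t) = I*tau + (x0 - I*tau) * exp(-t/tau)."""
    return I * tau + (x0 - I * tau) * math.exp(-t / tau)

x_numeric = euler_solve(x0=0.0, I=2.0, tau=1.0, t=5.0)  # 5000 steps
x_exact = closed_form(x0=0.0, I=2.0, tau=1.0, t=5.0)    # 1 evaluation
print(f"Euler: {x_numeric:.4f}  closed form: {x_exact:.4f}")
```

Replacing per-step numerical integration with a single expression evaluation is, in spirit, what gives CfC models their speed advantage over solver-based liquid networks.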

Computer scientists succeed in solving algorithmic riddle from the 1950s

For more than half a century, researchers around the world have been struggling with an algorithmic problem known as “the single source shortest path problem.” The problem is essentially about how to devise a mathematical recipe that best finds the shortest route between a node and all other nodes in a network, where there may be connections with negative weights.

Sound complicated? Possibly. But in fact, this type of calculation is already used in a wide range of the apps and technologies that we depend upon for finding our ways around—as Google Maps guides us across landscapes and through cities, for example.

Now, researchers from the University of Copenhagen’s Department of Computer Science have succeeded in solving the single source shortest path problem, a riddle that has stumped researchers and experts for decades.
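For context, the classic textbook solution to this problem is the Bellman-Ford algorithm, which handles negative edge weights but needs O(V·E) time; the Copenhagen result concerns far faster, near-linear algorithms for the same task. This sketch shows the problem itself on a small graph:

```python
INF = float("inf")

def bellman_ford(n, edges, source):
    """Shortest distances from source over edges (u, v, weight),
    allowing negative weights (but no negative cycles)."""
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):  # relaxing every edge n-1 times suffices
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# A graph with a negative edge (1 -> 2, weight -5) but no negative cycle.
edges = [(0, 1, 4), (0, 2, 3), (1, 2, -5), (2, 3, 2)]
print(bellman_ford(4, edges, source=0))  # [0, 4, -1, 1]
```

Note that the direct route 0 → 2 costs 3, but the detour through node 1 costs 4 − 5 = −1; negative weights are exactly what makes the problem hard for faster algorithms like Dijkstra's.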
