
The phrase “positive reinforcement” is something you hear more often in an article about child rearing than in one about artificial intelligence. But according to Alice Parker, Dean’s Professor of Electrical Engineering in the Ming Hsieh Department of Electrical and Computer Engineering, a little positive reinforcement is just what our AI machines need. Parker has spent over a decade building electronic circuits that reverse-engineer the human brain, both to better understand how it works and ultimately to build artificial systems that mimic it. Her most recent paper, co-authored with Ph.D. student Kun Yue and colleagues from UC Riverside, was just published in the journal Science Advances and takes an important step toward that goal.

The AI we rely on and read about today is modeled on traditional computers; it sees the world through the lens of binary zeros and ones. This is fine for making complex calculations but, according to Parker and Yue, we are quickly approaching the limits of the size and complexity of problems we can solve with the platforms our AI runs on. “Since the initial deep learning revolution, the goals and progress of deep-learning based AI as we know it has been very slow,” Yue says. To reach its full potential, AI can’t simply think better; it must react and learn on its own to events as they happen. And for that to happen, we must fundamentally rethink how we build AI in the first place.

To address this problem, Parker and her colleagues are looking to the most accomplished learning system nature has ever created: the human brain. This is where analog computing comes into play. Brains, unlike computers, are analog learners, and biological memory has persistence. Analog signals can have multiple states (much like humans). While a binary AI built with similar types of nanotechnologies to achieve long-lasting memory might only be able to classify something as good or bad, an analog brain can understand more deeply that a situation might be “very good,” “just okay,” “bad,” or “very bad.” This field is called neuromorphic computing, and it may just represent the future of artificial intelligence.
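To make the analog-versus-binary contrast concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the classic model behind many neuromorphic circuits. All parameters and names here are illustrative, not taken from Parker's circuits: the point is that the membrane potential takes a continuum of values rather than just 0 or 1.

```python
# A leaky integrate-and-fire neuron: the membrane potential "v" is an
# analog quantity that gradually integrates input and leaks charge away,
# firing a discrete spike only when it crosses a threshold.

def simulate_lif(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Return (potentials, spike_times) for a stream of input currents."""
    v = 0.0
    potentials, spikes = [], []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in * dt   # integrate input, leak some charge
        if v >= threshold:         # fire when the threshold is crossed...
            spikes.append(t)
            v = 0.0                # ...then reset, like a real neuron
        potentials.append(v)
    return potentials, spikes

# Weak inputs accumulate gradually; the neuron spikes only at step 4,
# once the stronger input pushes the potential over threshold.
potentials, spikes = simulate_lif([0.3, 0.3, 0.3, 0.0, 0.6, 0.6])
```

Between spikes the potential moves through graded intermediate states, which is exactly the kind of "very good / just okay / bad" nuance the passage describes and which a single binary gate cannot represent.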

A team of chemists has built the first artificial molecular assembler, which uses light as its energy source. These molecular machines perform synthesis in a similar way to biological nanomachines. The advantages are fewer side products, enantioselectivity, and shorter synthetic pathways, since mechanosynthesis forces the molecules into a predefined reaction channel.

Chemists usually synthesize molecules using stochastic bond-forming collisions of the reactant molecules in solution. Nature follows a different strategy in biochemical synthesis. The majority of biochemical reactions are driven by machine-type protein complexes that bind and position the reactive molecules for selective transformations. Artificial “molecular assemblers” performing “mechanosynthesis” have been proposed as a new paradigm in chemistry and nanofabrication. A team of chemists at Kiel University (Germany) built the first artificial assembler that performs synthesis and uses light as the energy source. The system combines selective binding of the reactants, accurate positioning, and active release of the product. The scientists published their findings in the journal Communications Chemistry.

The idea of molecular assemblers able to build molecules was proposed as early as 1986 by K. Eric Drexler, based on ideas of Richard Feynman, Nobel Laureate in Physics. In his book “Engines of Creation: The Coming Era of Nanotechnology” and follow-up publications, Drexler proposed molecular machines capable of positioning reactive molecules with atomic precision and of building larger, more sophisticated structures via mechanosynthesis. If such a molecular nanobot could build any molecule, it could certainly build another copy of itself, i.e., it could self-replicate. These imaginative visions inspired a number of science fiction authors but also started an intensive scientific controversy.

A neuromorphic computer that can simulate 8 million neurons is in the news. The term “neuromorphic” suggests a design that mimics the human brain. And neuromorphic computing? It is described as the use of very-large-scale integration systems with analog electric circuits that imitate the neuro-biological architectures of the nervous system.

This is where Intel steps in, and significantly so. The Loihi chip applies principles found in biological brains to computer architectures. The payoff for users is that information can be processed up to 1,000 times faster and 10,000 times more efficiently than on CPUs for specialized applications, e.g., sparse coding, graph search, and constraint-satisfaction problems.
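For a sense of what a constraint-satisfaction problem looks like, here is a conventional backtracking solver for a toy instance (graph coloring). This is a CPU-style baseline of the kind Loihi's spiking dynamics aim to outperform, not Intel's implementation; all names and the example graph are illustrative.

```python
# Backtracking search for graph coloring: assign a color to each node so
# that no two neighboring nodes share a color. On a CPU this explores
# candidate assignments sequentially, undoing choices that lead nowhere.

def color_graph(neighbors, colors):
    """Return a {node: color} assignment satisfying all constraints, or None."""
    assignment = {}

    def consistent(node, color):
        # A color is allowed only if no already-colored neighbor uses it.
        return all(assignment.get(n) != color for n in neighbors[node])

    def backtrack(nodes):
        if not nodes:
            return True
        node, rest = nodes[0], nodes[1:]
        for color in colors:
            if consistent(node, color):
                assignment[node] = color
                if backtrack(rest):
                    return True
                del assignment[node]   # undo and try the next color
        return False

    return assignment if backtrack(list(neighbors)) else None

# A triangle graph: every node borders the other two, so three colors
# are required and each node must end up with a distinct one.
triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
solution = color_graph(triangle, ["red", "green", "blue"])
```

The sequential try-and-undo loop is what makes such problems expensive on conventional hardware; neuromorphic chips instead let many neuron-like units settle toward a consistent state in parallel.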

Its news release on Monday read “Intel’s Pohoiki Beach, a 64-Chip Neuromorphic System, Delivers Breakthrough Results in Research Tests.” Pohoiki Beach is Intel’s latest neuromorphic system.

It appears that the physics of information holds the key to the solution of the Fermi Paradox — indications are that we most likely live in a “Syntellect Chrysalis” (or our “second womb”) instead of a “cosmic jungle.”

Within the next few decades, we’ll transcend our biology by leaving today’s organic Chrysalis behind, by leaving our second womb, by leaving our cradle, to speak in tropes.

This particular version of the “human universe” is what we “see” from within our dimensional cocoon. It is a construct of our minds and by no means represents objective reality “out there”; even our most advanced models, such as M-theory, are approximations at best.

Peptides, one of the fundamental building blocks of life, can be formed from the primitive precursors of amino acids under conditions similar to those expected on the primordial Earth, finds a new UCL study.

The findings, published in Nature, could be a missing piece of the puzzle of how life first formed.

“Peptides, which are chains of amino acids, are an absolutely essential element of all life on Earth. They form the fabric of proteins, which serve as catalysts for biological processes, but they themselves require enzymes to control their formation from amino acids,” explained the study’s lead author, Dr Matthew Powner (UCL Chemistry).