
Both quantum computing and machine learning have been touted as the next big computer revolution for a fair while now.

However, experts have pointed out that these techniques aren’t general-purpose tools: they promise a great leap in computing power only for very specialized algorithms, and even more rarely will the two be able to work on the same problem.

One example of a problem where they might work together is modeling an answer to one of the thorniest questions in physics: How does General Relativity relate to the Standard Model?

“We were really surprised by this result because our motivation was to find an indirect route to improve performance, and we thought trust would be that—with real faces eliciting that more trustworthy feeling,” Nightingale says.

Farid noted that in order to create more controlled experiments, he and Nightingale had worked to make provenance the only substantial difference between the real and fake faces. For every synthetic image, they used a mathematical model to find a similar one, in terms of expression and ethnicity, from databases of real faces. For every synthetic photo of a young Black woman, for example, there was a real counterpart.
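As a purely illustrative sketch of what such a matching step could look like (this is not the authors’ pipeline; the embedding model and dataset sizes below are invented), one could run a nearest-neighbor search over face-feature embeddings:

# Illustrative only: pair each synthetic face with its most similar real face
# by cosine similarity over feature embeddings. Random vectors stand in for
# whatever face-representation model would produce the real embeddings.
import numpy as np

rng = np.random.default_rng(0)
real_embeddings = rng.normal(size=(10_000, 128))   # placeholder real-face features
synth_embeddings = rng.normal(size=(400, 128))     # placeholder synthetic-face features

# Normalize rows so a dot product equals cosine similarity.
real_unit = real_embeddings / np.linalg.norm(real_embeddings, axis=1, keepdims=True)
synth_unit = synth_embeddings / np.linalg.norm(synth_embeddings, axis=1, keepdims=True)

# For every synthetic face, pick the index of the closest real counterpart.
similarity = synth_unit @ real_unit.T              # shape (400, 10_000)
matched_real_index = similarity.argmax(axis=1)

In practice the embeddings would come from a face-analysis model scoring attributes such as expression and ethnicity, so that each matched pair differs mainly in provenance.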

A mini brain with trillions of petaflops in your pants pocket? Sounds Good!

“This is what we’re announcing today,” said Knowles. “A machine that in fact will exceed the parametric capacity of the human brain.”

That next-gen IPU, he said, would realize the vision of 1960s computer scientist Jack Good, a colleague of Alan Turing’s who conceived of an “intelligence explosion.”

The synapses in the human brain, he said, are “very similar to the parameters that are learned by an artificial neural network.” Today’s neural nets have gotten close to a trillion parameters, he noted, “so we clearly have another two or more orders of magnitude to go before we have managed to build an artificial neural network that has similar parametric capacity to a human brain.”

The company said it is working on a computer design, called The Good Computer, which will be capable of handling neural network models that employ 500 trillion parameters, making possible what it terms super-human ultra-intelligence.
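Some back-of-the-envelope arithmetic makes those figures concrete; the ~100 trillion synapse count is the commonly cited rough estimate rather than a number from the announcement, and the fp16 storage assumption is illustrative:

# Rough arithmetic around the quoted parameter counts (illustrative assumptions).
import math

brain_synapses = 100e12        # ~100 trillion synapses: common rough estimate
todays_largest_nets = 1e12     # "close to a trillion" parameters
good_computer_target = 500e12  # 500 trillion parameters

# Gap between today's nets and brain-scale parametric capacity, in orders of magnitude.
gap = math.log10(brain_synapses / todays_largest_nets)
print(f"gap to brain scale: ~{gap:.0f} orders of magnitude")    # ~2

# Memory just to store the Good Computer's weights at 2 bytes per parameter (fp16).
weight_petabytes = good_computer_target * 2 / 1e15
print(f"weight storage at fp16: ~{weight_petabytes:.0f} PB")    # ~1 PB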

It’s easy to see why: as shockingly powerful mini-processors, neurons and their connections—together dubbed the connectome—hold the secret to highly efficient and flexible computation. Nestled inside the brain’s wiring diagrams are the keys to consciousness, memories, and emotion. To connectomics researchers, mapping the brain isn’t just an academic exercise to better understand ourselves—it could lead to more efficient AI that thinks like us.

But often ignored are the brain’s supporting characters: astrocytes—brain cells shaped like stars—and microglia, specialized immune cells. Previously considered “wallflowers,” these cells nurture neurons and fine-tune their connections, ultimately shaping the connectome. Without this long-forgotten half, the brain wouldn’t be the computing wizard we strive to imitate with machines.

In a stunning new brain map published in Cell, these cells are finally having their time in the spotlight. Led by Dr. H. Sebastian Seung at Princeton University, the original prophet of the connectome, the map captures a tiny chunk of the mouse’s visual cortex roughly a thousandth the size of a pea. Yet the map is jam-packed with far more than neurons: in a technical tour de force, the team mapped all brain cells, their connections, blood vessels, and even the compartments inside cells that house DNA and produce energy.
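To give a sense of why such a map is as much a data-engineering object as a biological one, here is a toy sketch of how a connectome fragment might be represented as a graph; the cell names, attributes, and the choice of the networkx library are illustrative assumptions, not the format of the published dataset:

# Toy connectome fragment: cells as nodes (with a type), synapses as directed
# edges. Purely illustrative; not the data model used in the Cell paper.
import networkx as nx

connectome = nx.MultiDiGraph()

# Nodes include the supporting cast, not just neurons.
connectome.add_node("cell_001", cell_type="pyramidal_neuron")
connectome.add_node("cell_002", cell_type="interneuron")
connectome.add_node("cell_003", cell_type="astrocyte")
connectome.add_node("cell_004", cell_type="microglia")

# Synaptic connections, with a rough strength given by synapse count.
connectome.add_edge("cell_001", "cell_002", synapse_count=3)
connectome.add_edge("cell_002", "cell_001", synapse_count=1)

# Non-synaptic relationships (e.g., an astrocyte wrapping a synapse) can be
# stored as differently labeled edges.
connectome.add_edge("cell_003", "cell_001", relation="ensheaths")

print(connectome.number_of_nodes(), "cells,", connectome.number_of_edges(), "connections")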

It should come as little surprise that pioneering work in biological robotics is as controversial as it is exciting. Take, for example, the article published in December 2021 in the Proceedings of the National Academy of Sciences by Sam Kriegman and Douglas Blackiston at Tufts University and colleagues. This article, entitled “Kinematic self-replication in reconfigurable organisms,” is the third installment of the authors’ “xenobots” series.

“There are a lot of partnerships where you’re carpooling to go to the nightclub, and then there are long-term collaborations,” said Saubestre. “We are fiancés for the time being, but we are trying to make this couple work.”

“There’s a lot that can be optimized, and there’s a lot of potential, and that was the reason to go into this partnership with someone who is as keen as we are to make it happen,” said Saubestre. “We’re hoping to retain this competitive advantage that we’ve had over the years and inventing the future machines to run our simulations.”

Saubestre added that some of the vendors TotalEnergies works with focus on selling cloud services, but such services aren’t necessarily ideal for the kinds of work the energy giant needs to get done.

In a paper published on February 23, 2022 in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduces a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. The research project is a significant step toward robots being able to feel their environment as accurately as humans and animals. Like its natural counterpart, the fingertip sensor is very sensitive, robust, and high-resolution.

The thumb-shaped sensor is made of a soft shell built around a lightweight stiff skeleton. This skeleton holds up the structure much like bones stabilize the soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, resulting in an opaque grayish color that prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera, which records colorful images, illuminated by a ring of LEDs.

When any object touches the sensor’s shell, the appearance of the color pattern inside the sensor changes. The camera records images many times per second and feeds a deep neural network with this data. The algorithm detects even the smallest change in light in each pixel. Within a fraction of a second, the trained machine-learning model can map out where exactly the finger is contacting an object, determine how strong the forces are, and indicate the force direction. The model infers what scientists call a force map: It provides a force vector for every point on the three-dimensional fingertip.
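A minimal sketch of what such an image-to-force-map model could look like, assuming a small convolutional network, a discretized fingertip surface of 1,024 points, and invented layer sizes (the team’s actual architecture is not described here and may differ substantially):

# Illustrative sketch (not the MPI-IS model): a small convolutional network that
# maps one camera frame from inside the sensor to a "force map": a 3-component
# force vector for each point on a discretized fingertip surface grid.
import torch
import torch.nn as nn

N_SURFACE_POINTS = 1024  # hypothetical number of points sampled on the fingertip

class ForceMapNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsampling convolutional feature extractor for the camera frame.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One 3-vector (fx, fy, fz) per surface point.
        self.head = nn.Linear(64, N_SURFACE_POINTS * 3)

    def forward(self, image):
        x = self.features(image).flatten(1)
        return self.head(x).view(-1, N_SURFACE_POINTS, 3)

# Usage: one 160x160 RGB frame in, a (1, 1024, 3) force map out.
frame = torch.rand(1, 3, 160, 160)
force_map = ForceMapNet()(frame)
print(force_map.shape)  # torch.Size([1, 1024, 3])

In a real system the network would be trained on frames paired with ground-truth contact forces (for example, from a calibrated probe pressing the shell at known locations), so the same image-to-vector mapping yields location, magnitude, and direction at once.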