
UMass Engineers Create First Artificial Neurons That Could Directly Communicate With Living Cells

A team of engineers at the University of Massachusetts Amherst has announced the creation of an artificial neuron with electrical functions that closely mirror those of biological ones. Building on their previous groundbreaking work using protein nanowires synthesized from electricity-generating bacteria, the team’s discovery means that we could see immensely efficient computers built on biological principles which could interface directly with living cells.

“Our brain processes an enormous amount of data,” says Shuai Fu, a graduate student in electrical and computer engineering at UMass Amherst and lead author of the study published in Nature Communications. “But its power usage is very, very low, especially compared to the amount of electricity it takes to run a Large Language Model, like ChatGPT.”

The human brain is composed of billions of neurons, specialized cells that send and receive electrical impulses throughout the body, and it is over 100 times more electrically efficient than a conventional computer circuit. While it takes only about 20 watts for your brain to, say, write a story, an LLM might consume well over a megawatt of electricity to do the same task.

What if the Universe Remembers Everything? New Theory Rewrites the Rules of Physics

For over a hundred years, physics has rested on two foundational theories. Einstein’s general relativity describes gravity as the curvature of space and time, while quantum mechanics governs the behavior of particles and fields.

Each theory is highly successful within its own domain, yet combining them leads to contradictions, particularly in relation to black holes, dark matter, dark energy, and the origins of the universe.

My colleagues and I have been exploring a new way to bridge that divide. The idea is to treat information – not matter, not energy, not even spacetime itself – as the most fundamental ingredient of reality. We call this framework the quantum memory matrix (QMM).

Minimally invasive implantation of scalable high-density cortical microelectrode arrays for multimodal neural decoding and stimulation

To elicit VEPs, the eyelid corresponding to the stimulated retina was retracted temporarily while periodic 50 ms flashes were generated at 1 Hz from an array of white light-emitting diodes (LEDs). Neural response waveforms were temporally aligned to the stimulus onset. VEPs were calculated as the time-aligned averaged signals over 150 trials.
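The trial-averaging step described above is a standard evoked-potential computation: responses are aligned to stimulus onset and averaged across trials so that stimulus-locked activity survives while uncorrelated noise cancels. A minimal sketch (array shapes and function names are assumptions, not the authors' code):

```python
import numpy as np

def compute_vep(trials: np.ndarray) -> np.ndarray:
    """Average stimulus-aligned responses across trials.

    trials: array of shape (n_trials, n_samples), each row one
    response aligned to stimulus onset. Returns the trial-averaged
    visually evoked potential (VEP).
    """
    return trials.mean(axis=0)

# Example: 150 trials of a 1-second recording sampled at 1 kHz,
# matching the 150-trial averaging described in the text.
rng = np.random.default_rng(0)
trials = rng.normal(size=(150, 1000))
vep = compute_vep(trials)
```

Averaging over 150 trials reduces the standard deviation of uncorrelated noise by a factor of roughly sqrt(150) ≈ 12, which is why the stimulus-locked waveform emerges from single-trial noise.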

Electrical stimulation at the cortical surface was applied at one of the 200 µm electrodes, controlled by the Intan Technologies RHS controller and RHX software. Charge-balanced, biphasic, cathodic-first, 200 µs pulses of 100 µA peak current were delivered at 0.25 Hz. The evoked potentials were recorded over a series of trials. During analysis, for each trial and electrode, the Hjorth ‘activity’ of each trial was computed as the variance of the signal from 200 ms to 2,000 ms post-stimulation, and the average activity was taken over 40 trials.
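The per-trial analysis above reduces to a windowed variance. A sketch of the Hjorth 'activity' computation as described, with the 200–2,000 ms post-stimulation window; the sampling rate and function names are assumptions:

```python
import numpy as np

def hjorth_activity(signal: np.ndarray, fs: float,
                    t_start: float = 0.2, t_end: float = 2.0) -> float:
    """Hjorth 'activity' of one trial: the variance of the signal
    within the [t_start, t_end] seconds post-stimulation window."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return float(np.var(signal[i0:i1]))

def mean_activity(trials: np.ndarray, fs: float) -> float:
    """Average activity over trials (40 in the text) for one electrode."""
    return float(np.mean([hjorth_activity(tr, fs) for tr in trials]))
```

Excluding the first 200 ms after the pulse keeps the stimulation artifact itself out of the variance estimate, so the activity reflects the evoked cortical response rather than the injected current.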

A 1,024-channel array was placed over the sensorimotor cortex on each hemisphere following carefully sized bilateral craniectomies. Two Intan 1,024-channel RHD controllers were used to record from both arrays simultaneously.


Cracking a long-standing weakness in a classic algorithm for programming reconfigurable chips

Researchers from EPFL, AMD, and the University of Novi Sad have uncovered a long-standing inefficiency in the algorithm that programs millions of reconfigurable chips used worldwide, a discovery that could reshape how future generations of these chips are designed and programmed.

Many industries, including telecoms, automotive, and aerospace, rely on a special breed of chip called the Field-Programmable Gate Array (FPGA). Unlike traditional chips, FPGAs can be reconfigured almost endlessly, making them invaluable in fast-moving fields where designing a custom chip would take years and cost a fortune. But this flexibility comes with a catch: FPGA efficiency depends heavily on the software used to program them.

Since the late 1990s, an algorithm known as PathFinder has been the backbone of FPGA routing. Its job: connecting thousands of tiny circuit components without creating overlaps.
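PathFinder resolves overlaps through negotiated congestion: every net is routed by shortest path, shared routing resources accumulate a rising penalty, and nets are repeatedly rerouted until each resource is used by at most one net. A toy sketch of that loop; the graph, cost weights, and names are illustrative assumptions, not the production router:

```python
import heapq
from collections import defaultdict

def dijkstra(graph, src, dst, node_cost):
    """Shortest path by accumulated node cost (a simplification of
    FPGA routing-resource graphs, which cost wires and switches)."""
    dist, prev = {src: node_cost(src)}, {}
    pq = [(dist[src], src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + node_cost(v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def pathfinder(graph, nets, max_iters=10):
    """Negotiated-congestion routing: penalize shared nodes until
    every node is used by at most one net."""
    history = defaultdict(float)  # accumulated congestion penalty
    routes = {}
    for _ in range(max_iters):
        usage = defaultdict(int)
        for net, (src, dst) in nets.items():
            cost = lambda v: 1 + history[v] + 2 * usage[v]
            routes[net] = dijkstra(graph, src, dst, cost)
            for v in routes[net]:
                usage[v] += 1
        overused = [v for v, c in usage.items() if c > 1]
        if not overused:
            return routes  # congestion-free routing found
        for v in overused:
            history[v] += 1.0  # make contested nodes pricier next pass
    return routes

# Two nets compete for bottleneck node 'x'; each also has a longer detour.
graph = {
    "s1": ["x", "a"], "t1": ["x", "b"], "a": ["s1", "b"], "b": ["a", "t1"],
    "s2": ["x", "c"], "t2": ["x", "d"], "c": ["s2", "d"], "d": ["c", "t2"],
    "x": ["s1", "t1", "s2", "t2"],
}
routes = pathfinder(graph, {"n1": ("s1", "t1"), "n2": ("s2", "t2")})
```

In this toy run one net keeps the cheap path through the bottleneck while the other is priced onto its detour, which is the essence of the negotiation: congestion cost, not a fixed routing order, decides who yields.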
