
The ambition is to supercharge human capabilities, treat neurological disorders like ALS or Parkinson’s, and perhaps one day achieve a symbiotic relationship between humans and artificial intelligence.

“The first human received an implant from Neuralink yesterday and is recovering well,” Musk said in a post on X, formerly Twitter.

“Initial results show promising neuron spike detection,” he added.

Amid a massive wave of tech company layoffs in favor of AI, Google is firing thousands of contractors tasked with making its namesake search engine work better.

As Vice reports, news of the company ending its contract with Appen — a data training firm that employs thousands of poorly paid gig workers in developing countries to maintain, among other things, Google’s search algorithm — coincidentally comes a week after a new study found that the quality of its search engine’s results has indeed gotten much worse in recent years.

Back in late 2022, journalist Cory Doctorow coined the term “enshittification” to refer to the demonstrable worsening of all manner of online tools, which he said was by design as tech giants seek to extract more and more money out of their user bases. Google Search was chief among the writer’s examples of the enshittification effect in a Wired article published last January, and as the new study out of Germany found, that effect can be measured.

AI processing can take a huge amount of computing power, but by the looks of this latest joint project from the Jülich Supercomputing Center and French computing provider Eviden, power will not be in short supply.


“But can it run Crysis” is an old gag, but I’m still going to see if I can get away with it.

Feng Guo, an associate professor of intelligent systems engineering at the Indiana University Luddy School of Informatics, Computing and Engineering, is addressing the technical limitations of artificial intelligence computing hardware by developing a new hybrid computing system—which has been…


A team of IU bioengineers is working at the intersection of brain organoids and artificial intelligence, work that could potentially transform the performance and efficiency of advanced AI techniques.

If you read and believe headlines, it seems scientists are very close to being able to merge human brains with AI. In mid-December 2023, a Nature Electronics article triggered a flurry of excitement about progress on that transhuman front:

“‘Biocomputer’ combines lab-grown brain tissue with electronic hardware”

“A system that integrates brain cells into a hybrid machine can recognize voices”

China Telecom claims it has built the country’s first supercomputer constructed entirely with Chinese-made components and technology (via ITHome). Based in Wuhan, the Central Intelligent Computing Center supercomputer is reportedly built for AI and can train large language models (LLM) with trillions of parameters. Although China has built supercomputers with domestic hardware and software before, going entirely domestic is a new milestone for the country’s tech industry.

Exact details on the Central Intelligent Computing Center are scarce. What’s clear so far: The supercomputer is purportedly made with only Chinese parts; it can train AI models with trillions of parameters; and it uses liquid cooling. It’s unclear exactly how powerful the supercomputer is. A five-exaflop figure is mentioned in ITHome’s report, but to our eyes it seems that the publication was talking about the total computational power of China Telecom’s supercomputers, and not just this one.

“It performs very well. Depending on where you’re looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one,” Pintar said.

However, the system isn’t without flaws. The data it is fed does not account for the potential effects of rising temperatures, and the simulated storms produced for areas with less data were not as plausible.

“Hurricanes are not as frequent in, say, Boston as in Miami, for example. The less data you have, the larger the uncertainty of your predictions,” NIST Fellow Emil Simiu said.
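Simiu’s point is the basic statistics of small samples. As a rough illustration (entirely synthetic numbers, not NIST’s method), the following Python sketch shows how the width of a 95% interval around an estimated mean shrinks roughly as 1/sqrt(n), so a short storm record for a city like Boston leaves much wider error bars than a long record for Miami:

```python
# Synthetic illustration only: how sampling uncertainty shrinks with
# record length. Numbers are made up and do not model hurricanes.
import numpy as np

rng = np.random.default_rng(0)

def interval_width(n, trials=2000):
    """Width of a 95% interval for the mean of n samples (simulated)."""
    means = rng.normal(loc=50.0, scale=10.0, size=(trials, n)).mean(axis=1)
    lo, hi = np.percentile(means, [2.5, 97.5])
    return hi - lo

for n in (10, 100, 1000):  # short vs. long storm records
    print(f"n={n:4d}  95% interval width = {interval_width(n):.2f}")
# The width falls roughly as 1/sqrt(n): less data, larger uncertainty.
```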

Although a significant number of neuromorphic devices applied to RC have been reported in recent years, the majority of these efforts have focused on shallow-RC with monotonic reservoir state spaces [19]. This can be attributed to the heavy reliance on monotonic carrier dynamics when reported neuromorphic devices are used as reservoirs to map sequence signals, which gives rise to several noteworthy issues when RC performs different spatiotemporal tasks. One major issue is that the narrow range ratio of spatial characteristics makes it difficult to extract the diverse spatial features of a sequence signal, which greatly limits the richness of the reservoir state space. As a result, when mapping complex sequence signals, the reservoir states tend to overlap, making it difficult to effectively separate the spatial characteristics within complex information and consequently reducing recognition accuracy. Another issue is the limited range ratio of temporal characteristics, which hinders efficient extraction of temporal features from sequential signals with diverse time scales. For example, when performing dynamic trajectory prediction across abundant time scales, a limited range ratio of temporal characteristics cannot adapt to signals with different temporal features, which severely limits prediction correlation. Although researchers have achieved multi-scale temporal characteristics by increasing the number of signal modes in the input layer of shallow-RC networks [20], as shown in Supplementary Information Fig. S1, the limitation of shallow-RC on spatial characteristics remains unresolved. Furthermore, enlarging the input layer also requires more encoding design for the sequence signals and more physical devices to receive the different modes of physical signals. This significantly increases the signal error rate and the pre-processing cost of the input signals, which is detrimental to the robustness of RC. Therefore, developing new neuromorphic reservoir devices, along with new RC networks, that simultaneously provide large-scale spatial and temporal characteristics is highly desirable, as it is crucial for achieving high-performance recognition and prediction in complex spatiotemporal tasks.
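For context, the conventional shallow-RC setup the authors contrast against can be sketched in a few lines of Python. In the classic echo-state-network form of RC, a fixed, randomly connected reservoir maps an input sequence into a high-dimensional state space, and only a linear readout is trained. Everything below (dimensions, leak rate, the toy next-step sine task) is an illustrative assumption on our part, not taken from the paper:

```python
# Minimal echo-state-network sketch of reservoir computing (RC):
# a fixed random reservoir maps an input sequence into a
# high-dimensional state space; only the linear readout is trained.
# All names and parameter values are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 1, 100            # input and reservoir dimensions
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0, 1, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u_seq, leak=0.3):
    """Map an input sequence to a sequence of reservoir states."""
    x = np.zeros(N_RES)
    states = []
    for u in u_seq:
        pre = W_in @ np.atleast_1d(u) + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)   # leaky integration
        states.append(x.copy())
    return np.array(states)

# Train only the readout (ridge regression) on a toy next-step task.
t = np.linspace(0, 40, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])
y = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)

print("train correlation:", np.corrcoef(X @ W_out, y)[0, 1])
```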

Interestingly, primates in nature can quickly and accurately recognize complex object information, such as faces, with the help of advanced synaptic dynamics mechanisms. Brain science research on primates has confirmed [20–22] that primates use a distributed memory characteristic to process complex information. When the nervous system processes a task, each neuron and neural circuit handles only part of the information and generates only part of the output. For example, as shown in Fig. 1a, when a primate observes an unfamiliar face, neurons in the temporal pole (TP) region (blue) respond to familiar eye features, forming a TP feature memory, while neurons in the anterior-medial (AM) region respond to unfamiliar lip features, forming an AM feature memory [23]. In this way, all outputs are integrated by the cerebral cortex to form the final result, significantly improving the computational efficiency and accuracy of complex information processing. The physiological significance of distributed memory in primates inspires the design of physical node devices with distributed reservoir states in the reservoir layer of an RC system, intended to facilitate the distributed mapping of spatiotemporal signals. However, to date, no such devices have been demonstrated.
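To make the distributed idea concrete, here is a hypothetical grouped-RC sketch in the same NumPy style: several small reservoirs each map one slice of the input (loosely analogous to the TP and AM regions each encoding one facial feature), and a single linear readout integrates their concatenated states (loosely analogous to cortical integration). The structure and parameters are our illustrative assumptions, not the authors' implementation:

```python
# Hypothetical grouped-RC sketch: each small reservoir sees only part
# of the input; one readout integrates all groups' states.
# Illustrative assumptions only, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)

def make_reservoir(n_in, n_res, radius=0.9):
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run(W_in, W, u_seq, leak=0.3):
    x = np.zeros(W.shape[0])
    out = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        out.append(x.copy())
    return np.array(out)

# Split a 4-channel input sequence between two reservoir groups.
T, n_res = 500, 50
u = rng.normal(size=(T, 4))
groups = [make_reservoir(2, n_res) for _ in range(2)]
states = np.hstack([run(W_in, W, u[:, 2 * i:2 * i + 2])
                    for i, (W_in, W) in enumerate(groups)])

# Toy target combines channels from both groups, so the single
# readout must integrate the distributed states to fit it.
y = u[:, 0] + u[:, 3]
ridge = 1e-4
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                        states.T @ y)
print("fit correlation:", np.corrcoef(states @ W_out, y)[0, 1])
```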

In this work, inspired by the distributed memory characteristic of primates, an ultra-short-channel organic neuromorphic vertical field-effect transistor with distributed reservoir states is proposed and used to implement grouped-RC networks. By coupling multiple physical mechanisms into a single device, the dynamic states of the carriers are greatly enriched. Used as a reservoir node, the device maps sequential signals into a distributed reservoir state space through various carrier dynamics rather than through monotonic carrier dynamics. Additionally, a vertical architecture with an ultra-short, nanometer-scale transport distance is adopted to eliminate the driving force required for exciton dissociation, thereby improving the feedback strength of the device and reducing the overlap between different reservoir state spaces, at the cost of only negligible additional power. Consequently, the device serves as a reservoir capable of mapping sequential signals into a distributed reservoir state space with 1,152 reservoir states, and the range ratios of its temporal characteristics (key parameters for prediction) and spatial characteristics (key parameters for recognition) simultaneously reach 2,640 and 650, respectively, which are superior to previously reported neuromorphic devices. Therefore, the grouped-RC network implemented with the device can simultaneously meet the requirements of two different types of spatiotemporal tasks (broad-spectrum image recognition and dynamic trajectory prediction), exhibiting over 94% recognition accuracy and over 95% prediction correlation, respectively. This work proposes a strategy for developing neural hardware for complex reservoir computing networks and holds great potential for the development of a new generation of artificial neuromorphic hardware and brain-like computing.
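A note on the “range ratio” figures: one plausible reading is the ratio between the largest and smallest values a device characteristic takes across its reservoir states. The toy arithmetic below uses made-up time constants, chosen only so the ratio reproduces the reported 2,640, to show how such a figure would be computed:

```python
# Hypothetical arithmetic only: "range ratio" read as max/min of a
# characteristic across reservoir states. Time constants are invented.
import numpy as np

tau = np.array([0.5e-3, 2e-3, 40e-3, 1.32])  # fabricated time constants (s)
print(tau.max() / tau.min())                  # -> 2640.0
```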