
The first wireless commands sent from a human brain to a computer have been demonstrated, in a breakthrough for people with paralysis.

The system is able to transmit brain signals at “single-neuron resolution and in full broadband fidelity”, say researchers at Brown University in the US.

A clinical trial of the BrainGate technology involved a small transmitter that connects to the motor cortex of a person’s brain.
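To see why streaming at “full broadband fidelity” is demanding, here is a minimal back-of-envelope sketch of the required wireless data rate. The channel count, sampling rate, and bit depth are illustrative assumptions, not the published specifications of the trial hardware.

```python
# Back-of-envelope estimate of the wireless bandwidth needed to stream
# broadband neural data. All parameters are illustrative assumptions.
channels = 96            # assumed electrode count on the cortical array
sample_rate_hz = 20_000  # assumed broadband sampling rate per channel
bits_per_sample = 16     # assumed ADC resolution

bits_per_second = channels * sample_rate_hz * bits_per_sample
print(f"Raw data rate: {bits_per_second / 1e6:.1f} Mbit/s")
# -> Raw data rate: 30.7 Mbit/s, far beyond low-power links such as
#    Bluetooth LE, which is why a dedicated transmitter is needed.
```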

Using measurements of resistance versus applied gate voltage at temperatures of 390 mK, the researchers showed that superconductivity in the improved NbN layer could survive applied magnetic fields as high as 17.8 T. Meanwhile, the improved GaN semiconductor was of high enough quality to exhibit the quantum Hall effect at lower applied magnetic fields of 15 T. “Both these improvements mean the quantum Hall effect and superconductivity can occur at the same time in the heterostructure over a certain ‘window’ of temperatures and magnetic fields (that is, below 1 K and between 15 and 17.8 T),” study lead author Phillip Dang tells Physics World.
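As a concrete reading of that window, the short sketch below simply checks whether a given operating point falls inside the quoted bounds. The numbers come straight from the figures above; the helper function itself is only an illustration, not anything from the paper.

```python
# Sketch: does an operating point fall inside the reported coexistence
# window for superconductivity and the quantum Hall effect in GaN/NbN?
# Bounds are the figures quoted above; the helper is illustrative only.

def in_coexistence_window(temperature_k: float, field_t: float) -> bool:
    """True if (T, B) lies below 1 K and between 15 and 17.8 tesla."""
    return temperature_k < 1.0 and 15.0 <= field_t <= 17.8

print(in_coexistence_window(0.39, 16.0))  # True: 390 mK, 16 T
print(in_coexistence_window(0.39, 18.5))  # False: field too high for NbN
```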

According to the team, the new GaN/NbN heterostructure could be used in quantum computing and low-temperature electronics. Reporting their work in Science Advances, the researchers say they now plan to further investigate the interaction between superconductivity and the quantum Hall effect in this material.

Intel Corp Chief Executive Officer Pat Gelsinger will virtually attend a meeting being put together by President Joe Biden’s administration for April 12 to discuss the semiconductor supply chain issues disrupting U.S. automotive factories, according to a person familiar with the matter. Reuters previously reported the meeting will include Biden’s national security adviser, Jake Sullivan, and a top economic aide, Brian Deese, as well as chipmakers and automakers. Gelsinger last month said Intel will spend $20 billion to build two new chip factories in Arizona.

Exactly one week after new Chief Executive Pat Gelsinger unveiled plans to reinvent Intel Corp., Arm Ltd. announced version 9 of its architecture and put forth its vision for the next decade. We believe Arm’s direction is strong and compelling as it combines an end-to-end capability, from edge to cloud to the data center to the home and everything in between.

Moreover, it doubles down on Arm’s model of enabling ecosystem partners to add significant value while maintaining software compatibility with previous generations. We see this as extremely important because the variety of use cases requiring specialized silicon is expanding rapidly in the marketplace, and the Arm architecture is, in our view, by far the best-positioned to capitalize on this coming wave.

In this Breaking Analysis, we’ll explain why we think this announcement is so important and what it means for Intel and the broader technology landscape. We’ll also share with you some feedback we received from theCUBE community on last week’s episode and a little inside baseball on how Intel, IBM Corp., Samsung Electronics Co. Ltd., Taiwan Semiconductor Manufacturing Co. Ltd. and the U.S. government might be thinking about the shifting landscape of semiconductor technology.

The European Organization for Nuclear Research (CERN) involves 23 countries, 15,000 researchers, billions of dollars a year, and the biggest machine in the world: the Large Hadron Collider. Even with so much organizational and mechanical firepower behind it, though, CERN and the LHC are outgrowing their current computing infrastructure, demanding big shifts in how the world’s biggest physics experiment collects, stores and analyzes its data. At the 2021 EuroHPC Summit Week, Maria Girone, CTO of CERN openlab, discussed how those shifts will be made.

The answer, of course: HPC.

The Large Hadron Collider – a massive particle accelerator – is capable of collecting data 40 million times per second from each of its 150 million sensors, adding up to a total possible data load of around a petabyte per second. This data describes whether a detector was hit by a particle, and if so, what kind and when.
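Those two figures multiply out to roughly the quoted petabyte per second. As a sanity check, here is the arithmetic, assuming (purely for illustration) that each sensor contributes a single hit/no-hit bit per reading.

```python
# Back-of-envelope check of the LHC's raw data rate. The bits-per-reading
# figure is an assumption for illustration, not a detector specification.
sensors = 150e6         # sensor channels, from the article
readings_per_s = 40e6   # readings per second per sensor, from the article
bits_per_reading = 1    # assumed: one hit/no-hit bit per sensor

bytes_per_s = sensors * readings_per_s * bits_per_reading / 8
print(f"~{bytes_per_s / 1e15:.2f} PB/s")  # ~0.75 PB/s, i.e. roughly a
# petabyte per second before any triggering or filtering is applied
```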

A new, detailed model of the surface of the SARS-CoV-2 spike protein reveals previously unknown vulnerabilities that could inform development of vaccines. Mateusz Sikora of the Max Planck Institute of Biophysics in Frankfurt, Germany, and colleagues present these findings in the open-access journal PLOS Computational Biology.

SARS-CoV-2 is the virus responsible for the COVID-19 pandemic. A key feature of SARS-CoV-2 is its spike protein, which extends from its surface and enables it to target and infect human cells. Extensive research has resulted in detailed static models of the spike protein, but these models do not capture the flexibility of the spike protein itself, nor the movements of the protective glycans—chains of sugar molecules—that coat it.

To support vaccine development, Sikora and colleagues aimed to identify novel potential target sites on the surface of the spike protein. To do so, they developed molecular dynamics simulations that capture the complete structure of the spike protein and its motions in a realistic environment.
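One way such simulations can expose vulnerabilities is by measuring, frame by frame, how often each candidate site on the spike surface is left uncovered by the flexing glycans. The sketch below is a minimal, hypothetical version of that idea with invented data; it is not the authors’ actual analysis pipeline.

```python
# Minimal, hypothetical sketch of a glycan-shielding analysis: for each
# candidate epitope, count the fraction of simulation frames in which it
# is NOT covered by a glycan. All data are invented for illustration.
coverage = {
    # site -> per-frame flags (True = shielded by a glycan in that frame)
    "site_A": [True, True, False, True, False],
    "site_B": [False, False, False, True, False],
    "site_C": [True, True, True, True, True],
}

accessibility = {
    site: flags.count(False) / len(flags) for site, flags in coverage.items()
}

# Sites exposed most often would be the most promising vaccine targets.
for site, frac in sorted(accessibility.items(), key=lambda kv: -kv[1]):
    print(f"{site}: accessible in {frac:.0%} of frames")
```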

In 2020, TSMC spent a record $18 billion on building new factories for its chips. TSMC has just announced that it will spend $100 billion on new factories over the next three years. This will radically change the chip landscape. Many other companies, including Samsung and Intel, are upping their spending as well.

Of course, at some point there will be another chip glut, but this greatly increased capacity will change the world we live in. It will also bring AGI (Artificial General Intelligence) that much closer to reality… (All this money gives companies an incentive to spend R&D dollars on smaller transistors and other advances.)

The Big Bang remains the best way to explain what happened at the beginning of the Universe. However, the incredible energies flowing during the early part of the bang are almost incomprehensible in terms of our everyday experience. Luckily, computers aren’t so attached to normal human ways of thinking and have long been used to model the early universe right after the Bang. Now, a team from the University of Göttingen has created the most comprehensive model yet of what exactly happened in that very early stage of the universe – one trillionth of a second after the Big Bang.

Just because a computer can model it doesn’t mean it is easy to explain, however. The model includes clumps of energy weighing mere grams, but which are one millionth the size of a single proton. These energy structures defined what would eventually become the structure of the universe today, with tiny variations in the original structure resulting in entire galaxies or complete voids, depending on the presence or absence of matter.
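To get a sense of how extreme those clumps are, the sketch below works out the density implied by a gram-scale mass packed into a region one millionth the size of a proton. The one-gram mass and the proton-radius figure are assumptions chosen purely for illustration.

```python
import math

# Rough density of a gram-scale energy clump one millionth the size of
# a proton. Mass and proton radius are illustrative assumptions.
mass_kg = 1e-3               # assumed clump mass: one gram
proton_radius_m = 0.84e-15   # approximate proton charge radius
clump_radius_m = proton_radius_m * 1e-6

volume_m3 = (4 / 3) * math.pi * clump_radius_m**3
density = mass_kg / volume_m3
print(f"~{density:.1e} kg/m^3")  # ~4e59 kg/m^3, vastly denser than an
# atomic nucleus (~2e17 kg/m^3)
```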

Throwing this much computing power at a physical space one millionth the size of a proton was no mean feat. “It is probably the largest simulation of the smallest area of the Universe that has been carried out thus far,” says Professor Jens Niemeyer, who leads the group carrying out the research.