Page 30

Apr 20, 2024

Intel’s Hala Point, the world’s largest neuromorphic computer, has 1.15 billion neurons

Posted by in categories: computing, physics

Three years after introducing its second-generation “neuromorphic” computer chip, Intel on Wednesday announced that it has assembled 1,152 of the parts into a single parallel-processing system called Hala Point, in partnership with the US Department of Energy’s Sandia National Laboratories.

The Hala Point system’s 1,152 Loihi 2 chips enable a total of 1.15 billion artificial neurons, Intel said, “and 128 billion synapses distributed over 140,544 neuromorphic processing cores.” That is an increase over Intel’s previous multi-chip Loihi system, Pohoiki Springs, which debuted in 2020 with just 768 Loihi 1 chips.
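Those headline totals can be sanity-checked with quick arithmetic. The per-chip figures below are simply derived from Intel’s stated system totals, not taken from a published per-chip specification:

```python
# Per-chip figures implied by Intel's stated Hala Point totals.
chips = 1152
neurons_total = 1.15e9
synapses_total = 128e9
cores_total = 140_544

neurons_per_chip = neurons_total / chips    # ~1 million neurons per Loihi 2 chip
synapses_per_chip = synapses_total / chips  # ~111 million synapses per chip
cores_per_chip = cores_total / chips        # 122 neuromorphic cores per chip

print(f"{neurons_per_chip:,.0f} neurons/chip, "
      f"{synapses_per_chip:,.0f} synapses/chip, "
      f"{cores_per_chip:.0f} cores/chip")
```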

Sandia Labs intends to use the system for what it calls “brain-scale computing research,” to solve problems in areas of device physics, computer architecture, computer science, and informatics.

Apr 20, 2024

Making AI more energy efficient with neuromorphic computing

Posted by in categories: biological, information science, mobile phones, robotics/AI

CWI senior researcher Sander Bohté started working on neuromorphic computing as early as 1998, as a PhD student, when the subject was barely on the map. In recent years, Bohté and his CWI colleagues have achieved a number of algorithmic breakthroughs in spiking neural networks (SNNs) that finally make neuromorphic computing practical: in theory, many AI applications can become a factor of a hundred to a thousand more energy efficient. This means much more AI can be put into chips, allowing applications to run on a smartwatch or a smartphone. Examples are speech recognition, gesture recognition and the classification of electrocardiograms (ECGs).

“I am really grateful that CWI, and former group leader Han La Poutré in particular, gave me the opportunity to follow my interest, even though at the end of the 1990s neural networks and neuromorphic computing were quite unpopular”, says Bohté. “It was high-risk work for the long haul that is now bearing fruit.”

Spiking neural networks (SNNs) more closely resemble the biology of the brain: they process discrete pulses instead of the continuous signals of classical neural networks. Unfortunately, that also makes them mathematically much harder to handle. For many years, the number of neurons SNNs could support was therefore very limited. But thanks to clever algorithmic solutions, Bohté and his colleagues have scaled up the number of trainable spiking neurons, first to thousands in 2021 and then to tens of millions in 2023.
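The pulse-based processing the article describes can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is a sketch of generic spiking dynamics only; it is not Bohté’s training algorithm, whose details the article does not give:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of
# spiking dynamics, not the CWI training algorithm.
def simulate_lif(inputs, tau=0.9, threshold=1.0):
    """Integrate a sequence of input currents; emit a spike (1) when the
    membrane potential crosses the threshold, then reset it to zero."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = tau * v + current    # leaky integration: old charge decays by tau
        if v >= threshold:
            spikes.append(1)     # fire...
            v = 0.0              # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires periodically as charge accumulates.
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because such a neuron produces output only when its potential crosses threshold, downstream computation happens on sparse events rather than on every time step, which is the source of the energy savings the article describes.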

Apr 20, 2024

Could JWST Solve One of Cosmology’s Greatest Mysteries?

Posted by in category: cosmology

The telescope’s studies could help end a long-standing disagreement over the rate of cosmic expansion. But scientists say more measurements are needed.

By Davide Castelvecchi & Nature magazine.

Apr 20, 2024

NASA Veteran’s Propellantless Propulsion Drive That Physics Says Shouldn’t Work Just Produced Enough Thrust to Overcome Earth’s Gravity

Posted by in category: physics

A veteran NASA scientist says his company has tested a propellantless propulsion drive technology that produced one Earth gravity of thrust.

Apr 19, 2024

Bostrom’s Deep Utopia

Posted by in categories: futurism, robotics/AI

Robin Hanson comments on Nick Bostrom’s new tome: “… has a great cover with a number of interesting questions and a subtitle that hints that it might address the meaning of life in a future where AI and robots can do everything. But alas, after much build-up and anticipation, he leaves that question unanswered, with an abrupt ‘oops, out of time’ on page 427. … He tries to address meaty topics like: what keeps life interesting? What is our purpose and meaning when the struggle is gone? Can fulfillment get full? But in each case the pedagogy is more a survey of all possible answers than the much more difficult task of making specific predictions.” (More)

Apr 19, 2024

Selective language modeling: New method allows for better models with less data

Posted by in categories: mathematics, transportation

Researchers introduce a new method called “Selective Language Modeling” that trains language models more efficiently by focusing on the most relevant tokens.

The method leads to significant performance improvements on mathematical tasks, according to a new paper from researchers at Microsoft, Xiamen University, and Tsinghua University. Instead of weighting all tokens in a text corpus equally during training, as standard language modeling does, Selective Language Modeling (SLM) focuses specifically on the most relevant tokens.
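The selection step can be sketched roughly as follows. In this hypothetical helper (an illustration, not the authors’ code), a reference model’s per-token losses are used to rank tokens by “excess loss,” and the training loss is averaged only over the highest-ranked fraction; the `keep_ratio` knob is an assumption of this sketch:

```python
def selective_lm_loss(token_losses, ref_losses, keep_ratio=0.6):
    """Illustrative token selection in the spirit of Selective Language
    Modeling: given per-token cross-entropy losses from the training model
    and from a reference model, keep only the tokens where the training
    model lags the reference most ("excess loss"), and average the training
    loss over those. Returns (mean selected loss, selected indices)."""
    excess = [t - r for t, r in zip(token_losses, ref_losses)]
    k = max(1, int(keep_ratio * len(excess)))
    # Indices of the k tokens with the largest excess loss.
    selected = sorted(range(len(excess)), key=lambda i: excess[i], reverse=True)[:k]
    return sum(token_losses[i] for i in selected) / k, selected

# Tokens 0 and 2 lag the reference most, so only they contribute to the loss.
loss, idx = selective_lm_loss([2.0, 0.5, 3.0, 0.4],
                              [1.0, 0.6, 1.0, 0.5],
                              keep_ratio=0.5)
# loss == 2.5, idx == [2, 0]
```

The tokens the model already predicts as well as the reference contribute nothing, so gradient computation is spent only on the tokens deemed most informative.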

Continue reading “Selective language modeling: New method allows for better models with less data” »

Apr 19, 2024

NASA’s Juno probe captures amazing views of Jupiter’s volcanic moon Io (video)

Posted by in category: space

Io is simply littered with volcanoes, and we caught a few of them in action.

Apr 19, 2024

Intel Announces DDR5-8800 MT/s for Upcoming ‘Granite Rapids’ Server Chips

Posted by in category: computing

The move by Intel is a substantial increase in memory speeds from prior generations.

Apr 19, 2024

Machine at Intel’s Hillsboro campus can produce chips so advanced, they don’t yet exist

Posted by in categories: computing, innovation

Engineers and developers at Intel are always working to push the boundaries of what’s possible, leaning on Moore’s Law — the idea that the number of transistors on a single chip will double every two years with a minimal increase in cost.

But over the last five years, Intel has had its ups and downs, as reflected in its wavering stock price, which went from a high of $68 per share to trading more recently at $36.

Continue reading “Machine at Intel’s Hillsboro campus can produce chips so advanced, they don’t yet exist” »

Apr 19, 2024

22,500 Palo Alto firewalls “possibly vulnerable” to ongoing attacks

Posted by in category: security

Approximately 22,500 exposed Palo Alto GlobalProtect firewall devices are likely vulnerable to CVE-2024-3400, a critical command injection flaw that has been actively exploited in attacks since at least March 26, 2024.

CVE-2024-3400 is a critical vulnerability in the GlobalProtect feature of specific Palo Alto Networks PAN-OS versions. It allows unauthenticated attackers to execute commands with root privileges via command injection triggered by arbitrary file creation.

The flaw was disclosed by Palo Alto Networks on April 12, with the security advisory urging system administrators to apply provided mitigations immediately until a patch was made available.
