The world’s biggest AI chip just doubled its specs—without adding an inch.

The Cerebras Systems Wafer Scale Engine is about the size of a large dinner plate. All that surface area makes room for more of everything, from processing cores to memory. The first WSE chip, released in 2019, packed an incredible 1.2 trillion transistors and 400,000 processing cores. Its successor doubles all of that, except its physical size.

The WSE-2 crams 2.6 trillion transistors and 850,000 cores onto the same dinner-plate footprint. Its on-chip memory has grown from 18 gigabytes to 40 gigabytes, and its memory bandwidth has climbed from 9 petabytes per second to 20 petabytes per second.

DDR5 memory production is finally picking up speed, as several manufacturers have finalized their mainstream designs for the next-generation standard. DDR5 will be used by Intel's upcoming Alder Lake and AMD's Raphael platforms, both expected to launch later this year.

Jiahe Jinwei has announced that it has received the first batch of DDR5 memory modules from its assembly line at the Shenzhen Pingshan factory. The modules are now being mass-produced and are expected to launch later this year alongside the next-generation platforms from Intel and AMD. Intel is expected to be first to offer next-gen memory support, on its Alder Lake platform with Z690 chipset-based motherboards.

Lockheed Martin Space hired 2,700 people plus 700 interns in 2020, a year unlike any other for human resources managers. Almost overnight, the prime contractor with about 23,000 employees switched from its traditional in-person approach to virtual recruitment, interviewing and training.

SpaceNews correspondent Debra Werner spoke with Lockheed Martin Space executives Nick Spain, human resources vice president, Renu Aggarwal, talent acquisition director, and Heather Erickson, organizational development director, about the opportunities and challenges posed by heightened demand for talent amid a pandemic.

To enable the efficient operation of unmanned aerial vehicles (UAVs) in instances where the Global Positioning System (GPS) or an external positioning device (e.g., a laser reflector) is unavailable, researchers must develop techniques that automatically estimate a robot's pose. If the environment in which a drone operates does not change very often and one is able to build a 3D map of this environment, map-based robot localization techniques can be fairly effective.

Ideally, map-based pose estimation approaches should be efficient, robust and reliable, as they should rapidly send a robot the information it needs to plan its future actions and movements. 3D light detection and ranging (LIDAR) systems are particularly promising map-based localization systems, as they gather a rich pool of 3D information, which drones can then use for localization.

Researchers at Universidad Pablo de Olavide in Spain have recently developed a new framework for map-based localization called direct LIDAR localization (DLL). This approach, presented in a paper pre-published on arXiv, could overcome some of the limitations of other LIDAR localization techniques introduced in the past.
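As a rough illustration of the idea behind map-based LIDAR localization (not the DLL algorithm itself, which the paper describes as a direct, map-based approach), the following sketch scores candidate poses by how closely the transformed scan points land on a prior 3D map. The toy map, scan, and function names are hypothetical, and a brute-force grid search stands in for real pose optimization:

```python
import math

# Hypothetical sketch of map-based localization: given a prior 3D
# point-cloud map and a LIDAR scan, evaluate candidate poses
# (x, y, yaw) by how well the transformed scan aligns with the map.
# Real systems optimize the pose (e.g., over a distance field of the
# map) rather than exhaustively scoring candidates as done here.

def transform(points, x, y, yaw):
    # Apply a 2D rigid transform (rotation about z plus translation).
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * px - s * py + x, s * px + c * py + y, pz)
            for px, py, pz in points]

def nearest_dist(p, map_points):
    # Brute-force nearest-neighbor distance from p to the map.
    return min(math.dist(p, m) for m in map_points)

def score(scan, map_points, pose):
    # Lower score = scan points sit closer to the mapped surfaces.
    return sum(nearest_dist(p, map_points)
               for p in transform(scan, *pose))

def localize(scan, map_points, candidates):
    # Pick the candidate pose that best aligns scan and map.
    return min(candidates, key=lambda pose: score(scan, map_points, pose))

# Toy map: two parallel "walls" of points at z = 0.
map_points = [(float(i), 0.0, 0.0) for i in range(5)] + \
             [(float(i), 4.0, 0.0) for i in range(5)]
# Scan captured from true pose (1.0, 0.0, 0.0): same walls, shifted.
scan = [(px - 1.0, py, pz) for px, py, pz in map_points]

candidates = [(x * 0.5, 0.0, 0.0) for x in range(5)]
print(localize(scan, map_points, candidates))  # → (1.0, 0.0, 0.0)
```

Production systems replace the brute-force nearest-neighbor search with KD-trees or precomputed distance fields, which is precisely where approaches like DLL gain their speed.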

Despite virtual reality (VR) technology being more affordable than ever, developers have yet to achieve a sense of full immersion in a digital world. Among the greatest challenges is making the user feel as if they are walking.

Now, researchers from the Toyohashi University of Technology and The University of Tokyo in Japan have published a paper to the journal Frontiers in Virtual Reality describing a custom-built platform that aims to replicate the sensation of walking in VR, all while sitting motionlessly in a chair.

“Walking is a fundamental and fun activity for humans in everyday life. Therefore, it is very worthwhile to provide a high-quality walking experience in a VR space,” says Yusuke Matsuda.

A breakthrough astrophysics code, named Octo-Tiger, simulates the evolution of self-gravitating and rotating systems of arbitrary geometry using adaptive mesh refinement and a new method to parallelize the code to achieve superior speeds.

This new code to model stellar collisions is more expeditious than the code previously used for such simulations. The research came from a unique collaboration between experimental computer scientists and astrophysicists in the Louisiana State University Department of Physics & Astronomy, the LSU Center for Computation & Technology, Indiana University Kokomo and Macquarie University, Australia, culminating in over a year of benchmark testing and scientific simulations, supported by multiple NSF grants, including one specifically designed to break the barrier between computer science and astrophysics.
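Adaptive mesh refinement, one of the techniques Octo-Tiger relies on, concentrates grid cells where the solution changes sharply. The sketch below is a minimal 1D illustration of that idea, not Octo-Tiger's far more involved 3D octree implementation; the step function and thresholds are made up for the example:

```python
# Minimal 1D adaptive-mesh-refinement sketch (illustrative only).
# Cells straddling a steep feature are recursively split until the
# jump across each cell falls below a tolerance or a depth limit
# is reached, so resolution concentrates where it is needed.

def f(x):
    # Sharp step at x = 0.5, standing in for a fluid discontinuity.
    return 0.0 if x < 0.5 else 1.0

def refine(left, right, depth=0, max_depth=6, tol=0.5):
    # Keep the cell if it is smooth enough or already at max depth.
    if depth >= max_depth or abs(f(right) - f(left)) <= tol:
        return [(left, right)]
    mid = (left + right) / 2
    return (refine(left, mid, depth + 1, max_depth, tol) +
            refine(mid, right, depth + 1, max_depth, tol))

cells = refine(0.0, 1.0)
widths = [r - l for l, r in cells]
# Few large cells in smooth regions, tiny cells near the step.
print(len(cells), min(widths), max(widths))
```

The payoff is the same in 3D: fine resolution only near shocks and stellar surfaces, coarse cells elsewhere, which is what makes long merger simulations tractable.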

“Thanks to a significant effort across this collaboration, we now have a reliable computational framework to simulate stellar mergers,” said Patrick Motl, professor of physics at Indiana University Kokomo. “By substantially reducing the time to complete a simulation, we can begin to ask new questions that could not be addressed when a single-merger simulation was precious and very time consuming. We can explore more parameter space, examine a simulation at very high spatial resolution or for longer times after a merger, and we can extend the simulations to include more complete physical models by incorporating radiative transfer, for example.”