Archive for the ‘supercomputing’ category: Page 50

Apr 5, 2016

Here’s how Nvidia is powering an autonomous, electric race car

Posted in categories: robotics/AI, supercomputing, transportation

Could race car driving careers go entirely to AI? Nvidia is testing the concept.

Formula E is gaining a completely autonomous companion competition with the all-new Roborace series, slated for the upcoming race season. At its GTC developer conference, Nvidia announced that these autonomous, electric race cars will be powered by Nvidia Drive PX 2, a supercomputer built for self-driving cars.

Drive PX 2 is powered by 12 CPU cores and four Pascal GPUs that together provide eight teraflops of compute power. The supercomputer-in-a-box is vital to deep learning, training artificial intelligence to adapt to different driving conditions, including asphalt, rain and dirt.

Continue reading “Here’s how Nvidia is powering an autonomous, electric race car” »

Apr 5, 2016

NVIDIA Reinvents The GPU For Artificial Intelligence (AI)

Posted in categories: mobile phones, robotics/AI, supercomputing, transportation

At a time when PCs have become rather boring and the market has stagnated, the Graphics Processing Unit (GPU) has become more interesting, not for what it has traditionally done (rendering graphics), but for what it can do going forward. GPUs remain a key enabler for the PC and workstation market, both for enthusiasts seeking higher graphics performance in games and for developers and designers looking to create realistic new videos and images. However, the traditional PC market has been in decline for several years as consumers shift to mobile computing solutions like smartphones. At the same time, the industry has been working to expand the use of GPUs as computing accelerators because of their massive parallel compute capabilities, which often provide the horsepower for top supercomputers. NVIDIA has been a pioneer in this GPU compute market with its CUDA platform, enabling researchers to perform leading-edge work and to keep developing new uses for GPU acceleration.
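
The massive data parallelism described above can be illustrated even without a GPU: the same independent operation is applied across every element of a large array. The sketch below is a hypothetical CPU stand-in (not NVIDIA code) that distributes a classic SAXPY kernel across a thread pool, the way a CUDA kernel would distribute it across thousands of GPU threads.

```python
# CPU stand-in for the data-parallel style GPUs excel at: one independent
# operation applied across every element of a large array. A CUDA kernel
# would assign one GPU thread per element; here a thread pool splits the
# array into chunks (a sketch of the structure of the idea, not real GPU
# code -- Python threads share one interpreter, so this illustrates the
# pattern, not the speedup).
from concurrent.futures import ThreadPoolExecutor

def saxpy_chunk(args):
    """Compute y = a*x + y for one chunk; every element is independent."""
    a, xs, ys = args
    return [a * x + y for x, y in zip(xs, ys)]

def saxpy_parallel(a, xs, ys, workers=4):
    """Split the arrays into chunks and map saxpy_chunk over them in parallel."""
    size = max(1, len(xs) // workers)
    chunks = [(a, xs[i:i + size], ys[i:i + size]) for i in range(0, len(xs), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        out = []
        for part in pool.map(saxpy_chunk, chunks):  # map preserves chunk order
            out.extend(part)
    return out

print(saxpy_parallel(2.0, list(range(8)), [1.0] * 8))
# [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

Because every output element depends only on its own inputs, the work divides cleanly, which is exactly why GPUs with thousands of simple cores outpace a handful of complex CPU cores on this class of problem.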

Now, the industry is looking to leverage over 40 years of GPU history and innovation to create more advanced machine intelligence. Through the use of sensors, increased connectivity, and new learning techniques, researchers can enable artificial intelligence (AI) applications for everything from autonomous vehicles to scientific research. This, however, requires unprecedented levels of computing power, something NVIDIA is driven to provide. At the GPU Technology Conference (GTC) in San Jose, California, NVIDIA just announced a new GPU platform that takes computing to the extreme: the Tesla P100. NVIDIA CEO Jen-Hsun Huang described the Tesla P100 as the first GPU designed for hyperscale datacenter applications. It features NVIDIA’s new Pascal GPU architecture, the latest memory and semiconductor process technology, and advanced packaging – all to create the densest compute platform to date.

Read more

Mar 30, 2016

IBM’s ‘brain-inspired’ supercomputer to help watch over US nuclear arsenal

Posted in categories: military, robotics/AI, supercomputing

Lawrence Livermore National Laboratory says collaboration project with IBM “could change how we do science”.

Read more

Mar 29, 2016

Researchers Found a Way to Shrink a Supercomputer to the Size of a Laptop

Posted in categories: energy, nanotechnology, supercomputing

Scientists at Lund University in Sweden have found a way to use “biological motors” for parallel computing. The findings could mean vastly more powerful and energy-efficient computers within a decade.

Nanotechnologists at Lund University in Sweden have discovered a way to miniaturize the processing power that is found today only in the largest and most unwieldy of supercomputers. Their findings, which were published in the Proceedings of the National Academy of Sciences, point the way to a future when our laptops and other personal, handheld computing devices pack the computational heft of a Cray Titan or IBM Blue Gene/Q.

But the solution may be a little surprising.

Continue reading “Researchers Found a Way to Shrink a Supercomputer to the Size of a Laptop” »

Mar 29, 2016

Neuromorphic supercomputer has 16 million neurons

Posted in categories: information science, neuroscience, robotics/AI, supercomputing

Today, Lawrence Livermore National Lab (LLNL) and IBM announced the development of a new Scale-up Synaptic Supercomputer (NS16e) that integrates 16 TrueNorth chips in a 4×4 array to deliver 16 million neurons and 4 billion synapses. LLNL will also receive an end-to-end software ecosystem that consists of a simulator; a programming language; an integrated programming environment; a library of algorithms and applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.

The $1 million computer has 16 IBM microprocessors designed to mimic the way the brain works.

IBM says it will be five to seven years before TrueNorth sees widespread commercial use, but the Lawrence Livermore test is a big step in that direction.

Continue reading “Neuromorphic supercomputer has 16 million neurons” »

Mar 28, 2016

IBM wants to accelerate AI learning with new processor tech

Posted in categories: robotics/AI, supercomputing

Deep neural networks (DNNs) can be taught nearly anything, including how to beat us at our own games. The problem is that training AI systems ties up big-ticket supercomputers or data centers for days at a time. Scientists from IBM’s T.J. Watson Research Center think they can cut the horsepower and learning times drastically using “resistive processing units,” theoretical chips that combine CPU and non-volatile memory. Those could accelerate data speeds exponentially, resulting in systems that can do tasks like “natural speech recognition and translation between all world languages,” according to the team.

So why does it take so much computing power and time to teach AI? The problem is that modern AI systems like Google DeepMind’s or IBM Watson’s must perform billions of operations in parallel. That requires numerous CPU-to-memory calls, which quickly add up over billions of cycles. The researchers considered new storage tech like resistive RAM, which can permanently store data at DRAM-like speeds. However, they eventually came up with the idea for a new type of chip, the resistive processing unit (RPU), that puts large amounts of resistive RAM directly onto a CPU.
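
The data-movement bottleneck the researchers describe can be captured in a toy cost model (the numbers below are illustrative assumptions, not IBM’s figures): in a conventional design, every weight update pays an off-chip memory round trip, while an RPU keeps each weight in resistive memory next to the unit that updates it, so that term drops out.

```python
# Toy cost model of neural-network training (hypothetical relative costs,
# not IBM's): total cost = weights * cycles * cost-per-update.
# Conventional design: each update pays a local op plus a memory round trip.
# RPU-style design: the weight lives in resistive RAM on the chip, so the
# round-trip term disappears.

def training_cost(num_weights, num_cycles, local_op=1, memory_round_trip=0):
    """Relative cost of updating every weight once per training cycle."""
    return num_weights * num_cycles * (local_op + memory_round_trip)

weights, cycles = 1_000_000, 1_000
conventional = training_cost(weights, cycles, memory_round_trip=100)  # assume ~100x penalty per update
rpu_style = training_cost(weights, cycles)

print(conventional / rpu_style)  # 101.0
```

Under that assumed 100x per-update memory penalty, colocating the weights wins by roughly the same factor, which is the flavor of exponential-feeling speedup the RPU proposal targets.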

Read more

Mar 26, 2016

Toward a realistic cosmic evolution

Posted in categories: evolution, space, supercomputing

Using the Piz Daint supercomputer, cosmologists at the University of Geneva are the first to simulate the structure of the universe in a way that consistently accounts for the general theory of relativity.

Read more

Mar 24, 2016

Modified NWChem Code Utilizes Supercomputer Parallelization

Posted in categories: chemistry, climatology, evolution, materials, quantum physics, supercomputing

Quicker time to discovery. That’s what scientists focused on quantum chemistry are looking for. According to Bert de Jong, Computational Chemistry, Materials and Climate Group Lead, Computational Research Division, Lawrence Berkeley National Lab (LBNL), “I’m a computational chemist working extensively with experimentalists doing interdisciplinary research. To shorten time to scientific discovery, I need to be able to run simulations at near-real-time, or at least overnight, to drive or guide the next experiments.” To meet those needs both today and in the future, the HPC software used in quantum chemistry research must be updated to take advantage of advanced HPC systems.

NWChem is a widely used open-source computational chemistry package that includes both quantum chemical and molecular dynamics functionality. The NWChem project started in the mid-1990s, and the code was designed from the beginning to take advantage of parallel computer systems. NWChem is actively developed by a consortium of developers and maintained by the Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL) in Washington State. NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large computational chemistry problems efficiently and in their use of available parallel computing resources, from high-performance supercomputers to conventional workstation clusters.

“Rapid evolution of the computational hardware also requires significant effort geared toward the modernization of the code to meet current research needs,” states Karol Kowalski, Capability Lead for NWChem Development at PNNL.

Continue reading “Modified NWChem Code Utilizes Supercomputer Parallelization” »

Mar 17, 2016

Supercomputer simulates whole-body blood flow

Posted in categories: biotech/medical, physics, supercomputing

Physicists say a supercomputer simulation of blood flow around the entire human body is showing promise, based on an experimental test.

Read more

Mar 17, 2016

This Amazing Computer Chip Is Made of Live Brain Cells

Posted in categories: neuroscience, supercomputing

A few years ago, researchers from Germany and Japan were able to simulate one percent of human brain activity for a single second. It took the processing power of one of the world’s most powerful supercomputers to make that happen.

The human brain is by far the most powerful, energy-efficient computer ever created.

So what if we could harness the power of the human brain by using actual brain cells to power the next generation of computers?

Continue reading “This Amazing Computer Chip Is Made of Live Brain Cells” »
