
Ralph Merkle, Robert Freitas, and others have a theoretical design for a molecular mechanical computer that would be 100 billion times more energy efficient than the most energy-efficient conventional green supercomputer. Removing the need for gears, clutches, switches, and springs makes the design easier to build.

Existing designs for mechanical computing can be vastly improved upon in terms of the number of parts required to implement a complete computational system. Only two types of parts are required: links and rotary joints. Links are simply stiff, beam-like structures. Rotary joints allow rotational movement in a single plane.

Simple logic and conditional routing can be accomplished using only links and rotary joints, which remain solidly connected at all times. No gears, clutches, switches, springs, or any other mechanisms are required; a working system does not even need linear slides.
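To get an intuition for how conditional routing alone can yield logic, here is a toy boolean abstraction in Python. It is only a sketch under stated assumptions, not the published mechanical design: the hypothetical `lock` primitive models a constraint that at most one of two links threading it can be displaced at a time, and a link's state is 1 when displaced and 0 when retracted.

```python
# Toy abstraction of conditional routing with links and a "lock"
# (hypothetical model for illustration, not the Merkle/Freitas mechanism).
def lock(link_a: int, link_b: int) -> bool:
    """Return True if the lock permits this pair of link displacements."""
    return not (link_a == 1 and link_b == 1)  # at most one link may be pushed

def conditional_route(control: int, data: int) -> int:
    """Transmit the data link's displacement only when the control link is retracted."""
    if not lock(control, data):
        return 0  # both links pushed: the lock blocks the data link
    return data if control == 0 else 0  # control retracted: motion passes through

# Truth table of the routed output for all input combinations.
for control in (0, 1):
    for data in (0, 1):
        print(f"control={control} data={data} -> out={conditional_route(control, data)}")
```

In the actual design the equivalent roles are played by purely mechanical arrangements of links and rotary joints; the point of the toy model is only that blocking or permitting motion conditionally is already enough to build logic.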

Read more

Conditions in the vast universe can be quite extreme: Violent collisions scar the surfaces of planets. Nuclear reactions in bright stars generate tremendous amounts of energy. Gigantic explosions catapult matter far out into space. But how exactly do processes like these unfold? What do they tell us about the universe? And could their power be harnessed for the benefit of humankind?

To find out, researchers from the Department of Energy’s SLAC National Accelerator Laboratory perform sophisticated experiments and computer simulations that recreate violent cosmic conditions on a small scale in the lab.

“The field of high energy density science is growing very rapidly, fueled by a number of technological breakthroughs,” says Siegfried Glenzer, head of SLAC’s High Energy Density Science Division. “We now have high-power lasers to create extreme states of matter, cutting-edge X-ray sources to analyze these states at the atomic level, and high-performance supercomputers to run complex simulations that guide and help explain our experiments. With its outstanding capabilities in these areas, SLAC is a particularly fertile ground for this type of research.”

Read more

Newswise — Philosopher René Descartes’ famous claim about what makes humans unique is beginning to sound hollow. ‘I think — therefore soon I am obsolete’ seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an ongoing battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human’s input. Recently, the world held its breath as Google’s AlphaGo algorithm beat a professional player at the game of Go—an achievement demonstrating the explosive speed of development in machine capabilities.

But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the prestigious science journal Nature.

Read more

Lov’n Quantum Espresso


Researchers use specialized software such as Quantum ESPRESSO and a variety of HPC software in conducting quantum materials research. Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves and pseudopotentials. Quantum ESPRESSO is coordinated by the Quantum ESPRESSO Foundation and has a growing worldwide user community in academic and industrial research. Its intensive use of dense mathematical routines makes it an ideal candidate for many-core architectures, such as the Intel Xeon Phi coprocessor.
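As an illustration of how Quantum ESPRESSO is typically driven, the sketch below writes a minimal self-consistent-field (SCF) input deck for bulk silicon and launches the plane-wave executable pw.x from Python. The pseudopotential file name, cutoff energy and k-point mesh are placeholder choices for illustration only; a real study would converge these parameters, and pw.x (usually under an MPI launcher) must be installed separately.

```python
import subprocess

# Minimal pw.x SCF input for bulk silicon. Values are illustrative only;
# the pseudopotential file "Si.pbe-rrkjus.UPF" is assumed to exist in ./pseudo.
scf_input = """\
&CONTROL
  calculation = 'scf'
  prefix      = 'si'
  pseudo_dir  = './pseudo'
  outdir      = './tmp'
/
&SYSTEM
  ibrav     = 2
  celldm(1) = 10.26
  nat       = 2
  ntyp      = 1
  ecutwfc   = 30.0
/
&ELECTRONS
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
  Si  28.086  Si.pbe-rrkjus.UPF
ATOMIC_POSITIONS (alat)
  Si  0.00 0.00 0.00
  Si  0.25 0.25 0.25
K_POINTS (automatic)
  4 4 4  0 0 0
"""

with open("si.scf.in", "w") as f:
    f.write(scf_input)

# Run the executable; a production run would use e.g. "mpirun -np 4 pw.x -in si.scf.in".
with open("si.scf.out", "w") as out:
    subprocess.run(["pw.x", "-in", "si.scf.in"], stdout=out, check=True)
```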

The Intel Parallel Computing Centers at Cineca and Lawrence Berkeley National Lab (LBNL), along with the National Energy Research Scientific Computing Center (NERSC), are at the forefront of using HPC software and modifying the Quantum ESPRESSO (QE) code to take advantage of the Intel Xeon processors and Intel Xeon Phi coprocessors used in quantum materials research. In addition to Quantum ESPRESSO, the teams use tools such as Intel compilers, libraries, Intel VTune and OpenMP in their work. The goal is to incorporate their changes into the public version of the code, so that scientists gain the improved optimization and parallelization without having to manually modify legacy code themselves.

Figure 2: Electrical conductivity of a PDI-FCN2 molecule. The electronic density of states calculated by Quantum ESPRESSO is one of the key properties that permit researchers to understand the electrical properties of the device. Courtesy of A. Calzolari (National Research Council of Italy, Institute for Nanoscience, CNR-NANO), R. Colle (University of Bologna, Italy), C. Cavazzoni (Cineca, Italy) and E. Pascolo (OGS, Italy).

Another pre-quantum-computing interim solution for supercomputing. So, we have this as well as Nvidia’s GPUs. Wonder who else?


In summer 2015, US president Barack Obama signed an order intended to provide the country with an exascale supercomputer by 2025. The machine would be 30 times more powerful than today’s leading system, China’s Tianhe-2. Based on extrapolations of existing electronic technology, such a machine would draw close to 0.5 GW – the entire output of a typical nuclear plant. This calls into question the sustainability of continuing down the same path for gains in computing.
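The 0.5 GW figure follows from a simple scaling argument: Tianhe-2 delivered roughly 33.9 petaflops on Linpack while drawing about 17.8 MW (public Top500 figures, excluding cooling), so a machine 30 times faster at the same energy efficiency would need roughly 30 times the power. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the ~0.5 GW exascale estimate, assuming
# Tianhe-2-era energy efficiency is simply scaled up (cooling excluded).
tianhe2_flops = 33.86e15   # Linpack performance, ~33.86 petaflops
tianhe2_power = 17.8e6     # power draw in watts

target_flops = 30 * tianhe2_flops              # "30 times more powerful", roughly 1 exaflop
efficiency = tianhe2_flops / tianhe2_power     # flops per watt
projected_power = target_flops / efficiency    # watts at the same efficiency

print(f"Projected draw: {projected_power / 1e9:.2f} GW")   # ~0.53 GW
```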

One way to reduce the energy cost would be to move to optical interconnects. In his keynote at OFC in March 2016, Professor Yasuhiko Arakawa of the University of Tokyo said that high-performance computing (HPC) will need optical chip-to-chip communication to provide the data bandwidth for future supercomputers. But digital processing itself presents a problem as designers try to deal with issues such as dark silicon – the need to disable large portions of a multibillion-transistor processor at any one time to prevent it from overheating. Photonics may have an answer there as well.

Optalysys founder Nick New says: “With the limits of Moore’s Law being approached, there needs to be a change in how things are done. Some technologies are out there, like quantum computing, but these are still a long way off.”

Read more

I see great potential for the TrueNorth chip as we migrate towards Quantum & Singularity. TrueNorth is an interim chip that assists researchers, engineers, etc. in their efforts to mimic the human brain’s neuro sensors and processing for robotics, BMI technology, etc.


The new IBM supercomputer chip mimics the human brain by using an architecture with 1 million neurons. Nevertheless, its true purpose remains in question for a project with massive public funding.
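For readers unfamiliar with what a “neuron” means on a chip like this: neuromorphic hardware implements digital spiking neurons. The following is a minimal leaky integrate-and-fire sketch in Python, offered only as a rough illustration of the general behaviour rather than IBM’s exact neuron equations (TrueNorth’s model is a richer, fixed-point digital design).

```python
# Minimal leaky integrate-and-fire (LIF) neuron, for illustration only.
# TrueNorth's actual neuron model is more elaborate, but the
# spike-on-threshold behaviour is of this general kind.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate input each tick with leakage; fire and reset on threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)                    # emit a spike
            potential = 0.0                     # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # -> [0, 0, 0, 1, 0, 0, 1]
```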

Read more

Interesting; however, I cannot wait to see Nvidia’s new car, especially with their new GPU chip & DGX-1 technology.


While companies such as Google chase the fully autonomous car, Toyota is taking a more measured approach toward a “guardian angel” car that would seize control only when an accident is imminent.

But as starkly different as those approaches are, they both will require a wide range of data-intensive technologies, according to Gill Pratt (pictured), chief executive officer of the Toyota Research Institute, a research center focused on AI and robotics. He spoke at the GPU Technology Conference in San Jose today.

Toyota has made a huge bet – a billion dollars over five years, in fact – not only on semiautonomous cars but also on robots that could help older people with indoor mobility. The Toyota Research Institute, which will have facilities near Stanford University and the Massachusetts Institute of Technology, is intended to focus on both what Toyota calls outdoor mobility (cars) and indoor mobility (robots).