
[The figure above depicts a layout showing two ‘somas’, circuits that simulate the basic functions of a neuron. The green circles play the role of synapses. From a presentation by K. K. Likharev, used with permission.]

One possible layout is shown above. Electronic devices called ‘somas’ play the role of the neuron’s cell body, which is to add up its inputs and fire an output. In neuromorphic hardware, somas may mimic neurons at several different levels of sophistication, depending on what the task at hand requires. For instance, somas may generate spikes (sequences of pulses) just like neurons in the brain. There is growing evidence that sequences of spikes in the brain carry more information than the average firing rate alone, which had previously been considered the most important quantity. Spikes are carried through the two types of neural wires, axons and dendrites, which are represented by the red and blue lines in the figure above. The green circles are connections between these wires that play the role of synapses. Each of these ‘latching switches’ must be able to hold a ‘weight’, encoded in either a variable capacitance or a variable resistance. In principle, memristors would be an ideal component here, if a version suitable for mass production could be developed. Crucially, the entire crossnet architecture can be implemented in traditional silicon-based (“CMOS”-like) technology. Each crossnet (as shown in the figure) is designed so that it can be stacked, with additional wires connecting somas on different layers. In this way, neuromorphic crossnet technology could achieve component densities rivaling those of the human brain.
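To make the somas’ add-up-and-fire role concrete, here is a minimal sketch in Python, assuming a simple leaky integrate-and-fire model and a plain weight matrix standing in for the crossbar of latching switches. The names, sizes, and parameter values are illustrative only and are not taken from Likharev’s design.

```python
# Toy crossnet sketch: a weight matrix models the synaptic "latching switches",
# and each soma is a leaky integrate-and-fire unit that sums its weighted
# inputs and emits a spike when its potential crosses a threshold.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_SOMA = 8, 4
weights = rng.uniform(0.0, 1.0, size=(N_SOMA, N_IN))   # synaptic weights (crossbar)

v = np.zeros(N_SOMA)        # stored potential of each soma
LEAK = 0.9                  # per-step decay of the stored potential
THRESHOLD = 2.0             # firing threshold (arbitrary units)

for t in range(20):
    spikes_in = (rng.random(N_IN) < 0.3).astype(float)   # random input spike train
    v = LEAK * v + weights @ spikes_in                    # the crossbar does the summation
    fired = v >= THRESHOLD                                # somas that fire this step
    v[fired] = 0.0                                        # reset after a spike
    if fired.any():
        print(f"t={t:2d}  somas fired: {np.flatnonzero(fired)}")
```

The key point the sketch illustrates is the division of labor: the crossbar of weights performs the summation in one step, while the somas only need to accumulate, compare against a threshold, and reset.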

Likharev’s design is still theoretical, but several neuromorphic chips are already in production, such as IBM’s TrueNorth chip, which features spiking neurons, and Qualcomm’s “Zeroth” project. NVIDIA is currently making major investments in deep learning hardware, and the next generation of NVIDIA devices dedicated to deep learning will likely look closer to neuromorphic chips than to traditional GPUs. Another important player is the startup Nervana Systems, which was recently acquired by Intel for $400 million. Many governments are investing large amounts of money in academic research on neuromorphic chips as well. Prominent examples include the EU’s BrainScaleS project, the UK’s SpiNNaker project, and DARPA’s SyNAPSE program.


A new “step-cell” solar design could reach 35 percent efficiency.


The cost of solar power is beginning to reach price parity with cheaper fossil fuel-based electricity in many parts of the world, yet the clean energy source still accounts for slightly more than 1% of the world’s electricity mix.

To boost global solar power generation, researchers must overcome some of the technological limitations that are preventing solar power from scaling up further, chief among them the inability to produce very high-efficiency solar cells – cells capable of converting a large fraction of incoming sunlight into usable electrical energy – at very low cost.

A team of researchers from the Masdar Institute and the Massachusetts Institute of Technology (MIT) may have found a way around the seemingly inseparable link between high efficiency and high cost with an innovative multi-junction solar cell that leverages a unique “step-cell” design and low-cost silicon. The new step-cell combines two different layers of sunlight-absorbing material to harvest a broader range of the sun’s energy, while using a novel, low-cost manufacturing process.
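To see why stacking two absorbing layers helps, here is a toy calculation, assuming the sun radiates as a 5778 K blackbody and that each absorbed photon yields exactly its layer’s band-gap energy (the so-called ultimate-efficiency limit, ignoring all electrical and thermal losses). The band gaps below are illustrative choices and are not the specific materials used in the Masdar/MIT step-cell.

```python
# Toy "ultimate efficiency" comparison: one absorber vs. a two-layer tandem.
import numpy as np

K_B = 8.617e-5          # Boltzmann constant, eV/K
T_SUN = 5778.0          # effective solar surface temperature, K

E = np.linspace(0.01, 10.0, 20000)                         # photon energy grid, eV
photon_flux = E**2 / (np.exp(E / (K_B * T_SUN)) - 1.0)      # Planck photon flux (arb. units)
incident_power = np.trapz(E * photon_flux, E)               # total incident power (arb. units)

def ultimate_efficiency(gaps):
    """Fraction of incident power captured when each absorber converts the
    photons between its gap and the next gap up into exactly its gap energy."""
    edges = sorted(gaps) + [np.inf]
    captured = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (E >= lo) & (E < hi)
        captured += lo * np.trapz(photon_flux[band], E[band])
    return captured / incident_power

print(f"single junction (1.1 eV):          {ultimate_efficiency([1.1]):.1%}")
print(f"tandem (1.1 eV bottom + 1.7 eV top): {ultimate_efficiency([1.1, 1.7]):.1%}")
```

Running the sketch shows the tandem capturing a noticeably larger fraction of the incident power: the wide-gap top layer wastes less energy from blue photons, while the narrow-gap bottom layer still collects the red and infrared photons the top layer lets through.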

UC Davis has developed the KiloCore, a CPU with 1,000 cores suited to parallel tasks such as encryption, crunching scientific data, and encoding video.

Processor technology has certainly come far, with a host of different materials and techniques being used to increase speed and power. And now we have a new kind of development: a team of scientists at UC Davis has built the world’s first 1,000-core processor.

The team has unveiled the KiloCore, a CPU with 1,000 cores and all the speed that comes with that kind of power. The chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors.
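For a sense of the kind of workload a 1,000-core chip targets, here is a small Python sketch that splits an input into independent blocks and processes them in parallel across however many cores are available. It is purely illustrative of embarrassingly parallel work and says nothing about KiloCore’s actual instruction set or programming model; the per-block hash stands in for tasks like encrypting or encoding one chunk.

```python
# Embarrassingly parallel processing of independent data blocks.
import hashlib
from multiprocessing import Pool, cpu_count

def process_chunk(chunk: bytes) -> bytes:
    """Stand-in for per-core work such as encrypting or encoding one block."""
    return hashlib.sha256(chunk).digest()

if __name__ == "__main__":
    data = bytes(16 * 1024 * 1024)                # 16 MiB of dummy input
    chunk_size = 64 * 1024                        # one independent block per task
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Each block is independent, so throughput scales with the number of
    # available cores -- the same property a 1,000-core chip exploits.
    with Pool(cpu_count()) as pool:
        digests = pool.map(process_chunk, chunks)

    print(f"processed {len(digests)} blocks on {cpu_count()} worker processes")
```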


This new computer system could, in theory, be 100 billion times more energy efficient than the most energy-efficient conventional green supercomputer. Using only links and rotary joints, this “molecular mechanical computer” removes the need for parts that create friction and generate heat.

The trend for computing, and for technology in general, really consists of just one word: smaller. Previously, technology that could fit on your desk was all the rage. Then it became tech that fit in your bag, then in the palm of your hand. Now scientists are playing with even smaller technology, down to the molecular scale.

Scientists have developed a computer system that can, theoretically, be 100 billion times more energy efficient than the most energy-efficient conventional green supercomputer. Using only links and rotary joints, this “molecular mechanical computer” removes the need for gears, clutches, switches, springs, and other parts that create friction and generate heat.
