
“Think what we can do if we teach a quantum computer to do statistical mechanics,” posed Michael McGuigan, a computational scientist with the Computational Science Initiative at the U.S. Department of Energy’s Brookhaven National Laboratory.

At the time, McGuigan was reflecting on Ludwig Boltzmann and how the renowned physicist had to vigorously defend his theories of statistical mechanics. Boltzmann, who proffered his ideas about how atomic properties determine the physical properties of matter in the late 19th century, faced one extraordinarily high hurdle: atoms had not even been proven to exist at the time. The fatigue and discouragement of having his views on atoms and physics rejected by his peers haunted Boltzmann for the rest of his life.

Today, the Boltzmann factor, which gives the relative probability of finding a system of particles in a specific energy state at a given temperature, is widely used in physics. For example, the Boltzmann factor is used in calculations on the world’s largest supercomputers to study the behavior of atoms, molecules, and the quark “soup” discovered using facilities such as the Relativistic Heavy Ion Collider located at Brookhaven Lab and the Large Hadron Collider at CERN.
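For reference, the standard textbook form (not spelled out in the article): the Boltzmann factor weights a state with energy \(E_i\) at temperature \(T\) as

```latex
p_i \;\propto\; e^{-E_i/(k_B T)},
\qquad
\frac{p_i}{p_j} \;=\; e^{-(E_i - E_j)/(k_B T)},
```

where \(k_B\) is Boltzmann's constant. The ratio of the probabilities of two states depends only on their energy difference, which is what makes the factor so useful in simulations of many-particle systems.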

Circa 2017


Thousands of years of human knowledge has been learned and surpassed by the world’s smartest computer in just 40 days, a breakthrough hailed as one of the greatest advances ever in artificial intelligence.

Google DeepMind amazed the world last year when its AI programme AlphaGo beat world champion Lee Sedol at Go, an ancient and complex game of strategy and intuition which many believed could never be cracked by a machine.

AlphaGo was so effective because it had been programmed with millions of moves of past masters, and could predict its own chances of winning, adjusting its game-plan accordingly.

Biological organisms have certain useful attributes that synthetic robots do not, such as the abilities to heal, adapt to new situations, and reproduce. Yet molding biological tissues into robots or tools has been exceptionally difficult to do: Experimental techniques, such as altering a genome to make a microbe perform a specific task, are hard to control and not scalable.

Now, a team of scientists at the University of Vermont and Tufts University in Massachusetts has used a supercomputer to design novel lifeforms with specific functions, then built those organisms out of frog cells.

The new, AI-designed biological bots crawl around a petri dish and heal themselves. Surprisingly, the biobots also spontaneously self-organize and clear their dish of small trash pellets.

The development of technologies that can process information based on the laws of quantum physics is predicted to have profound impacts on modern society.

For example, quantum computers may hold the key to solving problems that are too complex for today’s most powerful supercomputers, and a quantum internet could ultimately protect the world’s information from malicious attacks.

However, these technologies all rely on “quantum information,” which is typically encoded in single quantum particles that are extremely difficult to control and measure.

Think of the human brain as an immensely powerful supercomputer. But as one of the most complex systems in Nature, there’s still much to learn about how it works. That’s why researchers from the Human Brain Project are attempting to unravel even more of its mysteries.

However, most neuroscientists still believe that consciousness is generated in our brains, trying to justify their chosen profession as the only key to our experience of the world. It is not. We humans don’t live in a vacuum; we are not “brains in a vat,” so to speak. Just like your smartphone, your brain is a ‘bio’-logical computing device of your mind, an interface into physical reality.

Our minds are connected to the broader mind-network, like computers in the Cloud. Consciousness is a “non-local” Cloud; our brain-mind systems are receivers, processors, and transmitters of information within that Cloud. So, a truly multidisciplinary and computationalist approach is required to crack the neural code and reverse-engineer consciousness in AI and cybernetic systems.

We shouldn’t be surprised if all the hype about testing for the “seat of consciousness” ends up only refining our understanding of neural correlates, not of how consciousness originates in the brain, because consciousness does not originate there. The Internet or a cellular network is not generated by your smartphone; it is only processed by it.

Species-wide mind-networks are ubiquitous in Nature. What’s different with humans is that the forthcoming cybernetic mediation could become synthetic telepathy and, beyond that, the emergence of one global mind, the Syntellect Emergence (cf. The Syntellect Hypothesis). #consciousness #HumanBrainProject


In episode four of Bloomberg’s Moonshot, see how 500 scientists in 100 universities are spending $1.1 billion on the Human Brain Project.

Using a supercomputing system, MIT researchers have developed a model that captures what web traffic looks like around the world on a given day, which can be used as a measurement tool for internet research and many other applications.

Understanding patterns at such a large scale, the researchers say, is useful for informing policy, identifying and preventing outages, defending against cyberattacks, and designing more efficient computing infrastructure. A paper describing the approach was presented at the recent IEEE High Performance Extreme Computing Conference.

For their work, the researchers gathered the largest publicly available internet traffic dataset, comprising 50 billion data packets exchanged in different locations across the globe over a period of several years.
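As a rough illustration of the kind of aggregation such a dataset requires (a minimal sketch, not the MIT team's actual pipeline), packet records can be reduced to a sparse source-to-destination traffic matrix; the node labels below are made up:

```python
from collections import Counter

def traffic_matrix(packets):
    """Aggregate packet records into a sparse source->destination count matrix.

    `packets` is an iterable of (source, destination) pairs; the result maps
    each observed (source, destination) pair to its packet count. Pairs that
    never appear simply have an implicit count of zero.
    """
    return Counter((src, dst) for src, dst in packets)

# Toy example with hypothetical node labels:
sample = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C")]
matrix = traffic_matrix(sample)
print(matrix[("A", "B")])  # prints 2
```

At the scale of 50 billion packets, the same idea would be implemented with distributed, sparse-matrix machinery rather than an in-memory counter, but the underlying representation (a sparse matrix of source-destination counts) is the same.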

For the first time ever, a quantum computer has performed a computational task that would be essentially impossible for a conventional computer to complete, according to a team from Google.

Scientists and engineers from the company’s lab in Santa Barbara announced the milestone in a report published Wednesday in the journal Nature. They said their machine was able to finish its job in just 200 seconds—and that the world’s most powerful supercomputers would need 10,000 years to accomplish the same task.

The task itself, which involved executing a randomly chosen sequence of instructions, does not have any particular practical uses. But experts say the achievement is still significant as a demonstration of the future promise of quantum computing.