
Great article about a mad scientist whose vision caused the world to look cross-eyed. Many of us have been there at some time in our lives.

In 1981, Richard Feynman urged the world to build a quantum computer. In his own words:

“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

Read more

I personally can confirm that QC is not being worked on and advanced by just a couple of groups such as D-Wave and IBM. The questions/bumps in the road that we will all face are threefold:

1) How do we standardize QC? Right now, like most innovation, the work is done in silos with limited cross-collaboration across government, labs and universities, and commercial companies.
2) Governance and compliance: how will these need to change across multiple areas?
3) Identify and mitigate all impacts up front instead of after deployment (don't be reactive), because we will not have that luxury given the hackers out there.


There is a temptation to lump quantum computing in with technologies such as fusion power in the sense that both have been proposed for decades with the promise of tremendous leaps in performance.

Whilst fusion power continues to frustrate, there are signs of real progress being made in quantum computing. There is barely a tech giant in the world that doesn’t have dedicated teams working on the topic, and these teams are beginning to bring quantum computing out of the lab and into the real world.

At the forefront of this is IBM, which recently announced that it would connect a quantum computer to the web and allow us to play with it. The project involves a 5-qubit machine, with each qubit able to occupy both the ‘0’ and ‘1’ states at the same time, increasing its potential computational power enormously. An n-qubit machine can hold a superposition over 2^n basis states, so a 5-qubit machine spans 32 of them, and once you get past roughly 300 qubits that number exceeds the estimated number of atoms in the observable universe.
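For a rough sense of those numbers, here is a minimal back-of-the-envelope sketch in Python; the ~10^80 figure for atoms in the observable universe is a commonly cited estimate and an assumption on my part, not something from the article:

```python
# Back-of-the-envelope check of the qubit state counts mentioned above.
# The ~10^80 figure for atoms in the observable universe is a commonly
# cited estimate, not taken from the article.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80

for n_qubits in (1, 5, 50, 300):
    basis_states = 2**n_qubits  # an n-qubit register spans 2^n basis states
    comparison = "exceeds" if basis_states > ATOMS_IN_OBSERVABLE_UNIVERSE else "is below"
    print(f"{n_qubits:>3} qubits -> {basis_states:.3e} basis states "
          f"({comparison} the atom estimate)")
```

Running it shows the crossover: 2^300 is around 2 x 10^90, comfortably past the atom estimate, while 50 qubits is still far below it.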

Read more

I know that I reported on this a few weeks ago; however, this article shares some additional insights on how this new method will enable smaller, more efficient devices, including better stabilization in quantum computing (QC)…


A multi-institutional team of researchers has discovered novel magnetic behavior on the surface of a specialized material that holds promise for smaller, more efficient devices and other advanced technology.

Researchers at the Department of Energy’s Oak Ridge National Laboratory, Massachusetts Institute of Technology and their collaborators used neutron scattering to reveal magnetic moments in hybrid topological insulator (TI) materials at room temperature, hundreds of degrees Fahrenheit warmer than the extreme sub-zero cold where the properties are expected to occur.

The discovery promises new opportunities for next-generation electronic and spintronic devices such as improved transistors and quantum computing technologies. Their research is discussed in a paper published in the journal Nature.

Read more

Movidius’ Myriad 2 vision processing chip (Photo: Movidius)

The branch of artificial intelligence called deep learning has given us new wonders such as self-driving cars and instant language translation on our phones. Now it’s about to inject smarts into every other object imaginable.

That’s because makers of silicon processors from giants such as Intel Corp. and Qualcomm Technologies Inc. as well as a raft of smaller companies are starting to embed deep learning software into their chips, particularly for mobile vision applications. In fairly short order, that’s likely to lead to much smarter phones, drones, robots, cameras, wearables and more.

Read more

Google’s Project Soli was one of the highlights of the company’s developer conference last year, but there’s been little news about it since then.

The technology uses special radar sensors packed into a tiny chip to detect a person’s physical movements (such as rubbing two fingers together), letting a person do things like turn up the volume on a radio without actually touching anything.

The recent news that Regina Dugan, head of the Advanced Technology and Projects lab at Google that oversaw Soli, had jumped ship to work at rival Facebook did not seem like a good sign for the project’s future. And with Microsoft’s recent unveiling of similar technology, Google’s impressive product demo from last year seemed like it might never make it out of the lab.

Read more

Harold Cohen, an abstract painter who developed Aaron, one of the first and eventually one of the most complex computer software programs for generating works of art, died on April 27 at his home in Encinitas, Calif. He was 87.

The cause was congestive heart failure, his son, Paul, said.

Mr. Cohen was a painter growing weary of the traditional practice of art in the late 1960s when he taught himself, out of curiosity, how to program a computer.

Read more

Garage startup (credit: Chase Dittmer)

By Director of UWA Centre for Software Practice, University of Western Australia

Pharmaceutical companies typically develop new drugs with thousands of staff and budgets that run into the billions of dollars. One estimate puts the cost of bringing a new drug to market at $2.6 billion with others suggesting that it could be double that cost at $5 billion.

One man, Professor Atul Butte, director of the University of California Institute of Computational Health Sciences, believes that, like other Silicon Valley startups, almost anyone can bring a drug to market from their garage with just a computer, the internet, and freely available data.

In a talk given at the Science on the Swan conference held in Perth this week, Professor Butte outlined the process for an audience of local and international scientists and medics.

Read more

Researchers at the University of Liverpool have developed a set of algorithms that will help teach computers to process and understand human languages.

Whilst mastering language is easy for humans, it is something that computers have not yet been able to achieve. Humans understand language in a variety of ways: for example, by looking a word up in a dictionary, or by associating it with other words in the same sentence in a meaningful way.

The algorithms will enable a computer to act in much the same way a human would when encountering an unknown word. When the computer comes across a word it doesn’t recognise or understand, the algorithms make it look the word up in a dictionary (such as WordNet) and try to guess what other words should appear alongside the unknown word in the text.
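As a rough illustration of that look-up-then-guess behaviour (not the Liverpool team’s actual code, whose details the article doesn’t give), a minimal Python sketch using NLTK’s WordNet interface might look like this; the fallback step is simplified to just reporting the co-occurring words:

```python
# Minimal sketch of the "look it up, otherwise guess from context" idea described
# above. Illustrative only, not the Liverpool team's algorithm.
# Assumes NLTK is installed and the WordNet corpus has been downloaded, e.g.
#   import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet


def interpret_word(word, sentence_words):
    """Return a dictionary gloss for `word` if WordNet knows it; otherwise
    fall back to the neighbouring words in the sentence as context clues."""
    synsets = wordnet.synsets(word)
    if synsets:
        # "Looking it up": take the most common sense's definition.
        return f"{word}: {synsets[0].definition()}"
    # Unknown word: guess from the other words it appears alongside.
    context = [w for w in sentence_words if w != word]
    return f"{word}: not in dictionary; likely related to {', '.join(context)}"


if __name__ == "__main__":
    sentence = ["the", "flurp", "grazed", "quietly", "in", "the", "green", "meadow"]
    print(interpret_word("meadow", sentence))  # found in WordNet
    print(interpret_word("flurp", sentence))   # unknown, guessed from context
```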

Read more