
More insights around the logical quantum gate for photons demonstrated by the Max Planck Institute of Quantum Optics (MPQ). Leveraging this gate would allow qubits to be controlled and manipulated more precisely during transmission and processing, and it brings us closer to a stable quantum computing environment.


MPQ scientists take an important step towards a logical quantum gate for photons.

Scientists from all over the world are working on concepts for future quantum computers and their experimental realization. Commonly, a typical quantum computer is considered to be based on a network of quantum particles that serve for storing, encoding and processing quantum information. In analogy to the case of a classical computer, a quantum logic gate that assigns output signals to input signals in a deterministic way would be an essential building block. A team led by Dr. Stephan Dürr from the Quantum Dynamics Division of Prof. Gerhard Rempe at the Max Planck Institute of Quantum Optics has now demonstrated in an experiment how an important gate operation — the exchange of the binary bit values 0 and 1 — can be realized with single photons. A first light pulse containing one photon only is stored as an excitation in an ultracold cloud of about 100,000 rubidium atoms.
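For readers unfamiliar with gate notation, the operation described here, swapping the logical values 0 and 1, corresponds to the single-qubit NOT (Pauli-X) gate. The lines below are the generic textbook form, not the specific photonic implementation reported by MPQ:

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad X\,|0\rangle = |1\rangle, \qquad X\,|1\rangle = |0\rangle, \qquad X\,(\alpha|0\rangle + \beta|1\rangle) = \alpha|1\rangle + \beta|0\rangle

Applied to a superposition, the gate flips the roles of the two basis states while preserving the amplitudes, which is what makes a deterministic single-photon realization so valuable.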

Read more

Making software immortal: Raytheon is trying to make it a reality.


CAMBRIDGE, Mass., May 2, 2016 /PRNewswire/ — A team led by Raytheon BBN Technologies is developing methods to make mobile applications viable for up to 100 years, despite changes in hardware, operating system upgrades and supporting services. The U.S. Air Force is sponsoring the four-year, $7.8 million contract under the Defense Advanced Research Projects Agency’s Building Resource Adaptive Software Systems program.

“Mobile apps are pervasive in the military, but frequent operating system upgrades, new devices and changing missions and environments require manual software engineering that is expensive and causes unacceptable delays,” said Partha Pal, principal scientist at Raytheon BBN. “We are developing techniques to eliminate these interruptions by identifying the way these changes affect application functionality and modifying the software.”

Read more

I am so glad to see this from Bill. Until we drastically improve the underpinning technology to an advanced, mature form of quantum computing, AI is not a threat in non-criminal use. The only danger is when terrorists, drug cartels, and other criminals use AI in the form of drones, robotics, bots, etc. to attack, burglarize, murder, and spread terror; and even then it is not AI doing these things on its own.


Munger, Gates on future of AI

Charlie Munger, Berkshire Hathaway vice-chairman, shares his thoughts on American Express, Costco and IBM's future working with artificial intelligence. And Bill Gates explains why it will be a huge help.

Read more

Ask an Information Architect, CDO, or Data Architect (Enterprise or otherwise) and they will tell you they have always known that information/data is a basic staple like electricity, and that they are glad folks are finally realizing it. So the same view we apply to utilities as core to our infrastructure and survival should also be applied to information. In fact, information in some areas can be even more important than electricity when you consider that information can launch missiles, cure diseases, make you poor or wealthy, and take down a government or even a country.


What is information? Is it energy, matter, or something completely different? Although we take this word for granted and without much thought in today’s world of fast Internet and digital media, this was not the case in 1948 when Claude Shannon laid the foundations of information theory. His landmark paper interpreted information in purely mathematical terms, a decision that dematerialized information forever more. Not surprisingly, there are many nowadays that claim — rather unthinkingly — that human consciousness can be expressed as “pure information”, i.e. as something immaterial graced with digital immortality. And yet there is something fundamentally materialistic about information that we often ignore, although it stares us — literally — in the eye: the hardware that makes information happen.
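To make "purely mathematical terms" concrete: Shannon measured the information produced by a source by its entropy. The formula below is the standard textbook definition rather than anything specific to this article:

H(X) = -\sum_{i} p_i \log_2 p_i \quad \text{bits per symbol}

A fair coin toss, with p = 1/2 for each outcome, carries exactly one bit, regardless of what physical medium records the result.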

As users we constantly interact with information via a machine of some kind, such as our laptop, smartphone or wearable. As developers or programmers we code via a computer terminal. As computer or network engineers we often have to wade through the sweltering heat of a server farm, or deal with the material properties of optical fibre or copper in our designs. Hardware and software are the fundamental ingredients of our digital world, both necessary not only for engineering information systems but also for interacting with them. But this status quo is about to be massively disrupted by Artificial Intelligence.

A decade from now the postmillennial youngsters of the late 2020s will find it hard to believe that once upon a time the world was full of computers, smartphones and tablets, and that people had to interact with these machines in order to access information or build information systems. For them, information will be more like electricity: always there, always available to power whatever you want to do. And this will be possible because artificial intelligence systems will manage information complexity so effectively that the right information can be delivered to the right person at the right time, almost in an instant. So let's see what that would mean, and how different it would be from what we have today.

Read more

Cambridge University spin-out Optalysys has been awarded a $350k grant for a 13-month project from the US Defense Advanced Research Projects Agency (DARPA). The project will see the company advance their research in developing and applying their optical co-processing technology to solving complex mathematical equations. These equations are relevant to large-scale scientific and engineering simulations such as weather prediction and aerodynamics.

The Optalysys technology is extremely energy efficient, using light rather than electricity to perform intensive mathematical calculations. The company aims to provide existing computer systems with massively boosted processing capabilities, eventually reaching exaFLOP rates (a billion billion calculations per second). The technology operates at a fraction of the energy cost of conventional high-performance computers (HPCs) and has the potential to run orders of magnitude faster.

In April 2015 Optalysys announced that they had successfully built a scalable, lens-less optical processing prototype that can perform mathematical functions. Codenamed Project GALELEO, the device demonstrates that second-order derivatives and correlation pattern matching can be performed optically in a scalable design.
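Mathematically, the pattern matching mentioned above is a cross-correlation, and the Fourier transform turns correlation into a simple pointwise multiplication, which is the operation an optical correlator performs with lenses and light. The NumPy sketch below illustrates that equivalence digitally; the array names and sizes are made up for illustration and have nothing to do with Optalysys hardware:

```python
import numpy as np

# Hypothetical inputs: a "scene" and a smaller "template" to locate within it.
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
template = scene[100:132, 80:112]          # plant the template inside the scene

# Cross-correlation via the Fourier domain:
#   corr = IFFT( FFT(scene) * conj(FFT(template_padded)) )
# An optical correlator realizes the same pointwise multiplication in light.
pad = np.zeros_like(scene)
pad[:template.shape[0], :template.shape[1]] = template
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(pad))).real

# The correlation peak marks where the template best matches the scene.
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("best match at (row, col):", peak)   # expected near (100, 80)
```

The digital version costs O(N log N) operations per transform; the optical version computes the transform in a single pass of light, which is where the claimed speed and energy advantages come from.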

Read more

I forgot Sony in my list of contact lens patents: Sony has a new contact-lens camera patent. So we have Google, Huawei, and Samsung with AR and CPU patents, and Sony with patents on the camera. Now waiting for announcements from Apple and my favorite, Microsoft.


Sony has joined Google and Samsung in the world of contact lens camera patents; Sony's version also has zoom and aperture control built in.

Read more

Nice; however, I also see 3D printing, along with machine learning, becoming part of cosmetic procedures and surgeries.


With an ever-increasing volume of electronic data being collected by the healthcare system, researchers are exploring the use of machine learning—a subfield of artificial intelligence—to improve medical care and patient outcomes. An overview of machine learning and some of the ways it could contribute to advancements in plastic surgery are presented in a special topic article in the May issue of Plastic and Reconstructive Surgery®, the official medical journal of the American Society of Plastic Surgeons (ASPS).

“Machine learning has the potential to become a powerful tool in plastic surgery, allowing surgeons to harness complex clinical data to help guide key clinical decision-making,” write Dr. Jonathan Kanevsky of McGill University, Montreal, and colleagues. They highlight some key areas in which machine learning and “Big Data” could contribute to progress in plastic and reconstructive surgery.

Machine Learning Shows Promise in Plastic Surgery Research and Practice

Machine learning analyzes historical data to develop algorithms capable of knowledge acquisition. Dr. Kanevsky and coauthors write, “Machine learning has already been applied, with great success, to process large amounts of complex data in medicine and surgery.” Projects with healthcare applications include the IBM Watson Health cognitive computing system and the American College of Surgeons’ National Surgical Quality Improvement Program.
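As a rough illustration of what "knowledge acquisition" from historical data looks like in code, the sketch below trains a simple classifier to predict a binary surgical outcome from a few features. The data, feature names, and model choice are entirely hypothetical; real clinical models are built on curated registry data and validated far more rigorously:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical records: [age, BMI, smoker (0/1), operative_time_min]
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.normal(45, 12, 500),        # age in years
    rng.normal(27, 4, 500),         # body mass index
    rng.integers(0, 2, 500),        # smoker flag
    rng.normal(120, 30, 500),       # operative time in minutes
])

# Synthetic outcome: complication risk loosely tied to BMI and smoking.
risk = 0.05 * (X[:, 1] - 27) + 0.8 * X[:, 2] + rng.normal(0, 0.5, 500)
y = (risk > 0.5).astype(int)        # 1 = complication, 0 = uneventful recovery

# Learn a decision rule from the historical records, then test it on held-out cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```

The point of the sketch is the workflow the article describes, fit on past cases and evaluate on unseen ones, not the particular algorithm; clinical deployments would add calibration, external validation, and regulatory review.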

Read more