
Researchers make leap in measuring quantum states

Another major leap forward in controlling system noise in QC.


A breakthrough in the full characterisation of quantum states has been published today as an Editors’ Suggestion in the journal Physical Review Letters.

The full characterisation (tomography) of quantum states is a necessity for future quantum computing. However, standard techniques are inadequate for the large quantum bit-strings required in full-scale quantum computers.

A research team from the Quantum Photonics Laboratory at RMIT University and EQuS at the University of Sydney has demonstrated a new technique for quantum tomography — self-guided quantum tomography — which opens future pathways for characterisation of large quantum states and provides robustness against inevitable system noise.
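For readers curious what “self-guided” means in practice: the technique treats tomography as an iterative optimization, steering measurements so the trial state climbs toward maximum fidelity with the unknown state rather than reconstructing it from a fixed measurement set. The sketch below is a minimal, illustrative single-qubit simulation of that idea, assuming an SPSA-style stochastic update and a simulated, shot-noisy fidelity estimate standing in for real measurements; none of the names or parameter choices come from the paper itself.

```python
# Minimal sketch of self-guided tomography for one qubit, assuming the
# experiment can return a noisy estimate of the fidelity between a trial
# pure state and the unknown state. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def bloch_state(theta, phi):
    """Pure qubit state |psi(theta, phi)> as a 2-vector."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

# Unknown state that the simulated "experiment" hides from the optimizer.
true_state = bloch_state(1.1, 2.3)

def measured_fidelity(params, shots=100):
    """Stand-in for an experimental fidelity estimate with shot noise."""
    trial = bloch_state(*params)
    p = abs(np.vdot(trial, true_state)) ** 2       # true overlap
    return rng.binomial(shots, p) / shots           # noisy estimate

# SPSA-style ascent: two fidelity estimates per iteration, in any dimension.
params = np.array([0.3, 0.3])                       # initial guess
for k in range(1, 301):
    a_k = 1.0 / k ** 0.602                          # step-size schedule
    c_k = 0.1 / k ** 0.101                          # perturbation schedule
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    f_plus = measured_fidelity(params + c_k * delta)
    f_minus = measured_fidelity(params - c_k * delta)
    grad = (f_plus - f_minus) / (2 * c_k) * delta   # stochastic gradient
    params = params + a_k * grad                    # climb the fidelity

final = abs(np.vdot(bloch_state(*params), true_state)) ** 2
print(f"final fidelity with the unknown state: {final:.4f}")
```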

World’s most powerful quantum computer now online at USC

Good for USC.


Following a recent upgrade, the USC-Lockheed Martin Quantum Computing Center (QCC) based at the USC Information Sciences Institute (ISI) is now the leader in quantum processing capacity.

With the upgrade — to 1,098 qubits from 512 — the D-Wave 2X™ processor is enabling QCC researchers to continue their efforts to close the gap between academic research in quantum computation and real-world critical problems.

The new processor will be used to study how and whether quantum effects can speed up the solution of tough optimization, machine learning and sampling problems. Machine-learning algorithms are widely used in artificial intelligence tasks.
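For context, D-Wave processors are quantum annealers: a problem is handed to them as an Ising or QUBO energy function over binary variables, and the hardware samples low-energy configurations. The toy example below shows that problem format with a brute-force solver standing in for the hardware; the matrix values are purely illustrative.

```python
# Minimal sketch of the kind of problem an annealer is given: a QUBO
# (quadratic unconstrained binary optimization) objective x^T Q x over
# binary variables. The instance and the brute-force solver are only
# illustrative; a real device tackles far larger problems embedded onto
# its qubit graph.
import itertools
import numpy as np

# Example 4-variable QUBO (upper-triangular, illustrative values).
Q = np.array([
    [-2,  1,  1,  0],
    [ 0, -2,  1,  1],
    [ 0,  0, -2,  1],
    [ 0,  0,  0, -2],
], dtype=float)

def energy(x, Q):
    """QUBO objective x^T Q x for a binary assignment x."""
    x = np.asarray(x, dtype=float)
    return x @ Q @ x

# Brute force is fine for 4 variables; annealers target problems where
# exhaustive search over 2^n assignments is hopeless.
best = min(itertools.product([0, 1], repeat=4), key=lambda x: energy(x, Q))
print("lowest-energy assignment:", best, "energy:", energy(best, Q))
```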

Atom-scale storage holds 62TB in a square inch

Storage tech doesn’t get much better than this. Scientists at TU Delft have developed a technique that uses chlorine atom positions as data bits, letting the team fit 1KB of information into an area just 100 nanometers wide. That may not sound like much, but it amounts to a whopping 62.5TB per square inch — about 500 times denser than the best hard drives. The scientists coded their data by using a scanning tunneling microscope to shuffle the chlorine atoms around a surface of copper atoms, creating data blocks with QR-code-style markers that indicate both their location and whether or not they’re in good condition.
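The headline density follows from straightforward arithmetic. The back-of-the-envelope check below assumes roughly 1 kB stored in a patch about 100 nm on a side, as described above; the published block dimensions differ slightly, so treat it as an order-of-magnitude confirmation rather than an exact figure.

```python
# Rough arithmetic behind the ~62.5 TB per square inch claim, assuming
# about 1 kB of data in a patch roughly 100 nm on a side.
bits = 8_000                     # ~1 kB of data
patch_area_nm2 = 100 * 100       # ~100 nm x 100 nm patch
bits_per_nm2 = bits / patch_area_nm2

nm_per_inch = 2.54e7             # 1 inch = 2.54 cm = 2.54e7 nm
nm2_per_in2 = nm_per_inch ** 2

bits_per_in2 = bits_per_nm2 * nm2_per_in2
tb_per_in2 = bits_per_in2 / 8 / 1e12
print(f"~{tb_per_in2:.0f} TB per square inch")   # ~65 TB, same ballpark
```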

Not surprisingly, the technology isn’t quite ready for prime time. At the moment, this storage only works in extremely clean conditions, and then only in extreme cold (77 kelvin, or −321F). However, the approach can easily scale to large data sizes, even if the copper is flawed. Researchers suspect that it’s just a matter of time before their storage works in normal conditions. If and when it does, you could see gigantic capacities even in the smallest devices you own — your phone could hold dozens of terabytes in a single chip.

One of the First Real-World Quantum Computer Applications Was Just Realized

Luv it; and this is only the beginning too.


In the continued effort to make a viable quantum computer, scientists assert that they have made the first scalable quantum simulation of a molecule.

Quantum computing, if it is ever fully realized, will revolutionize computing as we know it, offering great leaps beyond what today’s machines can do. However, such computers have yet to be built, as they pose monumental engineering challenges (though much progress has been made over the past ten years).

Case in point: scientists now report that, for the first time, they have performed a scalable quantum simulation of a molecule. The paper appears in the open-access journal Physical Review X.

New study uses computer learning to provide quality control for genetic databases

AI and quality control in genome data are made for each other.


A new study published in The Plant Journal sheds light on the transcriptomic differences between tissues in Arabidopsis, an important model organism, by creating a standardized “atlas” that can automatically annotate samples to restore lost metadata such as tissue type. By combining data from over 7,000 samples and 200 labs, this work offers a way to leverage the growing amounts of publicly available ‘omics data while improving quality control, allowing for large-scale studies and data reuse.

“As more and more ‘omics data are hosted in public databases, it becomes increasingly difficult to leverage those data. One big obstacle is the lack of consistent metadata,” says first author and Brookhaven National Laboratory research associate Fei He. “Our study shows that metadata might be detected based on the data itself, opening the door for automatic metadata re-annotation.”

The study focuses on data from microarray analyses, an early high-throughput genetic analysis technique that remains in common use. Such data are often made publicly available through tools such as the National Center for Biotechnology Information’s Gene Expression Omnibus (GEO), which over time accumulates vast amounts of information from thousands of studies.
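The core idea, stripped of the biology, is ordinary supervised learning: train a classifier on samples whose tissue labels are known, then predict the label for samples whose metadata was lost. The sketch below uses synthetic expression data and a random-forest classifier purely for illustration; it is not the pipeline from the paper, and all names in it are made up.

```python
# Minimal sketch of metadata re-annotation: when a sample's tissue label is
# missing, predict it from the expression values themselves using a model
# trained on samples that do carry labels. The data are synthetic stand-ins
# for curated microarray samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Fake "expression matrix": 300 samples x 50 genes, three tissue types whose
# profiles differ by an offset on a tissue-specific subset of genes.
n_samples, n_genes = 300, 50
tissues = rng.integers(0, 3, size=n_samples)
X = rng.normal(size=(n_samples, n_genes))
for t in range(3):
    X[tissues == t, t * 10:(t + 1) * 10] += 2.0   # tissue-specific signal

labels = np.array(["root", "leaf", "flower"])[tissues]

# Hold out some samples to play the role of entries with lost metadata.
X_train, X_missing, y_train, y_truth = train_test_split(
    X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

predicted = clf.predict(X_missing)
accuracy = (predicted == y_truth).mean()
print(f"re-annotation accuracy on held-out samples: {accuracy:.2f}")
```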