
Storage tech doesn’t get much better than this. Scientists at TU Delft have developed a technique that uses the positions of chlorine atoms as data bits, letting the team fit 1 kB of information into an area just 100 nanometers wide. That may not sound like much, but it amounts to a whopping 62.5 TB per square inch, about 500 times denser than the best hard drives. The scientists encoded their data by using a scanning tunneling microscope to shuffle chlorine atoms around a surface of copper atoms, creating data blocks with QR code-style markers that indicate both each block’s location and whether it’s in good condition.
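As a quick sanity check on that density figure, here is a back-of-the-envelope calculation (a sketch that assumes the 1 kB block is a simple 100 nm square; the published 62.5 TB figure depends on the exact block geometry and marker overhead, so this only needs to land in the same ballpark):

```python
# Back-of-the-envelope density check (illustrative; assumes a square
# 100 nm x 100 nm block holding 1 kB = 8,192 bits of data).
NM_PER_INCH = 2.54e7            # 1 inch = 2.54 cm = 2.54e7 nm

bits = 1024 * 8                 # 1 kB
area_nm2 = 100 * 100            # 100 nm x 100 nm block

bits_per_in2 = (bits / area_nm2) * NM_PER_INCH**2
print(f"{bits_per_in2 / 8 / 1e12:.1f} TB per square inch")
# -> ~66 TB/in^2, the same order as the quoted 62.5 TB/in^2
```

The small gap between roughly 66 and 62.5 TB per square inch comes from the real block layout, which spends some of its area on the markers described above.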

Not surprisingly, the technology isn’t quite ready for prime time. At the moment, the storage only works in extremely clean conditions and in extreme cold (77 kelvin, or −321 °F). However, the approach scales readily to larger data sizes, even when the copper surface is flawed. The researchers suspect it’s just a matter of time before their storage works under normal conditions. If and when it does, you could see gigantic capacities even in the smallest devices you own: your phone could hold dozens of terabytes on a single chip.

Read more

Love it; and this is only the beginning, too.


In the continuing effort to build a viable quantum computer, scientists report that they have performed the first scalable quantum simulation of a molecule.

Quantum computing, if it is ever fully realized, will revolutionize computing as we know it, delivering dramatic leaps beyond many of today’s computing standards. However, such computers have yet to be built, as they pose monumental engineering challenges (though we have made much progress in the past ten years).

Case in point: scientists now report that, for the first time, they have performed a scalable quantum simulation of a molecule using this technology. The paper appears in the open-access journal Physical Review X.
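For context, work in this area commonly relies on the variational quantum eigensolver (VQE), in which a quantum device prepares a parameterized trial state and a classical optimizer tunes the parameters to minimize the measured energy. The snippet below is a purely classical toy sketch of that loop; the single-qubit “Hamiltonian” is invented for illustration and is not the molecular Hamiltonian from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy, fully classical sketch of the VQE idea. The Hamiltonian below
# is invented for illustration; it is NOT a molecular Hamiltonian.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * Z + 0.3 * X           # toy single-qubit Hamiltonian

def energy(theta):
    """<psi(theta)|H|psi(theta)> for the ansatz
    |psi> = cos(theta/2)|0> + sin(theta/2)|1>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# Classical outer loop: tune theta to minimize the measured energy.
# On real hardware, the quantum chip's only job is to prepare
# |psi(theta)> and estimate energy(theta) from measurements.
res = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H).min()
print(f"VQE estimate: {res.fun:.4f}  exact ground state: {exact:.4f}")
```

The appeal of this hybrid scheme is that the quantum processor only has to hold the trial state, which is what makes the approach a candidate for scaling to molecules too large for classical simulation.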

Read more

AI and quality control in genome data are made for each other.


A new study published in The Plant Journal sheds light on the transcriptomic differences between tissues in Arabidopsis, an important model organism, by creating a standardized “atlas” that can automatically annotate samples, restoring lost metadata such as tissue type. By combining data from more than 7,000 samples across over 200 labs, this work shows how to leverage the growing amounts of publicly available ‘omics data while improving quality control, enabling large-scale studies and data reuse.

“As more and more ‘omics data are hosted in the public databases, it becomes increasingly difficult to leverage those data. One big obstacle is the lack of consistent metadata,” says first author and Brookhaven National Laboratory research associate Fei He. “Our study shows that metadata might be detected based on the data itself, opening the door for automatic metadata re-annotation.”

The study focuses on data from microarray analyses, an early high-throughput technique for measuring gene expression that remains in common use. Such data are often made publicly available through resources such as the National Center for Biotechnology Information’s Gene Expression Omnibus (GEO), which has accumulated vast amounts of information from thousands of studies over time.
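As a hedged illustration of what “detecting metadata from the data itself” can look like in practice (this is not the authors’ actual pipeline; the tissue labels and synthetic expression matrix below are invented for the example), one can train a classifier on samples whose tissue type is known and use it to re-annotate samples whose tissue field is missing:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative sketch only: predict tissue type from expression
# profiles. A real pipeline would start from normalized microarray
# data pulled from GEO; here we generate a fake expression matrix.
rng = np.random.default_rng(0)
n_samples, n_genes = 300, 50
tissues = rng.choice(["leaf", "root", "flower"], size=n_samples)

# Each tissue gets its own mean expression profile plus noise.
means = {t: rng.normal(size=n_genes) for t in ("leaf", "root", "flower")}
X = np.array([means[t] + 0.5 * rng.normal(size=n_genes) for t in tissues])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, tissues, cv=5).mean())

# Re-annotate a sample whose tissue metadata was lost:
clf.fit(X, tissues)
unknown = means["root"] + 0.5 * rng.normal(size=n_genes)
print("Predicted tissue:", clf.predict(unknown.reshape(1, -1))[0])
```

The same pattern, trained on the 7,000-sample atlas rather than synthetic data, is the kind of approach the study describes for automatically re-annotating legacy GEO submissions.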

Read more

Horizon Robotics, led by Yu Kai, Baidu’s former deep-learning head, is developing AI chips and software that mimic how the human brain handles abstract tasks such as voice and image recognition. The company believes this will provide more consistent and reliable services than cloud-based systems.

The goal is to enable fast, intelligent responses to user commands, without an internet connection, for controlling appliances, cars, and other objects. Health applications are a logical next step, though not yet discussed.
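To make the edge-versus-cloud tradeoff concrete, here is a minimal sketch of on-device inference (purely illustrative; the model, weights, and command names are invented and have nothing to do with Horizon Robotics’ actual hardware):

```python
import numpy as np

# Minimal on-device inference sketch: a tiny feed-forward net maps an
# audio feature vector to an appliance command, entirely locally.
# Weights are random placeholders; a real device ships trained ones.
COMMANDS = ["lights_on", "lights_off", "door_lock", "none"]

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)    # 16 input features
W2, b2 = rng.normal(size=(32, len(COMMANDS))), np.zeros(len(COMMANDS))

def classify(features: np.ndarray) -> str:
    """One forward pass; no network round trip involved."""
    hidden = np.maximum(features @ W1 + b1, 0.0)    # ReLU layer
    return COMMANDS[int(np.argmax(hidden @ W2 + b2))]

print(classify(rng.normal(size=16)))
```

A cloud-based design would instead serialize the features, send them to a server, and wait for a response; the latency and availability costs of that round trip are exactly what on-device chips aim to eliminate.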


Read more