
Researchers have discovered the most precise way to control individual ions using holographic optical engineering technology.

The new technology uses the first known holographic optical engineering device to control trapped ion qubits. It promises more precise control of qubits, which should aid the development of industry-specific quantum hardware, enable new quantum simulation experiments, and potentially support quantum error correction for trapped ion qubits.

“Our algorithm calculates the hologram’s profile and removes any aberrations from the light, which lets us develop a highly precise technique for programming ions,” says lead author Chung-You Shih, a Ph.D. student at the University of Waterloo’s Institute for Quantum Computing (IQC).
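The paper's own algorithm isn't reproduced here, but the general idea of computing a phase-only hologram can be illustrated with the classic Gerchberg–Saxton iteration (an assumed stand-in, not necessarily the IQC team's exact method): bounce back and forth between the SLM plane and the focal plane, enforcing the known laser amplitude in one and the desired target intensity in the other. The beam profile and target spots below are made up for illustration.

```python
# Minimal Gerchberg-Saxton sketch for computing a phase-only hologram.
# Illustrative stand-in only; grid size, beam profile and target are assumptions.
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=200, seed=0):
    """Return an SLM phase profile whose far field approximates target_amp."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_amp.shape)
    for _ in range(iterations):
        slm_field = source_amp * np.exp(1j * phase)          # field at the SLM
        far_field = np.fft.fftshift(np.fft.fft2(slm_field))  # propagate to focus
        # Keep the propagated phase, impose the desired amplitude pattern.
        shaped = target_amp * np.exp(1j * np.angle(far_field))
        back = np.fft.ifft2(np.fft.ifftshift(shaped))        # propagate back
        phase = np.angle(back)                               # keep only the phase
    return phase

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
gaussian_beam = np.exp(-(x**2 + y**2) / 0.2)      # incoming laser amplitude
target = np.zeros((n, n))
target[n // 2, n // 2 - 20] = 1.0                 # two focal spots, e.g. two ion sites
target[n // 2, n // 2 + 20] = 1.0

hologram_phase = gerchberg_saxton(gaussian_beam, np.sqrt(target))
print(hologram_phase.shape, hologram_phase.min(), hologram_phase.max())
```

In a real setup, the measured aberrations of the optical system would also be folded into the phase profile, which is the part the IQC algorithm emphasizes.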


Protocol to reverse engineer Hamiltonian models advances automation of quantum devices.

Scientists from the University of Bristol’s Quantum Engineering Technology Labs (QETLabs) have developed an algorithm that provides valuable insights into the physics underlying quantum systems — paving the way for significant advances in quantum computation and sensing, and potentially turning a new page in scientific investigation.

In physics, systems of particles and their evolution are described by mathematical models, requiring the successful interplay of theoretical arguments and experimental verification. Even more complex is the description of systems of particles interacting with each other at the quantum mechanical level, which is often done using a Hamiltonian model. The process of formulating Hamiltonian models from observations is made even harder by the nature of quantum states, which collapse when attempts are made to inspect them.
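As a concrete, deliberately tiny illustration of what a Hamiltonian model is, the sketch below builds a two-qubit Ising Hamiltonian with a transverse field and evolves a state under it. This is plain NumPy/SciPy with arbitrary coupling and field values, not the Bristol team's protocol.

```python
# Toy Hamiltonian model: two qubits with an Ising coupling J and a
# transverse field h. Illustrative values only, not the QETLabs code.
import numpy as np
from scipy.linalg import expm

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

J, h = 1.0, 0.5                                    # assumed model parameters
H = (J * np.kron(Z, Z)
     + h * (np.kron(X, I) + np.kron(I, X)))        # Hamiltonian of the pair

# Evolve |00> for time t under H and read out measurement probabilities.
psi0 = np.array([1, 0, 0, 0], dtype=complex)
t = 1.0
psi_t = expm(-1j * H * t) @ psi0
probs = np.abs(psi_t) ** 2
print("P(|00>), P(|01>), P(|10>), P(|11>) =", np.round(probs, 3))
```

Learning a Hamiltonian experimentally amounts to inferring parameters like J and h (and which terms belong in H at all) from measurement statistics such as these, since the state itself collapses the moment it is inspected.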

Machine learning is capable of doing all sorts of things as long as you have the data to teach it how. That’s not always easy, and researchers are always looking for a way to add a bit of “common sense” to AI so you don’t have to show it 500 pictures of a cat before it gets it. Facebook’s newest research takes a big step toward reducing the data bottleneck.

The company’s formidable AI research division has been working for years now on how to advance and scale things like advanced computer vision algorithms, and has made steady progress, generally shared with the rest of the research community. One interesting development Facebook has pursued in particular is what’s called “semi-supervised learning.”

Generally when you think of training an AI, you think of something like the aforementioned 500 pictures of cats — images that have been selected and labeled (which can mean outlining the cat, putting a box around the cat or just saying there’s a cat in there somewhere) so that the machine learning system can put together an algorithm to automate the process of cat recognition. Naturally if you want to do dogs or horses, you need 500 dog pictures, 500 horse pictures, etc. — it scales linearly, which is a word you never want to see in tech.
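For context, here is a minimal sketch of the general semi-supervised idea, not Facebook's actual system: keep a small labeled set, treat the rest as unlabeled, and let the model pseudo-label the examples it is confident about. It uses scikit-learn's SelfTrainingClassifier on a toy digits dataset.

```python
# Semi-supervised self-training sketch: only 10% of labels are kept,
# the rest are marked unlabeled (-1) and filled in by confident predictions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

rng = np.random.RandomState(0)
unlabeled = rng.rand(len(y)) > 0.1     # pretend 90% of the data is unlabeled
y_partial = y.copy()
y_partial[unlabeled] = -1

# Fit on the labeled subset, then iteratively absorb high-confidence
# pseudo-labels from the unlabeled pool.
base = SVC(probability=True, gamma="scale")
clf = SelfTrainingClassifier(base, threshold=0.9)
clf.fit(X, y_partial)

print("accuracy on the originally unlabeled pool:",
      clf.score(X[unlabeled], y[unlabeled]))
```

Self-training like this is just one flavor of the idea; the common payoff is needing far fewer hand-labeled examples per new category.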


Consciousness remains scientifically elusive because it constitutes layers upon layers of non-material emergence: Reverse-engineering our thinking should be done in terms of networks, modules, algorithms and second-order emergence — meta-algorithms, or groups of modules. Neuronal circuits correlate to “immaterial” cognitive modules, and these cognitive algorithms, when activated, produce meta-algorithmic conscious awareness and phenomenal experience, all in all at least two layers of emergence on top of “physical” neurons. Furthermore, consciousness represents certain transcendent aspects of projective ontology, according to the now widely accepted Holographic Principle.

#CyberneticTheoryofMind


There’s no shortage of workable theories of consciousness and its origins, each with their own merits and perspectives. We discuss the most relevant of them in the book in line with my own Cybernetic Theory of Mind that I’m currently developing. Interestingly, these leading theories, if metaphysically extended, in large part lend support to Cyberneticism and Digital Pantheism which may come into scientific vogue with the future cyberhumanity.

Quantum simulators are a strange breed of systems, built for purposes that might seem a bit nebulous at the outset. These are often HPC clusters with fast interconnects and powerful server processors (although not usually equipped with accelerators) that run a literal simulation of how various quantum circuits function, for design and testing of quantum hardware and algorithms. Quantum simulators do more than just test, though. They can also be used to emulate quantum problem solving and serve as a novel approach to tackling problems without all the quantum hardware complexity.
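To make "literal simulation" concrete, the sketch below does what a (very small) state-vector simulator does: it holds the full quantum state in classical memory and applies gate matrices to it. This is plain NumPy, not Nvidia's cuQuantum, and the circuit (a two-qubit Bell-state preparation) is just an example.

```python
# State-vector quantum simulation in miniature: keep all 2^n complex
# amplitudes in classical memory and multiply by gate matrices.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

state = np.zeros(4, dtype=complex)
state[0] = 1.0                          # start in |00>

state = np.kron(H, I) @ state           # Hadamard on qubit 0
state = CNOT @ state                    # entangle: CNOT, qubit 0 as control

print(np.round(np.abs(state) ** 2, 3))  # ~[0.5, 0, 0, 0.5] -> Bell state
```

The catch is that the state vector holds 2^n complex amplitudes, so memory and interconnect bandwidth become the bottleneck as qubit counts grow, which is exactly where big-memory nodes and GPUs like the DGX A100 come in.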

Despite the various uses, there’s only so much commercial demand for quantum simulators. Companies like IBM run their own internally, and for others, Atos/Bull has built simulators based on its big-memory Sequana systems, but these are, as one might imagine, niche machines for special purposes. Nonetheless, Nvidia sees enough opportunity in this arena to make an announcement at its GTC event about the performance of quantum simulators using the DGX A100 and its own custom-cooked quantum development software stack, called cuQuantum.

After all, it is probably important for Nvidia to have some kind of stake in quantum before (and if) it ever really takes off, especially in large-scale and scientific computing. What better way to get an insider view than to work with quantum hardware and software developers who are designing better codes and qubits via a benchmark and testing environment?

As content moderation continues to be a critical aspect of how social media platforms work — one that they may be pressured to get right, or at least do better in tackling — a startup that has built a set of data and image models to help with that, along with any other tasks that require automatically detecting objects or text, is announcing a big round of funding.

Hive has built a training data trove based on crowdsourced contributions from some 2 million people globally, which powers a set of APIs that automatically identify objects, words and phrases in images — a process used not just in content moderation platforms, but also in building algorithms for autonomous systems, back-office data processing, and more. The startup has raised $85 million in a Series D round of funding that it has confirmed values it at $2 billion.

“At the heart of what we’re doing is building AI models that can help automate work that used to be manual,” said Kevin Guo, Hive’s co-founder and CEO. “We’ve heard about RPA and other workflow automation, and that is important too but what that has also established is that there are certain things that humans should not have to do that is very structural, but those systems can’t actually address a lot of other work that is unstructured.” Hive’s models help bring structure to that other work, and Guo claims they provide “near human level accuracy.”