
The ground beneath our feet and under the ocean floor is an electrically charged grid, the product of bacteria “exhaling” excess electrons through tiny nanowires in an environment lacking oxygen.

Yale University researchers have been studying ways to enhance this natural electrical conductivity within nanowires 1/100,000th the width of a human hair by identifying the mechanism of electron flow.

Bacteria producing nanowires made up of cytochrome OmcS. (Image: Ella Maru Studio)

Forty years after it first began to dabble in quantum computing, IBM is ready to expand the technology out of the lab and into more practical applications — like supercomputing! The company has already hit a number of development milestones since it released its previous quantum roadmap in 2020, including the 127-qubit Eagle processor that uses quantum circuits and the Qiskit Runtime API. IBM announced on Wednesday that it plans to further scale its quantum ambitions and has revised the 2020 roadmap with an even loftier goal of operating a 4,000-qubit system by 2025.

Before it sets about building the biggest quantum computer to date, IBM plans to release its 433-qubit Osprey chip later this year and migrate the Qiskit Runtime to the cloud in 2023, “bringing a serverless approach into the core quantum software stack,” per Wednesday’s release. Those products will be followed later that year by Condor, a quantum chip IBM is billing as “the world’s first universal quantum processor with over 1,000 qubits.”

This rapid four-fold jump in qubit count will enable users to run increasingly long quantum circuits, while raising processing speed, measured in CLOPS (circuit layer operations per second), from a maximum of 2,900 to over 10,000. Then it’s just a simple matter of quadrupling that capacity in the span of less than 24 months.
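
For readers unfamiliar with the workload these roadmaps target, here is a minimal sketch of the kind of circuit such machines run: it builds a two-qubit Bell-state circuit with Qiskit and executes it on a local simulator. The local AerSimulator backend is used purely for illustration; on IBM hardware, circuits like this are submitted through the Qiskit Runtime service, whose exact call signatures vary by release, so only the stable circuit-building API is shown.

# Minimal Qiskit sketch: build a small quantum circuit and run it locally.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Two-qubit Bell-state circuit: Hadamard on qubit 0, then CNOT from 0 to 1.
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()

# Run 1,024 shots on the local Aer simulator and print the measurement counts.
simulator = AerSimulator()
result = simulator.run(transpile(circuit, simulator), shots=1024).result()
print(result.get_counts())  # expect roughly equal '00' and '11' counts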

For instance, when training a gestational age clock model from placental methylation, a sample can only be collected after delivery of the baby and the placenta. As a result, most samples have a gestational age greater than 30 weeks, corresponding to moderate preterm and full-term births. Samples with younger gestational ages are scarce, which skews the sample distribution heavily toward large gestational ages and impairs the trained model’s ability to predict small ones. However, differences in gestational age as small as one week can significantly influence neonatal morbidity, mortality, and long-term outcomes [18, 23]. Hence, the model’s accuracy across the whole gestational age range is essential.

To solve this problem, we developed the R package eClock (ensemble-based clock). It adapts machine learning strategies for handling class imbalance [24], combining bagging and SMOTE (Synthetic Minority Over-sampling Technique) to adjust the biased age distribution and predict DNAm age with an ensemble model. This is the first application of these techniques to clock models, providing a new framework for clock model construction. eClock also provides other functions, such as training a traditional clock model, displaying features, and converting methylation probe/gene/DMR (DNA methylation region) values. To test its performance, we applied the package to three different datasets; the results show that it can effectively improve clock model performance on rare samples.
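
eClock itself is an R package, and its implementation is not reproduced here; the following Python sketch only illustrates the general bagging-plus-oversampling idea under assumptions spelled out in the comments: gestational ages are discretized into a rare “young” bin and a common “older” bin so that SMOTE, which operates on categorical targets, can synthesize extra young samples before a bagged regressor is trained. All data and parameters are made up for illustration.

# Illustrative Python analogue of the bagging + SMOTE idea (not eClock's code).
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Hypothetical toy data: 50 methylation features per sample, with far more
# late-gestation samples (33-42 weeks) than early ones (25-33 weeks).
age = np.concatenate([rng.uniform(25, 33, 30), rng.uniform(33, 42, 470)])
X = rng.normal(size=(500, 50))

# Assumption for illustration: treat "young vs. older" bins as classes so SMOTE
# can oversample the rare young bin. Age is appended as an extra column so the
# synthetic samples also receive interpolated ages.
bins = (age < 33).astype(int)
X_aug = np.column_stack([X, age])
X_res, _ = SMOTE(random_state=0).fit_resample(X_aug, bins)
X_bal, age_bal = X_res[:, :-1], X_res[:, -1]

# Bag an elastic-net regressor (a common choice for methylation clocks) over
# bootstrap resamples of the rebalanced data to form the ensemble predictor.
model = BaggingRegressor(ElasticNet(alpha=0.1), n_estimators=25, random_state=0)
model.fit(X_bal, age_bal)
print(model.predict(X_bal[:5]))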

At the time of writing, scientists and engineers still haven’t figured out how to replicate every computer component found in today’s semiconductor processors. Computation is nonlinear: it requires different signals to interact with each other and change the outcomes of other components. Logic gates need to be built the way semiconductor transistors build them, but photons don’t naturally behave in a way that supports this approach.

This is where photonic logic comes into the picture. By using nonlinear optics, it is possible, at least in theory, to build logic gates similar to those used in conventional processors. There remain many practical and technological hurdles to overcome before photonic computers play a significant role.
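
As a rough illustration of why that nonlinearity matters, the toy numerical sketch below (not a physical simulation; the transfer curve and thresholds are invented values) shows how a thresholding, saturable-style response turns the linear sum of two beam intensities into an AND gate, something a purely linear element cannot do on its own.

# Toy sketch: a nonlinear transfer curve acting on summed "beam intensities".
import numpy as np

def nonlinear_response(intensity, threshold=1.5, steepness=8.0):
    # Sigmoid-like transfer curve standing in for a nonlinear optical element.
    return 1.0 / (1.0 + np.exp(-steepness * (intensity - threshold)))

def optical_and(a, b):
    # Inputs are beam intensities (0 or 1); sum them, then apply the nonlinearity.
    combined = a + b                     # linear superposition of the two beams
    return nonlinear_response(combined) > 0.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", optical_and(a, b))
# Prints True only for inputs 1, 1 -- the AND truth table.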

Leonard Susskind (Stanford University)
https://simons.berkeley.edu/events/quantum-colloquium-black-…ing-thesis.
Quantum Colloquium.

A few years ago, three computer scientists, Adam Bouland, Bill Fefferman, and Umesh Vazirani, wrote a paper that promises to radically change the way we think about the interiors of black holes. Inspired by their paper, I will explain how black holes threaten the QECTT (quantum Extended Church-Turing Thesis), how the properties of horizons rescue the thesis, and how this eventually yields predictions for the complexity of extracting information from behind the black hole horizon. I’ll try my best to explain enough about black holes to keep the lecture self-contained.

Panel featuring Scott Aaronson (UT Austin), Geoffrey Penington (UC Berkeley), and Edward Witten (IAS); Umesh Vazirani (UC Berkeley; moderator). 1:27:30.