
On the journey from gene to protein, a nascent RNA molecule can be cut and joined, or spliced, in different ways before being translated into a protein. This process, known as alternative splicing, allows a single gene to encode several different proteins. Alternative splicing occurs in many biological processes, like when stem cells mature into tissue-specific cells. In the context of disease, however, alternative splicing can be dysregulated. Therefore, it is important to examine the transcriptome—that is, all the RNA molecules that might stem from genes—to understand the root cause of a condition.

However, historically it has been difficult to “read” RNA molecules in their entirety because they are usually thousands of bases long. Instead, researchers have relied on so-called short-read RNA sequencing, which breaks RNA molecules into much shorter pieces—somewhere between 200 and 600 bases, depending on the platform and protocol—and sequences those pieces. Computer programs are then used to reconstruct the full sequences of RNA molecules.

Short-read RNA sequencing can give highly accurate sequencing data, with a low per-base error rate of approximately 0.1% (meaning one base is incorrectly determined for every 1,000 bases sequenced). Nevertheless, it is limited in the information that it can provide due to the short length of the sequencing reads. In many ways, short-read RNA sequencing is like breaking a large picture into many jigsaw pieces that are all the same shape and size and then trying to piece the picture back together.
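The 0.1% figure above can be sanity-checked with a line of arithmetic. A minimal sketch (the 300-base read length is an illustrative assumption, not from the article):

```python
# A per-base error rate of 0.1% (0.001) means ~1 miscalled base
# per 1,000 bases sequenced.
def expected_errors(read_length_bases, per_base_error_rate=0.001):
    """Expected number of miscalled bases in a single read."""
    return read_length_bases * per_base_error_rate

# For an illustrative 300-base short read:
print(expected_errors(300))  # ~0.3 miscalled bases per read, on average
```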

Frore Systems' Airjet Mini and Airjet Pro are active cooling chips that are just 2.8 mm thick. They quietly draw cool air in through the top of the chip and push it out the sides, with the aim of replacing traditional fan-based solutions in ultrabooks, or being integrated into VR headsets and smartphones for improved cooling.

Yesterday we saw that cameras could clean themselves with micro-vibrations, and it turns out that processors can be cooled with vibrations too. The Airjet chips comprise tiny membranes that vibrate at ultrasonic frequencies to generate a flow of air, which enters through inlet vents in the top and is transformed into high-velocity pulsating jets exiting from one side of the chip.

Quantum-enhanced single-parameter estimation is an established capability, with non-classical probe states achieving precisions beyond what can be reached by the equivalent classical resources in photonic[1,2,3], trapped-ion[4,5], superconducting[6] and atomic[7,8] systems. This has paved the way for quantum enhancements in practical sensing applications, from gravitational wave detection[9] to biological imaging[10]. For single-parameter estimation, entangled probe states are sufficient to reach the ultimate allowed precisions. However, for multi-parameter estimation, owing to the possible incompatibility of different observables, entangling resources are also required at the measurement stage. The ultimate attainable limits in quantum multi-parameter estimation are set by the Holevo Cramér–Rao bound (Holevo bound)[11,12]. In most practical scenarios, it is not feasible to reach the Holevo bound as this requires a collective measurement on infinitely many copies of the quantum state[13,14,15,16] (see Methods for a rigorous definition of collective measurements). Nevertheless, it is important to develop techniques that will enable the Holevo bound to be approached, given that multi-parameter estimation is fundamentally connected to the uncertainty principle[17] and has many physically motivated applications, including simultaneously estimating phase and phase diffusion[18,19], quantum super-resolution[20,21], estimating the components of a three-dimensional field[22,23] and tracking chemical processes[24]. Furthermore, as we demonstrate, collective measurements offer an avenue to quantum-enhanced sensing even in the presence of large amounts of decoherence, unlike the use of entangled probe states[25,26].
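For orientation, the hierarchy of bounds referred to above can be written in its standard textbook form (the notation here is generic, not taken from this paper): for an unbiased estimate of a parameter vector from $n$ copies of a state $\rho_\theta$, with estimator covariance $\Sigma$, positive weight matrix $W$, quantum Fisher information matrix $F_Q$ and Holevo bound $C_H$,

```latex
% Standard precision hierarchy in multi-parameter quantum estimation:
% the weighted estimation error is lower-bounded by the Holevo bound,
% which in turn is at least as large as the SLD Cramer-Rao term.
\[
  \operatorname{tr}\!\left(W \Sigma\right)
  \;\geq\; \frac{C_H(W)}{n}
  \;\geq\; \frac{\operatorname{tr}\!\left(W F_Q^{-1}\right)}{n}.
\]
% The rightmost bound is attainable for a single parameter; attaining
% C_H in general requires collective measurements on asymptotically
% many copies of the state.
```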

To date, collective measurements for quantum multi-parameter metrology have been demonstrated exclusively on optical systems[27,28,29,30,31,32]. Contemporary approaches to collective measurements on optical systems are limited in their scalability: that is, it is difficult to generalize present approaches to measuring many copies of a quantum state simultaneously. The limited gate set available can also make it harder to implement an arbitrary optimal measurement. Indeed, the collective measurements demonstrated so far have all been restricted to measuring two copies of the quantum state and, while quantum enhancement has been observed, have all failed to reach the ultimate theoretical limits on separable measurements[33,34]. Thus, there is a pressing need for a more versatile and scalable approach to implementing collective measurements.

In this work, we design and implement theoretically optimal collective measurement circuits on superconducting and trapped-ion platforms. The ease with which these devices can be reprogrammed, the universal gate set available and the number of modes across which entanglement can be generated ensure that they avoid many of the issues that current optical systems suffer from. Using recently developed error mitigation techniques[35], we estimate qubit rotations about the axes of the Bloch sphere with a greater precision than what is allowed by separable measurements on individual qubits. This approach allows us to investigate several interesting physical phenomena: we demonstrate both optimal single- and two-copy collective measurements reaching the theoretical limits[33,34]. We also implement a three-copy collective measurement as a first step towards surpassing two-copy measurements. However, due to the circuit complexity, this measurement performs worse than single-copy measurements. We investigate the connection between collective measurements and the uncertainty principle. Using two-copy collective measurements, we experimentally violate a metrological bound based on known, but restrictive, uncertainty relations[36]. Finally, we compare the metrological performance of quantum processors from different platforms, providing an indication of how future quantum metrology networks may look.

It’s not at every university that laser pulses powerful enough to burn paper and skin are sent blazing down a hallway. But that’s what happened in UMD’s Energy Research Facility, an unremarkable-looking building on the northeast corner of campus. If you visit the utilitarian white and gray hall now, it seems like any other university hall—as long as you don’t peek behind a cork board and spot the metal plate covering a hole in the wall.

But for a handful of nights in 2021, UMD Physics Professor Howard Milchberg and his colleagues transformed the hallway into a laboratory: The shiny surfaces of the doors and a water fountain were covered to avoid potentially blinding reflections; connecting hallways were blocked off with signs, caution tape and special light-absorbing black curtains; and scientific equipment and cables inhabited normally open walking space.

As members of the team went about their work, a snapping sound warned of the dangerously powerful path the laser blazed down the hall. Sometimes the beam’s journey ended at a white ceramic block, filling the air with louder pops and a metallic tang. Each night, a researcher sat alone at a computer in the adjacent lab with a walkie-talkie and performed requested adjustments to the laser.

For the first time since it was proposed more than 80 years ago, scientists from Nanyang Technological University, Singapore (NTU Singapore) have demonstrated the phenomenon of “quantum recoil,” which describes how the particle nature of light has a major impact on electrons moving through materials. The research is published online today (January 19) in the journal Nature Photonics.

Making quantum recoil a practical reality should eventually allow businesses to more accurately produce X-rays at specific energy levels, leading to superior accuracy in healthcare and manufacturing applications such as flaw detection in semiconductor chips.

Quantum recoil was theorized by Russian physicist and Nobel laureate Vitaly Ginzburg in 1940 to accurately account for radiation emitted when charged particles like electrons move through a medium, such as water, or materials with repeated patterns on the surface, including those on butterfly wings and graphite.

Researchers have developed an extremely thin chip with an integrated photonic circuit that could be used to exploit the so-called terahertz gap – lying between 0.3 and 30 THz in the electromagnetic spectrum – for spectroscopy and imaging.

This gap is currently something of a technological dead zone, describing frequencies that are too fast for today’s electronics and telecommunications devices, but too slow for optics and imaging applications.
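As a quick sanity check on that frequency range, the corresponding free-space wavelengths follow directly from λ = c/f. A minimal sketch:

```python
# Convert the band edges of the terahertz gap (0.3-30 THz) into
# free-space wavelengths using lambda = c / f.
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength in meters for a given frequency in Hz."""
    return C / freq_hz

print(wavelength_m(0.3e12) * 1e3)  # ~1.0 (mm) at 0.3 THz
print(wavelength_m(30e12) * 1e6)   # ~10 (micrometers) at 30 THz
```

These millimeter-to-micrometer wavelengths sit exactly between the regimes that conventional electronics and conventional optics each handle well, which is why the band is described as a dead zone.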

However, the scientists’ new chip now enables them to produce terahertz waves with tailored frequency, wavelength, amplitude and phase. Such precise control could enable terahertz radiation to be harnessed for next-generation applications in both the electronic and optical realms.

After the introduction of the fifth-generation technology standard for broadband cellular networks (5G), engineers worldwide are now working on systems that could further speed up communications. The next-generation wireless communication networks, from 6G onward, will require technologies that enable communications at sub-terahertz and terahertz frequency bands (i.e., from 100 GHz to 10 THz).

While several systems have been proposed for enabling communication at these frequency bands, specifically for personal use and local area networks, some applications would benefit from longer communication distances. So far, generating high-power ultrabroadband signals that contain information and can travel long distances has been challenging.

Researchers at the NASA Jet Propulsion Laboratory (JPL), Northeastern University and the Air Force Research Laboratory (AFRL) have recently developed a system that could enable multi-gigabit-per-second (Gbps) communications in the sub-terahertz frequency band over several kilometers. This system, presented in a paper in Nature Electronics, utilizes on-chip power-combining frequency multiplier designs based on Schottky diodes, semiconducting diodes formed by the junction of a semiconductor and a metal, developed at NASA JPL.
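One reason long distances are hard at these frequencies is free-space path loss, which grows with both frequency and distance (the Friis formula). A minimal sketch; the 200 GHz carrier and 2 km link length are illustrative assumptions, not figures from the paper:

```python
import math

C = 299_792_458  # speed of light, m/s

def fspl_db(freq_hz, distance_m):
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Illustrative sub-terahertz link: 200 GHz carrier over 2 km.
print(round(fspl_db(200e9, 2000), 1))  # ~144.5 dB
```

Losses on the order of 140+ dB over a few kilometers are why high transmit power (here via on-chip power-combining frequency multipliers) is essential for long-range sub-terahertz links.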

Why the recent surge in jaw-dropping announcements? Why are neutral atoms seeming to leapfrog other qubit modalities? Keep reading to find out.

The table below highlights the companies working to make Quantum Computers using neutral atoms as qubits:

And as an added feature I am writing this post to be “entangled” with the posts of Brian Siegelwax, a respected colleague and quantum algorithm designer. My focus will be on the hardware and corporate details about the companies involved, while Brian’s focus will be on actual implementation of the platforms and what it is like to program on their devices. Unfortunately, most of the systems created by the companies noted in this post are not yet available (other than QuEra’s), so I will update this post along with the applicable hot links to Brian’s companion articles, as they become available.

True to Moore’s Law, the number of transistors on a microchip has doubled roughly every two years since the 1960s. But this trajectory is predicted to soon plateau because silicon—the backbone of modern transistors—loses its electrical properties once devices made from this material dip below a certain size.

Enter 2D materials—delicate, two-dimensional sheets of perfect crystals that are as thin as a single atom. At the scale of nanometers, 2D materials can conduct electrons far more efficiently than silicon. The search for next-generation transistor materials therefore has focused on 2D materials as potential successors to silicon.

But before the electronics industry can transition to 2D materials, scientists have to first find a way to engineer the materials on industry-standard silicon wafers while preserving their perfect crystalline form. And MIT engineers may now have a solution.