The microscope cost less than £50 to build using an open-source design and a common 3D printer.
New observations reveal the challenges of detecting planetary atmospheres.
Recent measurements with the James Webb Space Telescope (JWST) cast doubt on the current understanding of the exoplanet Trappist-1 b. Until now, it was assumed to be a dark, rocky planet without an atmosphere, its surface shaped by billions of years of bombardment by radiation and meteorites. The opposite now appears to be true: the surface shows no signs of weathering, which could indicate geological activity such as volcanism and plate tectonics. Alternatively, a planet with a hazy carbon dioxide atmosphere also remains plausible. The results demonstrate the challenges of determining the properties of exoplanets with thin atmospheres.
Trappist-1 b is one of seven rocky planets orbiting the star Trappist-1, located 40 light-years away. The planetary system is unique because it allows astronomers to study seven Earth-like planets from relatively close range, with three of them in the so-called habitable zone. This is the region around a star where a planet could have liquid water on its surface. To date, ten research programmes have targeted this system with JWST for a combined 290 hours.
The mention of gravity and quantum in the same sentence often elicits discomfort from theoretical physicists, yet the effects of gravity on quantum information systems cannot be ignored. In a recently announced collaboration between the University of Connecticut, Google Quantum AI, and the Nordic Institute for Theoretical Physics (NORDITA), researchers explored the interplay of these two domains, quantifying the nontrivial effects of gravity on transmon qubits.
Led by Alexander Balatsky of UConn’s Quantum Initiative, along with Google’s Pedram Roushan and NORDITA researchers Patrick Wong and Joris Schaltegger, the study focuses on the gravitational redshift. This phenomenon slightly detunes the energy levels of qubits based on their position in a gravitational field. While negligible for a single qubit, this effect becomes measurable when scaled.
While quantum computers can be effectively shielded from electromagnetic radiation, quantum technology cannot, barring some hypothetical antigravity device large enough to hold an entire quantum computer, currently be shielded from the effects of gravity. The team demonstrated that gravitational interactions create a universal dephasing channel, disrupting the coherence required for quantum operations. However, these same interactions could also be used to develop highly sensitive gravitational sensors.
“Our research reveals that the same finely tuned qubits engineered to process information can serve as precise sensors—so sensitive, in fact, that future quantum chips may double as practical gravity sensors. This approach is opening a new frontier in quantum technology.”
To explore these effects, the researchers modeled the gravitational redshift’s impact on energy-level splitting in transmon qubits. Gravitational redshift, a phenomenon predicted by Einstein’s general theory of relativity, occurs when light or electromagnetic waves traveling away from a massive object lose energy and shift to longer wavelengths. This happens because gravity alters the flow of time, causing clocks closer to a massive object to tick more slowly than those farther away.
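To get a feel for the size of this effect, here is a back-of-the-envelope sketch, not the study's actual model, that applies the standard weak-field redshift relation Δf/f ≈ gΔh/c² to a transmon. The qubit frequency, height difference, and timescale below are assumptions chosen only for illustration:

```python
import math

# Illustrative estimate of gravitational-redshift detuning between two qubits
# at slightly different heights (assumed values, not the study's parameters).
g = 9.81        # m/s^2, Earth's surface gravity
c = 2.998e8     # m/s, speed of light
f0 = 5e9        # Hz, typical transmon frequency (assumed)
dh = 1e-2       # m, assumed height difference between qubits on a chip (1 cm)

frac_shift = g * dh / c**2          # fractional frequency shift, g*dh/c^2
detuning = f0 * frac_shift          # absolute detuning in Hz

t = 100e-6                          # s, assumed operation/coherence timescale
phase = 2 * math.pi * detuning * t  # relative phase accumulated in that time

print(f"fractional shift : {frac_shift:.2e}")
print(f"detuning         : {detuning:.2e} Hz")
print(f"phase over 100 us: {phase:.2e} rad")
```

The per-qubit numbers come out vanishingly small, which is consistent with the article's point that the effect only becomes measurable when scaled up across many qubits or long integration times.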
Quantum computing is getting a lot of attention lately — deservedly so. It’s hard not to get excited about the new capabilities that quantum computing could bring. This new generation of computers could solve certain extremely complex problems by exploring enormous numbers of candidate solutions at once and using quantum interference to home in on the correct ones. We could put these capabilities to work designing new medications or optimizing global infrastructure on an enormous scale.
But in the excitement surrounding quantum computing, what often gets lost is that computing is just one element of the larger quantum technologies story. We are entering a new quantum era in which we are learning to manipulate and control the quantum states of matter down to the level of individual particles. This has unlocked a wealth of new possibilities across multiple fields. For instance, by entangling two photons of light, we can generate a communications channel that is impervious to eavesdropping. Or we can put the highly sensitive nature of quantum particles to work detecting phenomena we have never been able to sense before.
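As a rough illustration of why entanglement makes eavesdropping detectable, the toy simulation below mimics, in purely classical code, the spirit of entanglement-based key distribution (in the style of BBM92/E91). The pair counts, attack model, and numbers are simplifications for clarity, not a real protocol implementation:

```python
import random

def run(n_pairs=20000, eavesdrop=False):
    """Toy model: two parties measure ideal entangled pairs in random bases."""
    key_a, key_b, errors = [], [], 0
    for _ in range(n_pairs):
        basis_a = random.choice("ZX")   # first party's measurement basis
        basis_b = random.choice("ZX")   # second party's measurement basis
        bit_a = random.randint(0, 1)    # ideal pair: correlated outcomes
        bit_b = bit_a
        if eavesdrop:
            # Intercept-resend attack in a random basis: a wrong-basis
            # measurement by the attacker randomizes the receiver's outcome.
            if random.choice("ZX") != basis_a:
                bit_b = random.randint(0, 1)
        if basis_a == basis_b:          # keep only matching-basis rounds
            key_a.append(bit_a)
            key_b.append(bit_b)
            errors += bit_a != bit_b
    qber = errors / max(len(key_a), 1)  # quantum bit error rate
    return len(key_a), qber

print(run(eavesdrop=False))  # error rate ~0
print(run(eavesdrop=True))   # error rate ~0.25 exposes the eavesdropper
```

With no eavesdropper the sifted keys agree almost perfectly; an intercept-resend attack pushes the error rate to roughly 25%, which the two parties can detect by comparing a random sample of their bits.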
We call this new era of innovation Quantum 2.0, distinguishing it from the Quantum 1.0 era of the last 100 years. Quantum 1.0 gave us some of the most remarkable inventions of the 20th century, from the transistor to the laser. But as we transition to Quantum 2.0, we are reconceptualizing the way we communicate and the way we sense the world, as well as the way we compute. What’s more, we’re only just beginning to realize Quantum 2.0’s full potential.
Chinese researchers say that recent advancements in the burgeoning field of inertial confinement fusion are bringing us one step closer to making accessible nuclear fusion a reality.
The new findings, which incorporate innovative new modeling approaches, could open new avenues for the exploration of the mysteries surrounding high-energy-density physics, and could potentially offer a window toward understanding the physics of the early universe.
Efforts to harness controlled nuclear fusion as a potential source of clean energy have seen several significant advances in recent years. The latest research, by a Chinese team funded by the Strategic Priority Research Program of the Chinese Academy of Sciences and published in Science Bulletin last month, signals the next wave of insights with what the team calls a “surprising observation” involving supra-thermal ions during observations of burning fusion plasmas at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California.
Researchers led by Nanyang Technological University, Singapore (NTU Singapore) have developed a breakthrough technique that could lay the foundations for detecting the universe’s “dark matter” and bring scientists closer than before to uncovering the secrets of the cosmos.
The things we can see on Earth and in space, visible matter like rocks and stars, make up only a small portion of the universe: scientists believe that 85% of the matter in the cosmos is invisible dark matter. This mysterious substance is said to be the invisible glue holding galaxies together. Finding it could help us understand cosmic phenomena that cannot be explained solely by the matter we see.
But proving the existence of dark matter is a herculean task. As its name suggests, dark matter is “dark,” meaning it does not normally emit or reflect light, carries no electric charge and interacts extremely weakly with normal matter, making it undetectable with conventional scientific instruments.
Imagine this scenario: Two people cheat on their partners with each other and then leave their partners to be together. Should they trust each other, or “once a cheater, always a cheater”?
Intuition and past research suggest that whether people deem someone trustworthy depends on that person’s past behavior and reputation for betrayal. But now, new work from psychologists at UCLA and Oklahoma State University is helping to explain why people might nevertheless trust certain cheaters and other betrayers.
When we benefit from someone’s betrayal, we tend to still regard that person as inherently trustworthy, the psychologists reported in a study published in Evolution and Human Behavior. Their experiments found that although subjects tended to regard people who betrayed others as generally less trustworthy, when a person’s betrayal benefited the subject, that person was still thought to be worthy of trust.
The aurora borealis, or northern lights, is best known as a stunning spectacle of light in the night sky, but the explosive solar activity that drives it, carried to Earth by the solar wind, can also disrupt vital communications and security infrastructure on Earth. Using artificial intelligence, researchers at the University of New Hampshire have categorized and labeled the largest-ever database of aurora images, which could help scientists better understand and forecast these disruptive geomagnetic storms.
The research, recently published in the Journal of Geophysical Research: Machine Learning and Computation, developed artificial intelligence and machine learning tools that were able to successfully identify and classify over 706 million images of auroral phenomena in NASA’s Time History of Events and Macroscale Interactions during Substorms (THEMIS) data set, collected by twin spacecraft studying the space environment around Earth. THEMIS provides images of the night sky every three seconds from sunset to sunrise from 23 different stations across North America.
“The massive dataset is a valuable resource that can help researchers understand how the solar wind interacts with the Earth’s magnetosphere, the protective bubble that shields us from charged particles streaming from the sun,” said Jeremiah Johnson, associate professor of applied engineering and sciences and the study’s lead author. “But until now, its huge size limited how effectively we can use that data.”
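For readers curious what classifying such images typically involves, the sketch below shows one common pattern: a pretrained convolutional network with a replacement classification head. The class names, file name, and architecture are assumptions for illustration only, not the pipeline the UNH team actually used, and the new head would still need to be trained on labeled examples:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Hypothetical label set; the study's actual categories may differ.
CLASSES = ["arc", "diffuse", "discrete", "cloudy", "moon", "clear"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # new head (untrained here)
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # all-sky frames are monochrome
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify(path: str) -> str:
    """Predict a label for a single all-sky image file."""
    img = preprocess(Image.open(path)).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        logits = model(img)
    return CLASSES[int(logits.argmax())]

# Example (hypothetical file name): classify("themis_allsky_frame.png")
```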
An international research team led by the University of California, Irvine has discovered a new type of skeletal tissue that offers great potential for advancing regenerative medicine and tissue engineering.
Most cartilage relies on an external extracellular matrix for strength, but “lipocartilage,” which is found in the ears, nose and throat of mammals, is uniquely packed with fat-filled cells called “lipochondrocytes” that provide super-stable internal support, enabling the tissue to remain soft and springy—similar to bubble wrap packaging.
The study, published in the journal Science, describes how lipocartilage cells create and maintain their own lipid reservoirs that remain constant in size. Unlike ordinary adipocyte fat cells, lipochondrocytes never shrink or expand in response to food availability.