JILA breakthrough in integrating artificial atoms with photonic circuits advances quantum computing efficiency and scalability.

In quantum information science, many particles can act as “bits,” from individual atoms to photons. At JILA, researchers utilize these bits as “qubits,” storing and processing quantum 1s or 0s through a unique system.

While many JILA Fellows focus on qubits found in nature, such as atoms and ions, JILA Associate Fellow and University of Colorado Boulder Assistant Professor of Physics Shuo Sun is taking a different approach by using “artificial atoms,” or semiconducting nanocrystals with unique electronic properties. By exploiting the atomic dynamics inside fabricated diamond crystals, physicists like Sun can produce a new type of qubit, known as a “solid-state qubit,” or an artificial atom.

With coral reefs under attack from ongoing climate change effects, what steps can be taken to reverse the damage? This is what a recent study published in iScience hopes to address, as an international team of researchers investigated how to monitor coral reef health impacted by climate change, specifically changes to biomineralization, the process that drives coral reef formation. This study holds the potential to help scientists better understand how climate change affects coral reef health and identify steps to improve the conservation of corals throughout the world.

“The whole ecosystem is dying. You can listen to the death all you want, but what are you going to do to fix it?” said Dr. Mark Martindale, who is the director of the University of Florida’s Whitney Laboratory for Marine Bioscience and a co-author on the study. “In order to do that, you need to understand what the problems are. And you need an experimental system to do that. Now we have that system.”

Neural networks have been powering breakthroughs in artificial intelligence, including the large language models that are now being used in a wide range of applications, from finance to human resources to health care. But these networks remain a black box whose inner workings engineers and scientists struggle to understand.

How fast did the first galaxies and stars form after the Big Bang? This is what a recent study published in Nature Astronomy hopes to address, as an international team of scientists led by the University of Melbourne used NASA’s James Webb Space Telescope (JWST) to observe the merger of two galaxies that occurred approximately 510 million years after the Big Bang, or approximately 13.3 billion years ago. This study holds the potential to help astronomers better understand the processes behind galaxy formation and evolution during the universe’s youth.

“It is amazing to see the power of JWST to provide a detailed view of galaxies at the edge of the observable Universe and therefore back in time,” said Dr. Michele Trenti, who is a professor and cosmologist in the School of Physics at the University of Melbourne and a co-author on the study. “This space observatory is transforming our understanding of early galaxy formation.”

For the study, the researchers used JWST’s powerful infrared instruments to observe what they hypothesize to be two merging galaxies, comprising a primary clump and a long tail. The system has a mass equivalent to approximately 1.6 × 10⁹ solar masses, contains approximately 10 percent of the Sun’s metal abundance, and is growing by approximately 19 solar masses per year. Additionally, they estimate that the stars within the main clump of the merger are less than 10 million years old, while stars in the outer regions are approximately 120 million years old.
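To put the quoted growth rate in perspective, a quick back-of-envelope calculation (not from the study itself) shows how long the system would take to double its mass if the rate held steady — which it almost certainly would not, since star-formation rates in young galaxies vary strongly:

```python
# Figures quoted above for the merging system observed by JWST.
stellar_mass_msun = 1.6e9        # total mass, in solar masses
growth_rate_msun_per_yr = 19.0   # growth rate, in solar masses per year

# Time to double the current mass at a constant rate (illustrative only).
doubling_time_yr = stellar_mass_msun / growth_rate_msun_per_yr
print(f"{doubling_time_yr:.2e} years")  # roughly 8.4e7, i.e. ~84 million years
```

At the quoted rate the system would double in under 100 million years, consistent with the picture of rapid assembly in the universe’s first half-billion years.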

The covert stereotypes also strengthened as the size of the models increased, researchers found. That finding offers a potential warning to chatbot makers like OpenAI, Meta, and Google as they race to release larger and larger models. Models generally get more powerful and expressive as the amount of their training data and the number of their parameters increase, but if this worsens covert racial bias, companies will need to develop better tools to fight it. It’s not yet clear whether adding more African American English (AAE) to training data or making feedback efforts more robust will be enough.

“This is revealing the extent to which companies are playing whack-a-mole—just trying to hit the next bias that the most recent reporter or paper covered,” says Pratyusha Ria Kalluri, a PhD candidate at Stanford and a coauthor on the study. “Covert biases really challenge that as a reasonable approach.”

The paper’s authors use particularly extreme examples to illustrate the potential implications of racial bias, like asking AI to decide whether a defendant should be sentenced to death. But, Ghosh notes, the questionable use of AI models to help make critical decisions is not science fiction. It happens today.

Investors know that most startups fail, but something that may be less understood is how few mobile apps actually make money. According to a new analysis of the subscription app economy from mobile subscription toolkit provider RevenueCat, the top 5% of apps generate 200 times the revenue of the bottom quartile after their first year, while the median monthly revenue an app generates after 12 months is less than $50.
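The kind of quantile comparison behind these headline figures can be sketched with a small, made-up revenue sample (the array below is hypothetical, not RevenueCat’s data):

```python
# Illustrative quantile comparison on a tiny, heavily skewed revenue sample,
# mimicking the shape of app-economy data (values are hypothetical).
import numpy as np

# Monthly revenue per app after 12 months, in USD.
revenues = np.array([0, 5, 10, 20, 40, 50, 60, 100, 500, 10_000], dtype=float)

median_revenue = np.median(revenues)                  # the "typical" app
top_5_cutoff = np.percentile(revenues, 95)            # entry point to the top 5%
bottom_quartile_cutoff = np.percentile(revenues, 25)  # top of the bottom quartile

top_total = revenues[revenues >= top_5_cutoff].sum()
bottom_total = revenues[revenues <= bottom_quartile_cutoff].sum()

print(f"median: ${median_revenue:.0f}")
print(f"top 5% vs bottom quartile: {top_total / bottom_total:.0f}x")
```

Even in this toy sample, a handful of outlier apps dominates total revenue while the median app earns very little — the same skew the report describes.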

The “State of Subscription Apps” report offers a bird’s-eye view into the subscription app universe, as RevenueCat has nearly 30,000 apps using its platform’s tools to manage their monetization. Outside of Apple and Google, that makes RevenueCat the largest collection of subscription app developers on one platform.

This report specifically looks at data from over 29,000 apps and over 18,000 developers who collectively generate over $6.7 billion in tracked revenue and have over 290 million subscribers.