
Objects in space reveal different aspects of their composition and behavior at different wavelengths of light. Supernova remnant Cassiopeia A (Cas A) is one of the best-studied objects in the Milky Way across the electromagnetic spectrum. However, there are still secrets hidden within the star's tattered remains.

The latest of these are being unlocked by one of the newest tools in the researchers' toolbox, the James Webb Space Telescope, and Webb's recent look in the near-infrared has blown researchers away.

Like a shiny, round ornament ready to be placed in the perfect spot on a holiday tree, supernova remnant Cassiopeia A (Cas A) gleams in a new image from NASA’s James Webb Space Telescope.

How do we go from 100 to 200 to 1000? PASQAL, a quantum computing startup, is using LASERS. They've demonstrated 100- and 200-qubit systems, and now they're talking about building a 1000-qubit machine. Here's the mockup of their system.


Building a plane while flying it isn’t typically a goal for most, but for a team of Harvard-led physicists that general idea might be a key to finally building large-scale quantum computers.

In a new paper in Nature, the research team, which includes collaborators from QuEra Computing, MIT, and the University of Innsbruck, describes a new approach for processing quantum information that allows them to dynamically change the layout of atoms in their system, moving and connecting them with each other in the midst of computation.

This ability to shuffle the qubits (the fundamental building blocks of quantum computers and the source of their massive processing power) during computation while preserving their quantum state dramatically expands processing capabilities and allows for self-correction of errors. Clearing this hurdle marks a major step toward building large-scale machines that leverage the bizarre characteristics of quantum mechanics and promise to bring about real-world breakthroughs in materials science, communication technologies, finance, and many other fields.
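As a purely conceptual illustration (not the Harvard/QuEra architecture or any real control software), the toy sketch below models qubits as movable entries in a 2D layout, where a two-qubit gate is only possible once the qubits have been brought next to each other; every class and method name here is invented for the example, and nothing simulates actual quantum states.

```python
# Conceptual toy only: movable qubits in a grid, with gates gated on adjacency.
# It illustrates mid-circuit reconfiguration of connectivity, nothing more.
from dataclasses import dataclass, field


@dataclass
class MovableQubitLayout:
    positions: dict = field(default_factory=dict)  # qubit id -> (row, col)

    def move(self, qubit, row, col):
        """Transport a qubit to a new lattice site; its (abstract) state rides along."""
        self.positions[qubit] = (row, col)

    def adjacent(self, a, b):
        (r1, c1), (r2, c2) = self.positions[a], self.positions[b]
        return abs(r1 - r2) + abs(c1 - c2) == 1

    def entangling_gate(self, a, b):
        if not self.adjacent(a, b):
            raise ValueError("move the qubits next to each other first")
        print(f"two-qubit gate on {a} and {b}")


layout = MovableQubitLayout({"q0": (0, 0), "q1": (3, 3)})
layout.move("q1", 0, 1)               # reconfigure the layout mid-circuit
layout.entangling_gate("q0", "q1")    # now the gate is possible
```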

Natural language processing (NLP) has entered a transformational period with the introduction of Large Language Models (LLMs), like the GPT series, which have set new performance standards for a wide range of linguistic tasks. Autoregressive pretraining, which teaches models to predict the next token in a sequence, is one of the main factors behind this remarkable success. This fundamental technique lets the models absorb the complex interplay between syntax and semantics, contributing to their exceptional, human-like ability to understand language. Beyond NLP, autoregressive pretraining has also contributed substantially to computer vision.
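To make that objective concrete, here is a minimal sketch of the autoregressive (next-token) loss in PyTorch; `model` stands for any hypothetical module that maps token ids to vocabulary logits, and the names are illustrative rather than taken from the GPT papers.

```python
import torch.nn.functional as F


def autoregressive_loss(model, tokens):
    """tokens: (batch, seq_len) integer ids.

    Each position is trained to predict the token that follows it:
    the model reads tokens[:, :-1] and is scored against tokens[:, 1:].
    """
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)                       # (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),     # merge batch and position dims
        targets.reshape(-1),
    )
```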

In computer vision, autoregressive pretraining was initially successful, but the field has since shifted sharply toward BERT-style pretraining. This shift is noteworthy, especially in light of the first results from iGPT, which showed that autoregressive and BERT-style pretraining performed similarly across various tasks. Subsequent research, however, has come to prefer BERT-style pretraining because of its greater effectiveness in visual representation learning. For instance, MAE shows that a scalable approach to visual representation learning may be as simple as predicting the values of randomly masked pixels.
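For contrast, here is a simplified masked-image-modeling sketch in the same spirit as MAE (though not the exact MAE recipe, which encodes only the visible patches): random patches are hidden, and the loss is computed only on their reconstructed pixel values. `reconstructor` is a hypothetical module introduced just for this example.

```python
import torch
import torch.nn.functional as F


def masked_patch_loss(reconstructor, patches, mask_ratio=0.75):
    """patches: (batch, num_patches, patch_dim) flattened image patches."""
    b, n, _ = patches.shape
    mask = torch.rand(b, n) < mask_ratio                  # True = patch is hidden
    corrupted = patches.masked_fill(mask.unsqueeze(-1), 0.0)
    recon = reconstructor(corrupted)                      # same shape as `patches`
    # BERT-style objective: only the hidden patches contribute to the loss.
    return F.mse_loss(recon[mask], patches[mask])
```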

In this work, the Johns Hopkins University and UC Santa Cruz research team reexamined iGPT and asked whether autoregressive pretraining can produce highly capable vision learners, particularly when scaled up. Their approach incorporates two important changes. First, since images are naturally noisy and redundant, the team "tokenizes" images into semantic tokens using BEiT. This modification shifts the target of the autoregressive prediction from raw pixels to semantic tokens, allowing for a more sophisticated understanding of the interactions between different image regions. Second, alongside the generative decoder, which autoregressively predicts the next semantic token, the team adds a discriminative decoder.
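Reading the two changes together, a hedged sketch of the training signal might look like the following: `tokenizer` stands in for a frozen BEiT-style image tokenizer, the generative head predicts the next semantic token autoregressively, and the discriminative head is shown here as a per-position token classifier, which is an assumption about its target rather than the authors' exact design.

```python
import torch
import torch.nn.functional as F


def semantic_token_pretraining_loss(tokenizer, backbone, generative_head,
                                    discriminative_head, images):
    # Frozen BEiT-style tokenizer: images -> discrete semantic token ids.
    with torch.no_grad():
        tokens = tokenizer(images)                   # (batch, seq)
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    features = backbone(inputs)                      # causal features, (batch, seq - 1, dim)
    # Generative decoder: autoregressively predict the *next* semantic token.
    gen_logits = generative_head(features)
    gen_loss = F.cross_entropy(gen_logits.flatten(0, 1), targets.flatten())
    # Discriminative decoder: here, classify the *current* semantic token
    # (an illustrative target; the paper's exact discriminative loss may differ).
    disc_logits = discriminative_head(features)
    disc_loss = F.cross_entropy(disc_logits.flatten(0, 1), inputs.flatten())
    return gen_loss + disc_loss
```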