
An international team of astronomers, led by The University of Texas at Austin’s Cosmic Frontier Center, has identified the most distant black hole ever confirmed. It and the galaxy it calls home, CAPERS-LRD-z9, existed just 500 million years after the Big Bang. That places them 13.3 billion years into the past, when our universe was just 3% of its current age. As such, the discovery provides a unique opportunity to study the structure and evolution of the universe during this enigmatic period.
“When looking for black holes, this is about as far back as you can practically go. We’re really pushing the boundaries of what current technology can detect,” said Anthony Taylor, a postdoctoral researcher at the Cosmic Frontier Center who led the team that made the discovery.
The research is published in The Astrophysical Journal.
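As a rough sanity check on those figures, the standard Planck 2018 cosmology reproduces them for a redshift of roughly z ≈ 9, which is an assumption here (suggested by the galaxy’s name rather than stated above):

```python
# Rough sanity check of the quoted figures using the Planck 2018 cosmology.
# The redshift z ~ 9 is an assumption for illustration (suggested by the
# galaxy's name, not stated in the text).
from astropy.cosmology import Planck18

z = 9.0

age_at_z = Planck18.age(z)            # age of the universe when the light left
lookback = Planck18.lookback_time(z)  # how far into the past we are looking
age_now = Planck18.age(0)             # current age of the universe

print(f"age of universe at z={z}: {age_at_z:.2f}")   # roughly 0.55 Gyr
print(f"lookback time: {lookback:.1f}")              # roughly 13.2 Gyr
print(f"fraction of current age: {(age_at_z / age_now).value:.0%}")  # a few percent
```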
By conducting multiwavelength observations with various telescopes and space observatories, astronomers from Tsinghua University and Steward Observatory have detected a galaxy pair exhibiting significant X-ray emission. The finding was reported in a research paper published July 31 on the preprint server arXiv.
The Great Observatories Origins Deep Survey (GOODS) is a deep-sky survey conducted by multiple observatories to study the formation and evolution of galaxies. It combines multiwavelength data from space observatories like the Hubble Space Telescope (HST), the Chandra X-ray Observatory, the Spitzer Space Telescope and the XMM-Newton satellite with data from the largest ground-based facilities, such as the Very Large Telescope (VLT), the Keck telescopes, the Gemini Observatory and the Very Large Array (VLA).
Recently, a team of astronomers led by Tsinghua University’s Sijia Cai conducted a search for Chandra X-ray-detected star-forming galaxies in the southern field of the GOODS survey (GOODS-S). For this purpose, they combined observations from the VLA and the Atacama Large Millimeter/submillimeter Array (ALMA), spectroscopic data from the James Webb Space Telescope (JWST) and the VLT, as well as photometry from HST and JWST.
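Combining catalogs from that many facilities generally begins with positional cross-matching. The snippet below is a generic illustration of that step using astropy; the coordinates and the 1-arcsecond matching radius are invented, and this is not the team’s actual pipeline.

```python
# Generic illustration of positional cross-matching between two catalogs,
# the kind of step needed to combine Chandra, VLA, ALMA, HST and JWST data.
# Coordinates and the 1-arcsecond matching radius are made up for the example;
# this is not the authors' pipeline.
import astropy.units as u
from astropy.coordinates import SkyCoord

# Hypothetical source positions (RA, Dec in degrees) in GOODS-S
xray_sources = SkyCoord(ra=[53.118, 53.160], dec=[-27.782, -27.805], unit="deg")
radio_sources = SkyCoord(ra=[53.1181, 53.200, 53.1599],
                         dec=[-27.7821, -27.750, -27.8049], unit="deg")

# For each X-ray source, find the nearest radio counterpart
idx, sep2d, _ = xray_sources.match_to_catalog_sky(radio_sources)
matched = sep2d < 1.0 * u.arcsec  # accept matches closer than 1 arcsecond

for i, (j, sep, ok) in enumerate(zip(idx, sep2d, matched)):
    status = "match" if ok else "no counterpart"
    print(f"X-ray source {i}: nearest radio source {j} at {sep.arcsec:.2f} arcsec ({status})")
```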
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still trying to figure out how its “personality traits” arise and how to control them. Large language models (LLMs) interface with users through chatbots or “assistants,” and some of these assistants have recently exhibited troubling behaviors, like praising evil dictators, resorting to blackmail or behaving sycophantically toward users. Considering how deeply these LLMs have already been integrated into our society, it is no surprise that researchers are trying to find ways to weed out undesirable behaviors.
Anthropic, the AI company and creator of the LLM Claude, recently released a paper on the arXiv preprint server discussing their new approach to reining in these undesirable traits in LLMs. In their method, they identify patterns of activity within an AI model’s neural network—referred to as “persona vectors”—that control its character traits. Anthropic says these persona vectors are somewhat analogous to parts of the brain that “light up” when a person experiences a certain feeling or does a particular activity.
Anthropic’s researchers used two open-source LLMs, Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct, to test whether they could remove or manipulate these persona vectors to control the behaviors of the LLMs. Their study focuses on three traits: evil, sycophancy and hallucination (the LLM’s propensity to make up information). Traits must be given a name and an explicit description for the vectors to be properly identified.
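The underlying idea, extracting a direction in activation space by contrasting responses that do and do not show a trait and then adding or subtracting that direction to steer the model, can be sketched in a few lines. The toy example below uses random stand-in activations and is a simplified illustration of activation steering in general, not Anthropic’s exact method.

```python
# Minimal sketch of the "persona vector" idea: contrast a model's hidden
# activations on prompts that elicit a trait vs. prompts that don't, take the
# mean difference as a direction, then add or subtract that direction to steer
# behavior. Simplified illustration only; the toy activations are random.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 64

# Stand-ins for hidden states collected at one layer of an LLM while it
# generates sycophantic vs. neutral responses (real vectors would come from
# e.g. Qwen 2.5-7B-Instruct or Llama-3.1-8B-Instruct).
trait_acts = rng.normal(loc=0.5, size=(100, hidden_dim))     # "sycophantic" runs
baseline_acts = rng.normal(loc=0.0, size=(100, hidden_dim))  # neutral runs

# The persona vector: mean activation difference between the two conditions.
persona_vector = trait_acts.mean(axis=0) - baseline_acts.mean(axis=0)
persona_vector /= np.linalg.norm(persona_vector)

def steer(hidden_state: np.ndarray, strength: float) -> np.ndarray:
    """Push a hidden state along (or against) the persona direction."""
    return hidden_state + strength * persona_vector

h = rng.normal(size=hidden_dim)
more_sycophantic = steer(h, +4.0)   # amplify the trait
less_sycophantic = steer(h, -4.0)   # suppress the trait

# Projection onto the persona vector can also serve as a monitoring signal.
print("trait score before:", float(h @ persona_vector))
print("trait score after suppression:", float(less_sycophantic @ persona_vector))
```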
A new breakthrough from the Zhang Lab at Boston University is making waves in the world of sound control.
Led by Professor Xin Zhang (ME, ECE, BME, MSE), the team has published a new paper in Scientific Reports titled “Phase gradient ultra open metamaterials for broadband acoustic silencing.”
The article marks a major advance in their long-running Acoustic Metamaterial Silencer project.
Analysis of ocean sediments has uncovered geochemical clues consistent with the possibility that an encounter with a disintegrating comet over the Northern Hemisphere 12,800 years ago triggered rapid cooling of Earth’s air and ocean. Christopher Moore of the University of South Carolina, U.S., and colleagues present these findings in the journal PLOS One on August 6, 2025.
During the abrupt cool-off—the Younger Dryas event—temperatures dropped about 10 degrees Celsius in a year or less, with cooler temperatures lasting about 1,200 years. Many researchers believe that no comet was involved, and that glacial meltwater caused freshening of the Atlantic Ocean, significantly weakening currents that transport warm, tropical water northward.
In contrast, the Younger Dryas Impact Hypothesis posits that Earth passed through debris from a disintegrating comet, with numerous impacts and shockwaves destabilizing ice sheets and causing massive meltwater flooding that shut down key ocean currents.
Imagine trying to make an accurate three-dimensional model of a building using only pictures taken from different angles—but you’re not sure where or how far away all the cameras were. Our big human brains can fill in a lot of those details, but computers have a much harder time doing so.
This scenario is a well-known problem in computer vision and robot navigation systems. Robots, for instance, must take in lots of 2D information and make 3D point clouds—collections of data points in 3D space—in order to interpret a scene. But the mathematics involved in this process is challenging and error-prone, with many ways for the computer to incorrectly estimate distances. It’s also slow, because it forces the computer to create its 3D point cloud bit by bit.
Computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) think they have a better approach: a breakthrough algorithm that lets computers reconstruct high-quality 3D scenes from 2D images much more quickly than existing methods.
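For context, the classical incremental pipeline the article contrasts with boils down to estimating camera poses and triangulating matched 2D points a few at a time. The sketch below shows just the triangulation step with OpenCV for two cameras whose poses are already known; it is standard multi-view geometry with invented numbers, not the new SEAS algorithm.

```python
# Generic illustration of the classical, incremental approach: given two
# camera projection matrices and matched 2D points, triangulate 3D positions
# one batch at a time. Standard multi-view geometry via OpenCV; the camera
# poses and pixel coordinates below are made up.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Camera 1 at the origin; camera 2 shifted 1 unit along the x-axis.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Matched pixel coordinates of the same physical points in both images (2 x N).
pts1 = np.array([[320.0, 400.0], [240.0, 260.0]])
pts2 = np.array([[160.0, 266.7], [240.0, 260.0]])

# Triangulate: returns homogeneous 4 x N coordinates.
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_h[:3] / points_h[3]).T  # convert to N x 3 Euclidean points

print(points_3d)  # a tiny 3D "point cloud" built from the two views
```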
We all remember the advice frequently repeated during the COVID pandemic: maintain six feet of distance from every other human when waiting in a line to avoid transmitting the virus. While reasonable, the advice did not take into account the complicated fluid dynamics governing how the airborne particles actually travel through the air if people are also walking and stopping. Now, a team of researchers led by two undergraduate physics majors at the University of Massachusetts Amherst has modeled how aerosol plumes spread when people are waiting and walking in a line.
The results, published recently in Science Advances, grew out of a question that many of us may have asked ourselves when standing in marked locations six feet apart while waiting for a vaccine, to pay for groceries or to get a cup of coffee: what’s the science behind six feet of separation? If you are a physicist, you might even have asked yourself, “What is happening physically to the aerosol plumes we’re all breathing out while waiting in a line, and is the six-foot guideline the best way to design a queue?”
To find answers to these questions, two UMass Amherst undergrads, Ruixi Lou and Milo Van Mooy, took the lead.
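To get a feel for the kind of transport physics involved, the toy model below diffuses aerosol released by a single person who alternately walks and stands still along a one-dimensional queue. It is not the UMass Amherst team’s model, and every parameter value is invented.

```python
# Minimal 1D diffusion sketch of an exhaled aerosol plume left behind by a
# person who walks, stops, then walks again. A generic toy model meant only
# to illustrate the kind of physics involved; it is not the study's model,
# and all parameter values are invented.
import numpy as np

nx, dx, dt = 400, 0.05, 0.01        # grid points, spacing (m), time step (s)
D = 0.01                            # effective diffusivity (m^2/s), assumed
c = np.zeros(nx)                    # aerosol concentration along the queue
x = np.arange(nx) * dx

person_pos = 2.0                    # person starts 2 m into the domain
for step in range(3000):
    t = step * dt
    # Stop-and-go motion: walk at 1 m/s for 5 s, stand still for 5 s, repeat.
    speed = 1.0 if (t % 10.0) < 5.0 else 0.0
    person_pos = min(person_pos + speed * dt, x[-1])

    # Continuous exhalation: inject aerosol at the person's current location.
    c[int(person_pos / dx)] += 0.1 * dt

    # Explicit diffusion update (ambient airflow is ignored in this toy model).
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

print("peak concentration (arb. units):", c.max().round(3))
print("plume extent above 10% of peak:",
      f"{(c > 0.1 * c.max()).sum() * dx:.1f} m")
```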
Researchers at UC Santa Barbara, The University of Texas at Austin, Yale University and National Taiwan Normal University have found that a fair number of sun-like stars emerge with their rotational axes tilted with respect to their protoplanetary disks, the clouds of gas and dust from which planetary systems are born.
“All young stars have these disks, but we’ve known little about their orientations with respect to the spin axis of the host stars,” said UCSB associate physics professor Brendan Bowler, who studies how planets form and evolve through their orbits and atmospheres, and is senior author of a study in the journal Nature. Based on the general alignment of our own sun’s rotational axis with those of the planets in our solar system, the assumption was that stars and their planet-forming disks emerge and rotate in or very close to alignment, he explained.
“This work challenges these centuries-old assumptions,” Bowler said.
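One common way to test such alignment, though not necessarily the exact method of this study, is to compare the inclination of the star’s spin axis, inferred from its projected rotation velocity, rotation period and radius, with the disk inclination measured from resolved imaging. A minimal sketch with invented numbers:

```python
# Compare the line-of-sight inclination of a star's spin axis, inferred from
# v sin i, rotation period and radius, with the disk inclination from imaging.
# A common approach in general, not necessarily this study's exact method;
# all numbers below are invented for illustration.
import numpy as np

R_sun_km = 6.957e5
day_s = 86400.0

v_sin_i = 10.0       # km/s, from spectral line broadening (assumed)
P_rot = 6.0          # days, from starspot modulation (assumed)
R_star = 1.2         # solar radii (assumed)
i_disk = 45.0        # degrees, disk inclination from resolved imaging (assumed)

# Equatorial rotation speed implied by the period and radius.
v_eq = 2 * np.pi * R_star * R_sun_km / (P_rot * day_s)   # km/s

# sin(i_star) = (v sin i) / v_eq; clip to handle measurement noise near 1.
i_star = np.degrees(np.arcsin(np.clip(v_sin_i / v_eq, 0.0, 1.0)))

print(f"equatorial velocity: {v_eq:.1f} km/s")
print(f"stellar spin inclination: {i_star:.0f} deg")
print(f"disk inclination: {i_disk:.0f} deg")
print(f"line-of-sight misalignment: {abs(i_star - i_disk):.0f} deg")
# Note: comparing inclinations alone only constrains the misalignment along
# the line of sight, so it is a lower bound on the true 3D obliquity.
```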
Researchers from the Broad Institute and Mass General Brigham have shown that a low-oxygen environment—similar to the thin air found at Mount Everest base camp—can protect the brain and restore movement in mice with Parkinson’s-like disease.
The new research, in Nature Neuroscience, suggests that cellular dysfunction in Parkinson’s leads to the accumulation of excess oxygen molecules in the brain, which then fuel neurodegeneration—and that reducing oxygen intake could help prevent or even reverse Parkinson’s symptoms.
“The fact that we actually saw some reversal of neurological damage is really exciting,” said co-senior author Vamsi Mootha, an institute member at the Broad, professor of systems biology and medicine at Harvard Medical School, and a Howard Hughes Medical Institute investigator in the Department of Molecular Biology at Massachusetts General Hospital (MGH), a founding member of the Mass General Brigham healthcare system.