
Placing almost any material inside a special maze of mirrors and lenses can make it absorb light perfectly. The approach could be used to detect faint starlight or to charge faraway devices with lasers.

Ori Katz at the Hebrew University of Jerusalem in Israel and his colleagues created an almost perfect absorber of light by building an “anti-laser”.

In a laser, light bounces between mirrors until it is amplified enough to exit the device in a concentrated beam. In an “anti-laser”, says co-author Stefan Rotter at Vienna University of Technology in Austria, light enters the device and then gets stuck in an inescapable series of bounces within it.

Ancient engineers might have built a canal on the Nile.

The mystery of how the Giza pyramids were built has endured for centuries. Although archaeologists and scientists have long tried to work out how they were constructed, the exact method remains uncertain. Very recently, however, researchers have put forward a new idea about how the pyramids were built.

According to a recent study, published in PNAS on August 29, the pyramids of Giza may have been built using a former arm of the Nile River. This previously unknown river branch would have served as a navigable route for transporting goods.


The pyramids of Giza constitute one of the world’s most iconic cultural landscapes and have fascinated humanity for thousands of years. Indeed, the Great Pyramid of Giza (Khufu Pyramid) was one of the Seven Wonders of the Ancient World. It is now accepted that ancient Egyptian engineers exploited a former channel of the Nile to transport building materials and provisions to the Giza plateau. However, there is a paucity of environmental evidence regarding when, where, and how these ancient landscapes evolved. New palaeoecological analyses have helped to reconstruct an 8,000-year fluvial history of the Nile in this area, showing that the former waterscapes and higher river levels around 4,500 years ago facilitated the construction of the Giza Pyramid Complex.

Our innovative ocean cleaning system is already removing plastic from the Pacific Ocean. Combined with our Interceptor river solutions deployed around the world, we aim to reduce floating ocean plastic by 90% by 2040.

Trillions of pieces of plastic float on the surface of our oceans, damaging habitats and contaminating food chains, a problem forecast to worsen exponentially as the stream of plastic flowing into the ocean from rivers increases. We address the plastic problem with a dual strategy: removing plastic that is already polluting the oceans, while also intercepting plastic in rivers to prevent it reaching the ocean and adding to the problem.

Throughout 2021 and 2022, our ocean cleaning system has been harvesting plastic from the Great Pacific Garbage Patch (GPGP), estimated to contain around 100,000,000 kilograms of plastic. Each branch of this strategy is essential to efficiently rid the oceans of plastic.

Using electrocorticography (ECoG), we probed the neurocognitive substrates of mentalizing at the level of neuronal populations. We found that mentalizing about the self and others recruited near-identical cortical sites (Fig. 5a, b) in a common spatiotemporal sequence (Figs. 5c and 6). Within our ROIs, activations began in visual cortex, followed by temporoparietal DMN regions (TPJ, ATL, and PMC), and lastly in mPFC regions (amPFC, dmPFC, and vmPFC; Fig. 3e, f). Critically, regions with later activations exhibited greater functional specialization for mentalizing as measured by three metrics: functional specificity for mentalizing versus arithmetic (Figs. 3c, d and 4b), self/other differentiation in activation timing (Fig. 5c, d), and temporal associations with behavioral responses (Fig. 4d and Table 1). Taken together, these results reveal a common neurocognitive sequence28,29,30,31 for self- and other-mentalizing, beginning in visual cortex (low specialization), ascending through temporoparietal DMN areas (intermediate specialization), then reaching its apex in mPFC regions (high specialization).

Our results are consistent with gradient-based models of brain function, which posit that concrete sensorimotor processing in unimodal regions (e.g., visual cortex) gradually yields to increasingly abstract and inferential processing in ‘high-level’ transmodal regions like mPFC41,42. We found that the strength of self/other differences in activation timing increased along a gradient from visual cortex to mPFC. Specifically, other-mentalizing evoked slower (Fig. 5c) and lengthier (Fig. 5d) activations than self-mentalizing throughout successive DMN ROIs. These self/other functional differences corresponded with self/other differences in RTBehav (Supplementary Fig. 4), although the two were often dissociable (Table 1). Thus, perhaps because we know ourselves better than others, other-mentalizing may require lengthier processing at more abstract and inferential levels of representation, ultimately resulting in slower behavioral responses.

What might our results imply about extant fMRI findings? Hundreds of fMRI studies consistently suggest that (1) TPJ and dmPFC are most crucial for mentalizing6,8,11,12,43,44,45,46, and (2) dmPFC is selective for thinking about others over oneself32,33,34,35. However, when examined with ECoG, neither piece of received wisdom is what it seems. Below, we discuss both issues before moving on to our peculiar vmPFC results, and then conclude with a systems-level discussion.

WEST LAFAYETTE, Ind. — When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned.

As companies use ever more data to improve how AI recognizes images, learns languages, and carries out other complex tasks, a paper published in Science this week shows how computer chips could dynamically rewire themselves to take in new data the way the brain does, helping AI to keep learning over time.

“The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan,” said Shriram Ramanathan, a professor in Purdue University’s School of Materials Engineering who specializes in discovering how materials could mimic the brain to improve computing.