
However, if long and thin strips of graphene (termed graphene nanoribbons) are cut out of a wide graphene sheet, the electronic states become quantum-confined along the narrow dimension, which makes the ribbons semiconducting and enables their use in quantum switching devices. As of today, there are a number of barriers to using graphene nanoribbons in devices, among them the challenge of reproducibly growing ribbons that are narrow, long, and isolated from the environment.

In this new study, the researchers developed a method to catalytically grow narrow, long, and reproducible graphene nanoribbons directly within insulating hexagonal boron-nitride stacks, and demonstrated high performance in quantum switching devices based on the newly grown ribbons. The unique growth mechanism was revealed using advanced molecular dynamics simulation tools developed and implemented by the Israeli teams.

These calculations showed that ultra-low friction in certain growth directions within the boron-nitride crystal dictates the reproducibility of the structure of the ribbon, allowing it to grow to unprecedented lengths directly within a clean and isolated environment.

Theory of mind, the ability to understand other people's mental states, is what makes the social world of humans go around. It's what helps you decide what to say in a tense situation, guess what drivers in other cars are about to do, and empathize with a character in a movie. And according to a new study, the large language models (LLMs) that power ChatGPT and the like are surprisingly good at mimicking this quintessentially human trait.

“Before running the study, we were all convinced that large language models would not pass these tests, especially tests that evaluate subtle abilities to evaluate mental states,” says study coauthor Cristina Becchio, a professor of cognitive neuroscience at the University Medical Center Hamburg-Eppendorf in Germany. The results, which she calls “unexpected and surprising,” were published today, somewhat ironically, in the journal Nature Human Behaviour.

The results don’t have everyone convinced that we’ve entered a new era of machines that think like we do, however. Two experts who reviewed the findings advised taking them “with a grain of salt” and cautioned about drawing conclusions on a topic that can create “hype and panic in the public.” Another outside expert warned of the dangers of anthropomorphizing software programs.

Geoffrey Hinton, one of the “godfathers” of AI, is adamant that AI will surpass human intelligence — and worries that we aren’t being safe enough about its development.

This isn’t just his opinion, though it certainly carries weight on its own. In an interview with the BBC’s Newsnight program, Hinton claimed that the inevitability of AI surpassing human intelligence is in fact the consensus view among leaders in the field.

“Very few of the experts are in doubt about that,” Hinton told the BBC. “Almost everybody I know who is an expert on AI believes that they will exceed human intelligence — it’s just a question of when.”

Aaron Vick is a multi-x founder, former CEO, best-selling author, process and workflow nerd and early-stage/growth advisor focused on Web3.

The age of artificial intelligence (AI) is transforming the landscape of creativity, challenging our understanding of creator rights and digital identity. As AI becomes an integral part of the creative process, collaborating with human minds to push the boundaries of imagination and innovation, we find ourselves in a new era that demands reevaluating the essence of authorship.

This AI renaissance is not just about the tools we use to create; it is about the fundamental shift in how we perceive and value creativity. In a world where AI can generate art, music and literature that rivals the works of human creators, we must reconsider what it means to be an author, an artist or a creator. The lines between human and machine creativity are blurring, giving rise to new forms of expression and collaboration that were once unimaginable.

Neuralink to implant 2nd human with brain chip as 85% of threads retract. Neuralink’s first patient, 29-year-old Noland Arbaugh, opened up about the roller-coaster experience. “I was on such a high and then to be brought down that low. It was very, very hard,” Arbaugh said. “I cried.” What a disaster!


Algorithm tweaks made up for the loss, and Neuralink thinks it has a fix for the next patient.

At Microsoft’s AI press event, the company unveiled its latest Surface PCs with new AI Copilot features built-in. Check out all the highlights in our recap from Redmond, WA.

Everything Microsoft Just Announced: Copilot Plus PCs, Surface Pro and Laptop Running on Qualcomm https://bit.ly/3ynj8BQ

0:00 Intro
1:05 Copilot+ PC
2:30 Microsoft Copilot Update
4:14 Microsoft Copilot with Minecraft
6:33 Copilot+ PC NPU
7:29 Copilot+ PC Qualcomm Snapdragon X Elite
8:10 Copilot+ PC Surface Laptop and Surface Pro
8:50 Copilot+ PC Surface Laptop Specs
10:54 Copilot+ PC Surface Pro Specs
12:01 Surface Pro Flex Keyboard
12:35 Surface Slim Pen
12:50 Copilot+ PC Preorders and Availability


How do you define consciousness?


Some theories are even duking it out in a mano-a-mano test by imaging the brains of volunteers as they perform different tasks in clinical test centers across the globe.

But unlocking the neural basis of consciousness doesn’t have to be confrontational. Rather, theories can be integrated, wrote the authors, who were part of the Human Brain Project, a massive European endeavor to map and understand the brain, and specialize in decoding brain signals related to consciousness.

Not all authors agree on the specific brain mechanisms that allow us to perceive the outer world and construct an inner world of “self.” But by collaborating, they merged their ideas, showing that different theories aren’t necessarily incompatible. In fact, they could be consolidated into a general framework of consciousness and even inspire new ideas that help unravel one of the brain’s greatest mysteries.