
Not everyone needs eight hours of sleep to function properly. Some people feel well-rested and show no negative effects of sleep deprivation even after just four hours of sleep, a trait that is likely the result of a genetic mutation.

A recent study has reported that a mutation in salt-induced kinase 3 (hSIK3-N783Y)—a gene critical for regulating sleep duration and depth—may be the reason why some people are natural short sleepers (NSS).

The findings of this study are published in Proceedings of the National Academy of Sciences.

As far back as we can observe in our Universe, time always behaved in exactly the same fashion we’re familiar with: ticking away, relentlessly, at the same rate for all observers. Bring your clock to the surface of the Earth? The bottom of the ocean? Into orbit in space? Near the event horizon of a black hole? Or speeding through intergalactic space at close to the speed of light? It doesn’t matter. The amount of time it takes for regular events to occur — for a second to tick by, for an atomic transition to occur, for a photon of a specific wavelength to have one “wave” pass by you, etc. — is going to be identical for any observer under any of those conditions. In fact, the rate at which time passes for any observer, one second per second as measured by their own clock, is something all observers can agree on.

Sure, relativity is weird in a lot of ways, both when you move close to the speed of light and when the curvature of spacetime is very strong. Lengths contract, time durations dilate, and different observers draw different conclusions about one another than about themselves. But time still passes, and relativity allows us to reconcile those differences. But what if we go to an unfamiliar place: what happens before the Big Bang? That’s what Justin Skit wants to know, asking:

“Can you help me understand what’s going on with time during cosmic inflation? I know inflation starts and then the big bang. But if the era before the big bang was timeless how does that work?”
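
The question of “before” is subtle, but the dilation effects mentioned above are at least easy to quantify. As a quick numerical aside (an illustration, not something from the article itself), here is a minimal Python sketch of the special-relativistic time dilation factor γ = 1/√(1 − v²/c²), which sets how much a moving clock appears to slow down:

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Time-dilation factor gamma = 1 / sqrt(1 - (v/c)^2) for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Example: a traveler at 99% of light speed ages one year
# while roughly seven years pass for an observer at rest.
for beta in (0.1, 0.9, 0.99, 0.9999):
    print(f"v = {beta:6.4f} c  ->  gamma = {lorentz_gamma(beta):8.3f}")
```

Both observers nonetheless agree on their own clocks ticking at one second per second; the disagreement is only about each other’s.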

Valve founder Gabe Newell’s neural chip company Starfish Neuroscience announced it’s developing a custom chip designed for next-generation, minimally invasive brain-computer interfaces—and it may be coming sooner than you think.

The company announced in a blog update that it’s creating a custom, ultra-low power neural chip in collaboration with R&D leader imec.

Starfish says the chip is intended for future wireless, battery-free brain implants capable of reading and stimulating neural activity in multiple areas simultaneously—a key requirement for treating complex neurological disorders involving circuit-level dysfunction. Those are the ‘read and write’ functions Newell has spoken about in previous talks on the subject.

Gabe Newell, co-founder of Valve, sat down with IGN for a chat about the company, the promise of VR, and Newell’s most bleeding edge project as of late, brain-computer interfaces (BCI).

Whenever I used to think about brain-computer interfaces (BCI), I typically imagined a world where the Internet was served up directly to my mind through cyborg-style neural implants—or basically how it’s portrayed in Ghost in the Shell. In that world, you can read, write, and speak to others without needing to lift a finger or open your mouth. It sounds fantastical, but the more I learn about BCI, the more I’ve come to realize that this wish list of functions is really only the tip of the iceberg. And when AR and VR converge with the consumer-ready BCI of the future, the world will be much stranger than fiction.

Be it Elon Musk’s latest company Neuralink—which is creating “minimally invasive” neural implants to suit a wide range of potential future applications—or Facebook directly funding research on decoding speech from the human brain, BCI seems to be taking an important step forward in its maturity. And while these well-funded companies can currently only push the technology forward for use in medical devices, thanks to regulatory hoops governing implants and their relative safety, eventually the technology will get to a point where it’s both safe and cheap enough to land in the brainpans of neurotypical consumers.

Although there’s really no telling when you or I will be able to pop into an office for an outpatient implant procedure (much like how corrective laser eye surgery is done today), we know at least that this particular future will undoubtedly come alongside significant advances in augmented and virtual reality. But before we consider where that future might lead us, let’s take a look at where things are today.

Researchers from Singapore and China have used a superconducting quantum processor to examine the phenomenon of quantum transport in unprecedented detail.

Gaining deeper insights into quantum transport—encompassing the flow of particles, magnetization, energy, and information through quantum channels—has the potential to drive significant innovations in next-generation technologies such as nanoelectronics and thermal management.

A complete understanding of quantum transport thus requires the ability to simulate and probe macroscopic and microscopic physics on equal footing.
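
The experiment itself runs on quantum hardware, but the kind of transport being probed can be illustrated classically for small, exactly solvable cases. Below is a minimal Python sketch (assuming a standard free-fermion tight-binding chain, chosen for illustration rather than taken from the study) that tracks particles flowing across an initial domain wall by evolving the one-body correlation matrix:

```python
import numpy as np

# Single-particle hopping Hamiltonian for an L-site tight-binding chain:
# H = -J * sum_i (c_i^dag c_{i+1} + h.c.)
L, J = 20, 1.0
H = -J * (np.eye(L, k=1) + np.eye(L, k=-1))

# Domain-wall initial state: left half filled, right half empty,
# encoded as a one-body correlation matrix C_ij = <c_i^dag c_j>.
C0 = np.diag([1.0] * (L // 2) + [0.0] * (L // 2))

# Free-fermion evolution of the correlation matrix under U = exp(-iHt).
evals, evecs = np.linalg.eigh(H)

def occupations(t):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    C = U @ C0 @ U.conj().T
    return np.real(np.diag(C))  # site occupations n_i(t)

# Particles transported across the central bond by time t.
for t in (0.0, 2.0, 5.0):
    n = occupations(t)
    print(f"t={t:4.1f}  right-half filling = {n[L//2:].sum():.3f}")
```

The right-half filling grows with time as particles spread across the central bond, a toy version of the particle transport that such experiments measure directly on hardware.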

Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine marks a shift toward task-agnostic models trained on large-scale, often internet-based, data. Recent research into smaller foundation models trained on specific literature, such as programming textbooks, has demonstrated that they can display capabilities similar or superior to those of large generalist models, suggesting a potential middle ground between small task-specific models and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications and leveraging data exclusively from Neurosurgery Publications.

METHODS:

We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction, using an artificial intelligence pipeline for quality control. Our final data set included 24,021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification.
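
For context on how such a model is evaluated, here is a minimal zero-shot classification sketch in Python using the public OpenAI CLIP checkpoint through the Hugging Face transformers library. The checkpoint name is the standard public one; the image path and candidate captions are hypothetical, and the study’s fine-tuned CNS-CLIP weights are not assumed to be available:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the public OpenAI CLIP checkpoint; the study's CNS-CLIP weights are
# a fine-tuned variant and are not assumed to be available here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical candidate captions and image path, for illustration only.
labels = [
    "a CT scan showing an epidural hematoma",
    "a CT scan showing a subdural hematoma",
    "a normal head CT scan",
]
image = Image.open("head_ct_example.png")

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1).squeeze()
for label, p in zip(labels, probs):
    print(f"{p:.3f}  {label}")
```

Zero-shot classification here simply means ranking candidate captions by image-text similarity, with no task-specific classifier head; fine-tuning on domain figure-caption pairs, as the study describes, shifts those similarity scores toward the specialty vocabulary.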