
Researchers have demonstrated a fully integrated microwave photonics system that combines optical and microwave signal processing on a single silicon chip.

The chip integrates high-speed modulators, optical filters, photodetectors, and transfer-printed lasers, making it a compact, self-contained, and programmable solution for high-frequency signal processing.

This breakthrough can replace bulky and power-hungry components, enabling faster wireless networks, low-cost microwave sensing, and scalable deployment in applications like 5G/6G.

Cybersecurity researchers have flagged several popular Google Chrome extensions that have been found to transmit data over unencrypted HTTP and hard-code secrets in their code, exposing users to privacy and security risks.

“Several widely used extensions […] unintentionally transmit sensitive data over simple HTTP,” said Yuanjing Guo, a security researcher in Symantec’s Security Technology and Response team. “By doing so, they expose browsing domains, machine IDs, operating system details, usage analytics, and even uninstall information, in plaintext.”

Because the network traffic is unencrypted, it is also susceptible to adversary-in-the-middle (AitM) attacks, allowing malicious actors on the same network, such as public Wi-Fi, to intercept and, even worse, modify this data, which could lead to far more serious consequences.
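To see why plaintext HTTP matters here, consider a hypothetical telemetry request of the kind described above (the URL, hostname, and field names below are invented for illustration, not taken from any actual extension). Anyone positioned on the network path sees the full request line and can recover every field with nothing more than standard URL parsing:

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical extension telemetry request as it would appear on the wire.
# Over plain HTTP, this entire line travels unencrypted.
raw_request_line = (
    "GET /v1/telemetry?machine_id=9f3a-77c2&os=Windows%2011"
    "&browsing_domain=bank.example.com HTTP/1.1"
)

# An on-path observer (e.g. on shared public Wi-Fi) parses the query string
# straight out of the captured request.
_, url, _ = raw_request_line.split(" ")
fields = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
print(fields)
```

With HTTPS, an observer would see only the destination IP and (via SNI) the hostname; the path and query string stay encrypted, which is why the hard-coded secrets and plaintext transport flagged by the researchers matter.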

“Scientists have shown that there is ultra-weak photon emission in the brain, but no one understands why the light is there.”

If light is at play and scientists can understand why, it could have major implications for medically treating brain diseases and drastically change the way physicians heal the brain. But measuring optical transport between neurons would be no easy task.

Our brain and nerves rely on incredibly fast electrical signals to communicate, a process long understood to involve tiny bursts of electricity called action potentials that travel along nerve fibers. But scientists are now exploring whether something else might also be part of this picture: light.

Yes—light, or more specifically, photons. Some researchers have suggested that nerves might not only use electrical impulses but could also send signals using photons, the same particles that make up visible light. This idea is based on the possibility that the fatty coating around nerves, called the myelin sheath, could act like an optical fiber—just like the cables used to carry internet signals using light.

In earlier work, the researchers behind this new study proposed that light might actually be generated in specific parts of the nerve called nodes of Ranvier, which are tiny gaps in the myelin sheath that help boost the electrical signal. Now, they’ve gone a step further: using a special photographic technique involving silver ions, they’ve found physical evidence of photons being emitted from these nodes during nerve activity.

Their experiments suggest that, alongside the familiar electrical signals, nerves might also be emitting light when they fire—shining a new light, literally and figuratively, on how our nervous system might work.


What if accessing knowledge, which used to require hours of analyzing handwritten scrolls or books, could be done in mere moments?

Throughout history, the way humans acquire knowledge has experienced great revolutions. The birth of writing and books altered learning, allowing ideas to be preserved and shared across generations. Then came the Internet, connecting billions of people to vast information at their fingertips.

Today, we stand at another shift: the age of AI tools, where AI doesn’t just give us answers—it provides reliable, tailored responses in seconds. We no longer need to gather and evaluate the correct information for our problems. If knowledge is now a tool everyone can hold, the real revolution starts when we use this superpower to solve problems and improve the world.

At the heart of this breakthrough – driven by Japan’s National Institute of Information and Communications Technology (NICT) and Sumitomo Electric Industries – is a 19-core optical fiber with a standard 0.125 mm cladding diameter, designed to fit seamlessly into existing infrastructure and eliminate the need for costly upgrades.

Each core acts as an independent data channel, collectively forming a “19-lane highway” within the same space as traditional single-core fibers.

Unlike earlier multi-core designs limited to short distances or specialized wavelength bands, this fiber operates efficiently across the C and L bands (commercial standards used globally) thanks to a refined core arrangement that slashes signal loss by 40% compared to prior models.

Back in 2018, a scientist from the University of Texas at Austin proposed a protocol to generate randomness in a way that could be certified as truly unpredictable. That scientist, Scott Aaronson, now sees that idea become a working reality. “When I first proposed my certified randomness protocol in 2018, I had no idea how long I’d need to wait to see an experimental demonstration of it,” said Aaronson, who now directs a quantum center at a major university.

The experiment was carried out on a cutting-edge 56-qubit quantum computer, accessed remotely over the internet. The machine belongs to a company that recently made a significant upgrade to its system. The research team included experts from a large bank’s tech lab, national research centers, and universities.

To generate certified randomness, the team used a method called random circuit sampling, or RCS. The idea is to feed the quantum computer a series of tough problems, known as challenge circuits. The computer must solve them by choosing among many possible outcomes in a way that’s impossible to predict. Then, classical supercomputers step in to confirm whether the answers are genuinely random or not.
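The verification step above can be sketched in miniature. In this toy, a seeded probability distribution stands in for the classically simulated output distribution of a challenge circuit, and the linear cross-entropy benchmark (XEB) scores whether returned samples are consistent with it; honest samples score near 1, while uniform guessing scores near 0. All sizes and seeds are illustrative, not the experiment's actual parameters:

```python
import random

def ideal_distribution(seed, n_qubits):
    """Stand-in for classically simulating a challenge circuit: a seeded,
    exponentially distributed (Porter-Thomas-like) distribution over all
    2^n bitstrings."""
    rng = random.Random(seed)
    weights = [rng.expovariate(1.0) for _ in range(2 ** n_qubits)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_device(probs, shots, rng):
    """Stand-in for the quantum computer: samples bitstrings from the
    circuit's output distribution (an honest device, in this toy)."""
    return rng.choices(range(len(probs)), weights=probs, k=shots)

def linear_xeb(probs, samples):
    """Linear cross-entropy benchmark: ~1 for ideal samples, ~0 for
    uniform random guessing."""
    n = len(probs)
    return n * sum(probs[s] for s in samples) / len(samples) - 1.0

rng = random.Random(0)
probs = ideal_distribution(seed=42, n_qubits=10)
honest = linear_xeb(probs, sample_device(probs, shots=5000, rng=rng))
spoofed = linear_xeb(probs, [rng.randrange(len(probs)) for _ in range(5000)])
print(f"honest XEB ~ {honest:.2f}, uniform-guess XEB ~ {spoofed:.2f}")
```

The real protocol's security rests on the circuits being too hard to simulate quickly, so a high XEB score returned fast enough certifies that the bits came from genuine quantum sampling rather than classical spoofing.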

HELSINKI — Chinese commercial satellite manufacturer MinoSpace has won a major contract to build a remote sensing satellite constellation for Sichuan Province, under a project approved by the country’s top economic planner.

Beijing-based MinoSpace won the bid for the construction of a “space satellite constellation,” the National Public Resources Trading Platform (Sichuan Province) announced May 18, Chinese language Economic Observer reported.

The contract is worth 804 million yuan (around $111 million) and the constellation has been approved by the National Development and Reform Commission (NDRC), China’s top economic planning agency, signaling potential alignment with national satellite internet and remote sensing infrastructure goals.

Whenever I used to think about brain-computer interfaces (BCI), I typically imagined a world where the Internet was served up directly to my mind through cyborg-style neural implants—or basically how it’s portrayed in Ghost in the Shell. In that world, you can read, write, and speak to others without needing to lift a finger or open your mouth. It sounds fantastical, but the more I learn about BCI, the more I’ve come to realize that this wish list of functions is really only the tip of the iceberg. And when AR and VR converge with the consumer-ready BCI of the future, the world will be much stranger than fiction.

Be it Elon Musk’s company Neuralink, which is creating “minimally invasive” neural implants to suit a wide range of potential future applications, or Facebook directly funding research on decoding speech from the human brain, BCI seems to be taking an important step forward in its maturity. And while regulatory hoops governing implants and their relative safety mean these well-funded companies can only push the technology forward as medical devices today, eventually the technology will get to a point where it’s both safe and cheap enough to land in the brainpans of neurotypical consumers.

Although there’s really no telling when you or I will be able to pop into an office for an outpatient implant procedure (much like how corrective laser eye surgery is done today), we know at least that this particular future will undoubtedly come alongside significant advances in augmented and virtual reality. But before we consider where that future might lead us, let’s take a look at where things are today.

Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine shows a shift toward task-agnostic models using large-scale, often internet-based, data. Recent research into smaller foundation models trained on specific literature, such as programming textbooks, demonstrated that they can display capabilities similar or superior to those of large generalist models, suggesting a potential middle ground between small task-specific and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications, leveraging data exclusively from Neurosurgery Publications.

METHODS:

We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction using an artificial intelligence pipeline for quality control. Our final data set included 24,021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification.
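Fine-tuning CLIP on figure-caption pairs optimizes its symmetric contrastive (InfoNCE) objective: each figure embedding should match its own caption embedding and no other in the batch. A minimal NumPy sketch of that loss, with random vectors standing in for the image and text embeddings (the batch size, dimensions, and temperature below are illustrative, not the paper's settings):

```python
import numpy as np

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss used by CLIP, averaged over the
    image->text and text->image directions."""
    # L2-normalize so dot products are cosine similarities.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (batch, batch) similarity matrix
    labels = np.arange(len(logits))           # figure i pairs with caption i

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

rng = np.random.default_rng(0)
batch, dim = 8, 512                           # illustrative sizes
images = rng.normal(size=(batch, dim))
texts = rng.normal(size=(batch, dim))
print(f"loss on random embeddings:  {clip_loss(images, texts):.3f}")
print(f"loss on aligned embeddings: {clip_loss(images, images):.3f}")
```

Unaligned random embeddings score near log(batch), while perfectly aligned pairs drive the loss toward zero; fine-tuning moves the real figure and caption embeddings in that direction.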