A Bank of America analyst says quantum computing technology ‘will reset everything,’ including the future of AI.
Virtual labs may be a new artificial intelligence-driven tool to turbocharge scientific discovery.
Researchers from the University of Surrey, the University of Oxford and Cognitive Neurotechnology have developed a personalized brain stimulation system, powered by artificial intelligence (AI), that can safely enhance concentration from home. Designed to adapt to individual characteristics, the system could help people improve focus during study, work, or other mentally demanding tasks.
Published in npj Digital Medicine, the study is based on a patented approach that uses non-invasive brain stimulation alongside adaptive AI to maximize its impact.
The technology uses transcranial random noise stimulation (tRNS)—a gentle and painless form of electrical brain stimulation—and an AI algorithm that learns to personalize stimulation based on individual features, including attention level and head size.
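As an illustration only (this is not the study's patented algorithm, and the function name, parameters, and update rule are all hypothetical), a closed-loop personalization scheme of this kind can be sketched as a controller that nudges stimulation intensity toward whatever level best supports the user's measured attention:

```python
def adapt_intensity(intensity, attention, target=0.8, step=0.05,
                    lo=0.1, hi=1.0):
    """Toy closed-loop rule: raise stimulation intensity when measured
    attention falls below a target, lower it when above, and clamp the
    result to a safe range. All values are illustrative, not clinical."""
    if attention < target:
        intensity += step
    else:
        intensity -= step
    return max(lo, min(hi, intensity))
```

A real system would replace this fixed-step rule with a learned model conditioned on individual features such as baseline attention and head size, as the article describes.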
Refraction, the bending of light as it passes through different media, has long been constrained by physical laws that prevent independent control over how light waves traveling in different directions bend. Now, UCLA researchers have developed a new class of passive materials that can be structurally engineered to "program" refraction, enabling arbitrary control over the bending of light waves.
In a study published in Nature Communications, a team led by Dr. Aydogan Ozcan, the Chancellor’s Professor of Electrical & Computer Engineering at UCLA, has introduced a novel device called a refractive function generator (RFG) that can independently tailor the output direction of refracted light for each input direction. This device allows light to be steered, filtered, or redirected according to custom-designed rules—far beyond what standard materials or traditional metasurfaces can achieve.
Standard refraction, described by Snell’s law, links the input and output directions of light using fixed material properties. Even advanced metasurface designs only allow limited tunability of refraction.
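Snell's law ties the output angle to the input angle through the two refractive indices alone: n1 sin θ1 = n2 sin θ2. A minimal Python sketch (illustrative, not from the study) computes this fixed mapping and contrasts it with the arbitrary direction-to-direction rules an RFG is designed to realize:

```python
import math

def snell_refraction(theta_in_deg, n1=1.0, n2=1.5):
    """Snell's law: n1 * sin(theta_in) = n2 * sin(theta_out).
    Returns the refracted angle in degrees, or None when the
    argument exceeds 1 (total internal reflection, no refracted ray)."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# By contrast, a refractive function generator aims to realize an
# arbitrary, designer-chosen mapping from each input direction to an
# output direction -- something a uniform material cannot do.
custom_mapping = {0.0: 15.0, 30.0: -10.0, 45.0: 45.0}  # hypothetical rules
```

For example, light entering glass (n2 = 1.5) from air at 30° bends to about 19.5°, and the same physics fixes every other angle too; the RFG's point is to break exactly that rigidity.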
A vulnerability in Google’s Gemini CLI allowed attackers to silently execute malicious commands and exfiltrate data from developers’ computers using allowlisted programs.
The flaw was discovered and reported to Google by the security firm Tracebit on June 27, with the tech giant releasing a fix in version 0.1.14, which became available on July 25.
Gemini CLI, first released on June 25, 2025, is a command-line interface tool developed by Google that enables developers to interact directly with Google’s Gemini AI from the terminal.
Using 3D holograms polished by artificial intelligence, researchers introduce a lean, eyeglass-like 3D headset that they say is a significant step toward passing the “Visual Turing Test.”
“In the future, most virtual reality displays will be holographic,” said Gordon Wetzstein, a professor of electrical engineering at Stanford University, holding his lab’s latest project: a virtual reality display that is not much larger than a pair of regular eyeglasses. “Holography offers capabilities that we can’t get with any other type of display in a package that is much smaller than anything on the market today.”
Holography is a Nobel Prize-winning 3D display technique that uses both the intensity of light reflecting from an object, as with a traditional photograph, and the phase of the light (the way the waves synchronize), to produce a hologram, a highly realistic three-dimensional image of the original object.
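The intensity-plus-phase idea can be made concrete: a hologram records the interference of an object wave with a reference wave, and the cross terms of |O + R|² preserve the object's phase, which a plain photograph (|O|² alone) discards. A minimal sketch using complex amplitudes (illustrative only, not the Stanford system):

```python
import cmath
import math

def hologram_intensity(obj_amp, obj_phase, ref_amp=1.0, ref_phase=0.0):
    """Recorded intensity of an object wave O interfering with a
    reference wave R: |O + R|^2. Unlike |O|^2, this value changes
    with the object's phase, so the phase is encoded in the record."""
    O = obj_amp * cmath.exp(1j * obj_phase)
    R = ref_amp * cmath.exp(1j * ref_phase)
    return abs(O + R) ** 2
```

With equal amplitudes, in-phase waves record maximum intensity while out-of-phase waves cancel; that phase sensitivity is what lets the hologram reconstruct a three-dimensional image.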
AI experts warn that AI could eliminate millions of jobs, and advocates for Universal Basic Income believe such a system might become necessary.