The Indus Valley script dates back around 4,000 years but has yet to be deciphered. Can AI help decode it?
Recent technological advances have opened new possibilities for the development of advanced medical devices, including tiny robots that can safely move inside the human body. Some of these systems could help to simplify complex medical procedures, including delicate surgeries and the targeted delivery of drugs to specific sites.
THE MINIMAX lab at the University of Texas (UT) Austin specializes in the development of tiny robots for medical, environmental, and other applications. In a recent preprint paper on arXiv, researchers from this lab introduced a new 3D-printable and magnetically steerable capsule robot that could potentially help to diagnose and treat some gastrointestinal (GI) conditions.
“My motivation for GI health monitoring is deeply personal,” Fangzhou Xia, director of the MINIMAX lab at UT Austin and senior author of the paper, told Medical Xpress. “In 2022, when I was a postdoc at MIT, I experienced a severe GI medical episode involving repeated gallstone-induced bile duct blockage that ultimately required gallbladder removal surgery.”
Layoffs, consolidation, streaming losses, artificial intelligence and the rise of the creator economy are reshaping Hollywood, raising questions about whether the industry is just hitting a rough patch or in terminal decline.
China just released DuClaw, a new platform that lets anyone run OpenClaw AI agents instantly from a web browser without dealing with deployment, servers, or API keys. At the same time, researchers at Stanford introduced OpenJarvis, a framework that allows personal AI assistants to run entirely on your own computer instead of the cloud. Meanwhile, Google is using Gemini to build the largest flash flood dataset ever created, mapping millions of disaster events across the planet. And a new toolkit called gstack is turning AI coding into something far more autonomous, allowing AI systems to plan software, test applications, and review code automatically.
🧠 What You’ll See.
Baidu launches DuClaw to run OpenClaw AI agents directly from a browser.
SOURCE: https://pandaily.com/baidu-ai-cloud-l…
Stanford introduces OpenJarvis for fully local AI assistants.
SOURCE: https://www.marktechpost.com/2026/03/.…
Google uses Gemini to build the largest flash flood dataset ever created.
SOURCE: https://www.wsj.com/articles/google-t…
gstack toolkit organizes AI into automated software development workflows.
SOURCE: https://www.producthunt.com/products/.…
🚨 Why It Matters
These developments show how quickly artificial intelligence is moving toward more autonomous systems. From browser-based AI agents that run instantly to personal assistants that operate entirely on local machines, the way people interact with AI is changing rapidly. At the same time, large-scale AI systems are being used to analyze global disasters and predict floods, while new developer tools are allowing AI to plan, test, and review software almost like an engineering team.
Please see my latest Forbes article: The Rapid Trajectory of Artificial Intelligence: From Machine Learning Foundations to Generative Creativity, Agentic Autonomy, Human Augmentation, Neuromorphic Intelligence, and the Cyborg Horizon.
Thanks and have a great weekend!
Artificial intelligence continues to evolve at an accelerating pace, transitioning from narrow, data-driven tools to systems capable of reasoning and autonomous action.
Whole-brain cell mapping using AI
The researchers developed a highly multiplexed whole-mount staining technique, utilizing the repeated application of fluorescence in situ hybridization.
The technique, called mFISH3D, enables multiplexed mRNA staining in whole mouse organs and human tissue.
It can visualize 10 types of mRNA in an intact mouse brain.
This workflow provides a robust approach to studying selective cell vulnerability in disease.
Murakami et al. developed mFISH3D for multiplexed mRNA staining in whole-mouse organs and human tissue. Analysis of the stained mouse brains using the AI-driven ZenCell platform reveals unique cell populations activated by pharmacological perturbation. This workflow provides a robust approach to studying selective cell vulnerability in disease.
Machine-learned interatomic potential (MLIP) calculations successfully identify suitable dopants for a novel photocatalytic material, report researchers from the Institute of Science Tokyo. As demonstrated in their study, published in the Journal of the American Chemical Society, a materials informatics approach could predict which ions can be stably introduced into orthorhombic Sn3O4, a promising and recently discovered photocatalytic tin oxide.
Their experiments revealed that aluminum-doped samples achieved 16 times greater hydrogen production than the undoped material, paving the way for next-generation clean energy applications.
Building a sustainable hydrogen economy requires clean and efficient ways to produce hydrogen at scale. One particularly attractive approach is photocatalysis—using materials called photocatalysts to split water into hydrogen and oxygen utilizing sunlight.
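The dopant-screening idea described above can be sketched as a formation-energy ranking: a dopant that substitutes stably into the host lattice has a low (ideally negative) defect formation energy. The sketch below uses made-up numbers in place of a real MLIP; the candidate ions, every energy value, and the `rank_dopants` helper are illustrative assumptions, not values or code from the study.

```python
# Sketch of MLIP-style dopant screening via defect formation energies.
# All numbers are illustrative placeholders, not values from the study;
# a real workflow would obtain total energies from a trained
# machine-learned interatomic potential (MLIP), not a lookup table.

def formation_energy(e_doped, e_pristine, mu_host, mu_dopant):
    """Substitutional defect formation energy (eV):
    E_f = E(doped) - E(pristine) + mu(removed host atom) - mu(added dopant)."""
    return e_doped - e_pristine + mu_host - mu_dopant

E_PRISTINE = -100.0  # total energy of the undoped supercell (illustrative)
MU_SN = -3.8         # chemical potential of the substituted Sn atom (illustrative)

# Hypothetical candidates: MLIP total energy of the doped supercell
# plus the dopant's own chemical potential.
CANDIDATES = {
    "Al": {"e_doped": -101.5, "mu": -3.7},
    "Ga": {"e_doped": -100.4, "mu": -2.9},
    "In": {"e_doped": -99.8, "mu": -2.5},
}

def rank_dopants(candidates):
    """Return dopant symbols sorted from most to least stable substitution."""
    return sorted(
        candidates,
        key=lambda ion: formation_energy(
            candidates[ion]["e_doped"], E_PRISTINE, MU_SN, candidates[ion]["mu"]
        ),
    )

if __name__ == "__main__":
    for ion in rank_dopants(CANDIDATES):
        e_f = formation_energy(
            CANDIDATES[ion]["e_doped"], E_PRISTINE, MU_SN, CANDIDATES[ion]["mu"]
        )
        print(f"{ion}: E_f = {e_f:+.2f} eV")
```

With these placeholder energies the ranking happens to put aluminum first, loosely echoing the study's experimental result, but the point of the sketch is only the screening logic: compute a formation energy per candidate and sort.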
Current vision systems for robots and drones rely on 3D sensors that, although powerful, do not always keep up with the fast-paced, unpredictable movement of the real world. These systems often struggle to measure speed instantly or are too bulky and expensive for everyday use. Now, in a paper published in the journal Nature, scientists report how they have developed a 4D imaging sensor on a chip that creates 3D maps of an environment while simultaneously tracking the speed of moving objects.
The researchers built a focal plane array (FPA), a physical grid of 61,952 stationary pixels etched onto a single silicon chip. Each one is a tiny sensor that emits laser light toward a scene and detects the reflected signal.
To “see” its surroundings, the chip receives laser light fed in from an external source. This light is routed across the chip through a network of optical switches that sequentially direct it to groups of pixels. Each pixel then uses a technique called FMCW (frequency-modulated continuous-wave) LiDAR to measure the returning signal, which is later processed to determine distance and speed. In many LiDAR systems, one set of pixels sends the light and another receives it, but here all pixels both send and receive, making the system much more compact.
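The distance-and-speed recovery in FMCW LiDAR comes from mixing the returned light with the outgoing frequency chirp: on a triangular chirp, range offsets the beat frequency of both ramps equally, while a moving target's Doppler shift moves the two ramps in opposite directions. A minimal sketch of that arithmetic, with illustrative parameter values not taken from the Nature paper:

```python
# FMCW LiDAR beat-frequency math (sketch; parameters are illustrative).
# On a triangular chirp, range and Doppler separate as
#   f_up = f_range - f_doppler,  f_down = f_range + f_doppler
# (sign convention assumed: positive velocity = approaching target).

C = 3.0e8             # speed of light, m/s
BANDWIDTH = 1.0e9     # chirp bandwidth B, Hz
T_CHIRP = 10e-6       # single ramp duration T, s
WAVELENGTH = 1.55e-6  # laser wavelength, m

def range_and_velocity(f_up, f_down):
    """Recover distance (m) and radial velocity (m/s) from the
    up-ramp and down-ramp beat frequencies (Hz)."""
    f_range = (f_up + f_down) / 2    # common component -> distance
    f_doppler = (f_down - f_up) / 2  # differential component -> velocity
    distance = C * T_CHIRP * f_range / (2 * BANDWIDTH)  # R = c*T*f_b / (2B)
    velocity = WAVELENGTH * f_doppler / 2               # v = lambda*f_d / 2
    return distance, velocity
```

Under these assumed parameters, a target 15 m away approaching at 2 m/s would produce beat frequencies near 10 MHz with a roughly ±2.58 MHz Doppler split between the two ramps; feeding those beats back through `range_and_velocity` recovers the original distance and speed.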