
A groundbreaking nanosurgical tool, about 500 times thinner than a human hair, could be transformative for cancer research, offering insights into treatment resistance that no other technology has been able to provide, according to a new study.

The high-tech double-barrel nanopipette, developed by University of Leeds scientists and applied to the global medical challenge of cancer, has enabled researchers to see for the first time how individual living cancer cells react to treatment and change over time, providing vital understanding that could help doctors develop more effective cancer medication.

The tool has two nanoscopic needles, meaning it can simultaneously inject into and extract a sample from the same cell, expanding its potential uses. The platform's high degree of semi-automation has also sped up the process dramatically, enabling scientists to extract data from many more individual cells with far greater accuracy and efficiency than previously possible, the study shows.

In the last decade, thanks to advances in AI, the internet of things, machine learning and sensor technologies, the fantasy of digital twins has taken off. BMW has created a digital twin of a production plant in Bavaria. Boeing is using digital twins to design airplanes. The World Economic Forum hailed digital twins as a key technology in the “fourth industrial revolution.” Tech giants like IBM, Nvidia, Amazon and Microsoft are just a few of the big players now providing digital twin capabilities to automotive, energy and infrastructure firms.

The inefficiencies of the physical world, so the sales pitch goes, can be ironed out in a virtual one and then reflected back onto reality. Test virtual planes in virtual wind tunnels, virtual tires on virtual roads. “Risk is removed” reads a recent Microsoft advertorial in Wired, and “problems can be solved before they happen.”

All of a sudden, Dirk Helbing and Javier Argota Sánchez-Vaquerizo wrote in a 2022 paper, “it has become an attractive idea to create digital twins of everything.” Cars, trains, ships, buildings, airports, farms, power plants, oil fields and entire supply chains are all being cloned into high-fidelity mirror images made of bits and bytes. Attempts are being undertaken to twin beaches, forests, apple orchards, tomato plants, weapons and war zones. As beaches erode, forests grow and bombs explode, so too will their twins, watched closely by technicians for signals to improve outcomes in the real world.

“They remove some of the magic,” said Dimitris Papailiopoulos, a machine learning researcher at the University of Wisconsin, Madison. “That’s a good thing.”

Training Transformers

Large language models are built around mathematical structures called artificial neural networks. The many “neurons” inside these networks perform simple mathematical operations on long strings of numbers representing individual words, transmuting each word that passes through the network into another. The details of this mathematical alchemy depend on another set of numbers called the network’s parameters, which quantify the strength of the connections between neurons.
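
That description can be made concrete with a toy sketch. Everything below is invented for illustration: the tiny vocabulary, the embedding values, and the weight matrix are all made-up stand-ins, and real transformers use thousands of dimensions, billions of learned parameters, and additional machinery such as attention. The sketch only shows a word's string of numbers passing through one layer of parameterized "neurons":

```python
import numpy as np

rng = np.random.default_rng(0)

# Each word is represented as a short string of numbers (its embedding).
# Real models use thousands of dimensions; four keeps the example readable.
embedding_dim = 4
vocab = {"the": 0, "cat": 1, "sat": 2}   # made-up toy vocabulary
embeddings = rng.normal(size=(len(vocab), embedding_dim))

# The layer's parameters: numbers quantifying the strength of the
# connections between neurons. In a trained model these are learned;
# here they are random placeholders.
W = rng.normal(size=(embedding_dim, embedding_dim))
b = rng.normal(size=embedding_dim)

def neuron_layer(x):
    """One layer's simple math: a weighted sum of the inputs, shifted by a
    bias, then passed through a nonlinearity (ReLU)."""
    return np.maximum(0.0, x @ W + b)

word_vector = embeddings[vocab["cat"]]   # the numbers representing "cat"
transformed = neuron_layer(word_vector)  # the word, transmuted by the layer
print(transformed)
```

Stacking many such layers, with the parameters tuned during training, is what lets the network turn each word that passes through it into another.
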

Apple quietly submitted a research paper last week related to its work on a multimodal large language model (MLLM) called MM1. Apple doesn't explain the meaning behind the name, but it may stand for MultiModal 1.

Being multimodal, MM1 is capable of working with both text and images. Overall, its capabilities and design are similar to the likes of Google’s Gemini or Meta’s open-source LLM Llama 2.

An earlier report from Bloomberg said Apple was interested in incorporating Google’s Gemini AI engine into the iPhone. The two companies are reportedly still in talks to let Apple license Gemini to power some of the generative AI features coming to iOS 18.