
Tiny Robots Detect and Treat Cancer by Traveling Deep into the Lungs

A tiny robot that can travel deep into the lungs to detect and treat the first signs of cancer has been developed by researchers at the University of Leeds. The ultra-soft tentacle, which measures just two millimeters in diameter and is controlled by magnets, can reach some of the smallest bronchial tubes and could transform the treatment of lung cancer. The researchers tested the magnetic tentacle robot on the lungs of a cadaver and found that it could travel 37 percent deeper than standard equipment while causing less tissue damage. The work paves the way for a more accurate, tailored, and far less invasive approach to treatment.

The work is published in Nature Engineering Communications in the paper, “Magnetic personalized tentacles for targeted photothermal cancer therapy in peripheral lungs.”

“This new approach has the advantage of being specific to the anatomy, softer than the anatomy, and fully shape-controllable via magnetics,” notes Pietro Valdastri, PhD, director of the Science and Technologies Of Robotics in Medicine (STORM) Lab at the University of Leeds. “These three main features have the potential to revolutionize navigation inside the body.”

Substance Dualism (Part 1 of 2) [HD]

Examining the view that mind and body are separate substances.

Note at 7:08: A reductio ad absurdum argument (one which attributes thought to a machine purely for the sake of argument, to demonstrate that genuinely absurd or contradictory consequences follow) would be valid. We can see immediately that Plantinga’s thought experiment doesn’t achieve this: failure to discern how a thinking machine is thinking indicates only a lack of comprehension, not a genuine absurdity or contradiction.

But his use of Leibniz’s scenario isn’t valid. Leibniz doesn’t just propose a thinking machine, but one we can enter and inspect. If physical thinking things are impossible, as Plantinga claims, then whatever machine we conjure up in our imagination to enter and inspect cannot be a genuine physical thinking thing, just as it would be impossible to inspect a machine that prints square circles. (Besides, if there is truly nothing we could be faced with inside the machine that would signal thought, it makes no sense to ask us to inspect it, since no inspection could help us distinguish thinking machines from non-thinking ones anyway.) It is in this sense that Plantinga cannot use thinking machines to show that machines can’t think. His argument is incoherent, and it is certainly not a valid reductio ad absurdum.

Selected Resources:

Humanoid robot Asimo demonstration:

Descartes, R. Discourse on the Method (1637)

Machine learning enables discovery of DNA-stabilized silver nanoclusters

DNA can do more than pass genetic code from one generation to the next. For nearly 20 years, scientists have known of the molecule’s ability to stabilize nanometer-sized clusters of silver atoms. Some of these structures glow visibly in red and green, making them useful in a variety of chemical and biosensing applications.

Stacy Copp, UCI assistant professor of materials science and engineering, wanted to see if the capabilities of these tiny fluorescent markers could be stretched even further—into the near-infrared range of the electromagnetic spectrum—to give bioscience researchers the power to see through living cells and even centimeters of biological tissue, opening doors to enhanced methods of disease detection and treatment.

“There is untapped potential to extend fluorescence by DNA-stabilized silver nanoclusters into the near-infrared region,” she says. “The reason that’s so interesting is because our biological tissues and fluids are much more transparent to near-infrared light than to visible light.”
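
The article doesn’t spell out the model the Copp lab uses, but the general recipe, encoding each stabilizing DNA sequence as a numeric feature vector and training a classifier against measured fluorescence, can be sketched in a few lines. Everything below is illustrative: the sequences and labels are randomly generated placeholders, not data from the UCI study.

```python
# Minimal sketch: predict a nanocluster's fluorescence class from its
# stabilizing DNA sequence. Sequences and labels are synthetic placeholders,
# NOT data from the study; the real work trains on measured fluorescence
# from large libraries of candidate strands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Flatten a DNA string into a one-hot feature vector."""
    vec = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        vec[i, BASES.index(base)] = 1.0
    return vec.ravel()

# Synthetic 10-base sequences with made-up labels:
# 0 = dark, 1 = visible (red/green) emitter, 2 = near-infrared emitter.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), size=10)) for _ in range(500)]
labels = rng.integers(0, 3, size=500)  # placeholder labels

X = np.array([one_hot(s) for s in seqs])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random data
```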

This new tool could protect your pictures from AI manipulation

PhotoGuard, created by researchers at MIT, alters photos in ways that are imperceptible to us but that stop AI systems from tinkering with them.

Remember that selfie you posted last week? There’s currently nothing stopping someone from taking it and editing it using powerful generative AI systems. Even worse, thanks to the sophistication of these systems, it might be impossible to prove that the resulting image is fake.

The good news is that a new tool, created by researchers at MIT, could prevent this.
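
The researchers’ paper describes adversarial “immunization”: adding a perturbation too small to see that derails the editing model. One variant, the encoder attack, nudges the image’s latent representation toward that of an unrelated target so the generative editor effectively “sees” the wrong picture. Here is a minimal sketch of that idea with a toy stand-in encoder; the real attack targets a latent diffusion model’s image encoder, and all shapes and hyperparameters below are illustrative.

```python
# Sketch of an "encoder attack" in the spirit of PhotoGuard: find an
# imperceptible perturbation delta so that encoder(x + delta) matches the
# latent of an unrelated target image, confusing downstream generative
# editors. The tiny CNN is a stand-in for a diffusion model's VAE encoder.
import torch
import torch.nn as nn

encoder = nn.Sequential(  # toy stand-in for the editing model's encoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),
)

x = torch.rand(1, 3, 64, 64)     # the photo to "immunize"
x_target = torch.zeros_like(x)   # e.g., a plain gray/black target image
eps, step, iters = 8 / 255, 1 / 255, 100

with torch.no_grad():
    z_target = encoder(x_target)

delta = torch.zeros_like(x, requires_grad=True)
for _ in range(iters):  # projected gradient descent on the latent distance
    loss = ((encoder(x + delta) - z_target) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()  # move latent toward the target
        delta.clamp_(-eps, eps)            # keep the perturbation invisible
        delta.grad.zero_()

x_immunized = (x + delta).clamp(0, 1).detach()  # safe-to-post version
```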

AI Chatbots Are The New Job Interviewers

A few examples: McDonald’s chatbot recruiter “Olivia” cleared Claypool for an in-person interview, but then failed to schedule it because of technical issues. A Wendy’s bot managed to schedule her for an in-person interview, but for a job she couldn’t do. Then a Hardee’s chatbot sent her to interview with a store manager who was on leave, hardly a seamless recruiting strategy.

“I showed up at Hardees and they were kind of surprised. The crew operating the restaurant had no idea what to do with me or how to help me,” Claypool, who ultimately took a job elsewhere, told Forbes. “It seemed like a more complicated thing than it had to be,” she said. (McDonald’s and Hardee’s didn’t respond to requests for comment. A Wendy’s spokesperson told Forbes the bot creates “hiring efficiencies,” adding that “innovation is our DNA.”)

Micron Announces “Second Generation” HBM3 Memory For Generative AI Workloads

Since the first High-Bandwidth Memory (HBM) stacks were introduced in 2013, these stacked memory chiplets have carved out a new, high-performance niche for DRAM in the memory hierarchy. The first products to incorporate HBM started to appear in 2015. You will now find HBM stacks in high-end CPUs, GPUs, and FPGAs, where performance matters more than cost to the end customer. Although HBM is not as fast as SRAM, it is faster than bulk DDR memory, and it’s getting even faster. Micron has just announced what the company is calling a “second-generation” (Gen 2) HBM3 DRAM that adheres to the semiconductor industry’s “bigger, better, faster” mantra. Because it integrates higher-density DRAM die, Micron’s HBM3 Gen 2 offers 50 percent more capacity than the HBM3 DRAM available from other memory vendors, with 2.5x better performance-per-watt and 50 percent faster speed (1.2 TB/s). That’s the DRAM industry’s equivalent of a trifecta.
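
As a sanity check on that 1.2 TB/s figure: HBM3 uses a 1,024-bit-wide interface, and Micron has cited a per-pin data rate above 9.2 Gb/s for this part, so the per-stack bandwidth falls straight out of the arithmetic.

```python
# Back-of-the-envelope HBM stack bandwidth: pins x per-pin rate.
interface_width_bits = 1024   # standard HBM3 interface width
pin_rate_gbps = 9.2           # Micron's cited per-pin rate, in Gb/s

bandwidth_gb_per_s = interface_width_bits * pin_rate_gbps / 8   # gigabytes/s
print(f"{bandwidth_gb_per_s / 1000:.2f} TB/s per stack")        # ~1.18, i.e. ~1.2 TB/s
```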

HBM is akin to a chiplet skyscraper. Each HBM stack consists of a logic die, which contains a system interface and DRAM control, with multiple DRAM die stacked on top of the logic die. The entire stack is designed to be incorporated into an IC package along with larger semiconductor die and other, smaller chiplets. You’ll find earlier HBM generations incorporated into server CPUs from AMD and Intel, FPGAs from AMD and Intel, and GPUs from AMD, Intel, and Nvidia. The most recent announcement of a GPU containing HBM is AMD’s MI300X GPU, announced by the company’s CEO Lisa Su in June. (See “AMD Hops On The Generative AI Bandwagon With Instinct MI300X.”) Intel previously announced HBM use in the company’s Data Center GPU Max Series, the data center GPU formerly known as Ponte Vecchio. Nvidia’s H100 Hopper GPU also incorporates HBM memory stacks.

Micron has now entered the HBM3 race with what it calls a “second-generation” HBM3 design: Micron’s version of an HBM3 stack arrives about a year after competing products but is based on a faster, denser, more advanced semiconductor process node. The DRAM die in this Micron HBM3 stack are fabricated with the company’s 1β process technology, announced last year. At that announcement, Micron claimed the 1β process increased DRAM bit density by 35 percent and decreased power consumption by 15 percent compared with its 1α node. At the time, Micron announced plans to use the 1β node to manufacture DDR5 and LPDDR5 DRAM; it has now said it will use the same node to manufacture the DRAM die for its HBM3 stacks.

70% Of Generative AI Startups Rely On Google Cloud, AI Capabilities

Alphabet’s Q2 2023 earnings call highlighted the growing adoption of generative AI across the company’s cloud and product offerings.

CEO Sundar Pichai emphasized that over 70% of generative AI startups rely on Google’s cloud infrastructure and AI capabilities, a sign of the technology’s traction among emerging companies building new services on Google Bard and other Google models.

“Our AI-optimized infrastructure is a leading platform for training and serving generative AI models. More than 70% of gen AI unicorns are Google Cloud customers, including Cohere, Jasper, Typeface, and many more,” he said.

Hypermodal AI Converges Predictive, Causal And Generative AI

In software application development environments, the consensus is gravitating toward using AI as a helping and testing mechanism rather than giving it free rein to create software code on its own. The concern is that if so-called citizen developers, business laypeople, start creating code with software robots, they will never be able to wield the customization power (and the ability to cover security risks) that hard-core software developers have.

As we now grow with AI and become more assured about where its impact should be felt, we can logically look to the whole spectrum of automation it offers. This involves the concept of so-called hypermodal AI, i.e., intelligence capable of working in different ‘modes’: some that predict, some that help determine causes, and some that generate.

Dynatrace, which today describes itself as a unified observability and security platform company (IT vendors are fond of changing their opening ‘elevator pitch’ every few years), has now expanded its Davis AI engine to create hypermodal AI that converges fact-based predictive AI and causal AI insights with new generative AI capabilities.
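
Dynatrace hasn’t detailed Davis AI’s internals here, but the three-mode idea is easy to illustrate with a toy pipeline: a predictive step flags an anomaly, a causal step walks a dependency graph to a root cause, and a generative step drafts the human-readable summary. All names, thresholds, and topology below are hypothetical, not Dynatrace’s implementation (and a real system would use an LLM, not a template, for the generative step).

```python
# Toy illustration of the "hypermodal" idea: chain a predictive check,
# a causal root-cause walk, and a generative (here, templated) summary.
from statistics import mean, stdev

def predict_anomaly(latencies_ms, sigmas=3.0):
    """Predictive mode: flag the latest sample if it is a >3-sigma outlier."""
    mu, sd = mean(latencies_ms[:-1]), stdev(latencies_ms[:-1])
    return (latencies_ms[-1] - mu) > sigmas * sd

# Causal mode: a hand-written dependency graph standing in for topology data.
DEPENDS_ON = {"checkout": "payments", "payments": "db-primary"}

def root_cause(service):
    """Walk the dependency chain to its deepest upstream service."""
    while service in DEPENDS_ON:
        service = DEPENDS_ON[service]
    return service

def summarize(service, cause):
    """Generative mode (a template here; an LLM in a real system)."""
    return f"Latency anomaly on '{service}'; probable root cause: '{cause}'."

latencies = [102, 98, 101, 99, 103, 100, 97, 240]  # ms; last sample spikes
if predict_anomaly(latencies):
    print(summarize("checkout", root_cause("checkout")))
```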
