Nanorobots clean up contaminated water

Chemists have created nanorobots propelled by magnets that remove pollutants from water. The invention could be scaled up to provide a sustainable and affordable way of cleaning up contaminated water in treatment plants.

Martin Pumera at the University of Chemistry and Technology, Prague, in the Czech Republic and his colleagues developed the nanorobots using a temperature-sensitive polymer material and iron oxide. The polymer acts like a set of tiny hands that pick up and dispose of pollutants in the water, while the iron oxide makes the nanorobots magnetic. The researchers also added oxygen and hydrogen atoms to the iron oxide, which can attach to target pollutants.

The robots are about 200 nanometres wide and are powered by magnetic fields, which allow the team to control their movements.

Microsoft and Nvidia partner to build AI supercomputer in the cloud

A supercomputer, which provides massive amounts of computing power to tackle complex challenges, is typically out of reach for the average enterprise data scientist. But what if you could use cloud resources instead? That’s the approach Microsoft Azure and Nvidia are taking with this week’s announcement, timed to coincide with the SC22 supercomputing conference.

Nvidia and Microsoft announced that they are building a “massive cloud AI computer.” The supercomputer in question, however, is not an individually named system like Frontier at Oak Ridge National Laboratory or Perlmutter, the world’s fastest artificial intelligence (AI) supercomputer. Rather, the new AI supercomputer is a set of capabilities and services within Azure, powered by Nvidia technologies, for high-performance computing (HPC) workloads.

Tesla Will Get Robotaxi at Scale 5–10 Years Before Competitors $TSLA

Self-driving car company Argo AI failed when Ford and VW pulled the plug after spending over $3 billion. This is strong evidence that lidar-dependent self-driving has a long way to go. All of the self-driving car companies except Tesla and Comma were using lidar. Ford said removing the driver is more than 5 years away. Most robotaxi players depend on removing the driver for their business model to work well enough to reach serious scale. Five-plus years to get to the true starting point and five-plus years to scale translates to an 8-year lead for Tesla if Tesla solves robotaxi in 2 years. Uber had a 2.5-year lead over Lyft, and that meant three times the market share for Uber.
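
As a quick check of that arithmetic, here is a minimal Python sketch using the timeline figures stated above; every year value is the article's assumption, not data:

```python
# Timeline arithmetic from the paragraph above; all figures are assumptions.
YEARS_TO_REMOVE_DRIVER = 5   # Ford's estimate for lidar-based rivals to remove the driver
YEARS_TO_SCALE = 5           # assumed additional time to reach serious scale
TESLA_YEARS_TO_SOLVE = 2     # assumed time for Tesla to solve robotaxi

competitor_timeline = YEARS_TO_REMOVE_DRIVER + YEARS_TO_SCALE   # 10 years
tesla_lead = competitor_timeline - TESLA_YEARS_TO_SOLVE         # 8 years
print(f"Implied Tesla lead: {tesla_lead} years")
```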

Legal Personhood For AI Is Taking A Sneaky Path That Makes AI Law And AI Ethics Very Nervous Indeed

Would you like to see the classic magic trick of a rabbit being pulled out of a hat? I hope so since you are about to witness something ostensibly magical, though it has to do with Artificial Intelligence (AI) rather than rabbits and hats.

Here’s the deal.

A lot of debate is taking place about whether we ought to recognize AI with some form of legal personhood. Surprisingly, some believe that we can already shoehorn AI into legal personhood through a bit of corporate legal wrangling. Let’s see what this is all about.

Cellular Automata and Rule 30 (Stephen Wolfram) | AI Podcast Clips

Full episode with Stephen Wolfram (Apr 2020): https://www.youtube.com/watch?v=ez773teNFYA
Clips channel (Lex Clips): https://www.youtube.com/lexclips.
Main channel (Lex Fridman): https://www.youtube.com/lexfridman.
(more links below)

Podcast full episodes playlist:

Podcasts clips playlist:

Podcast website:
https://lexfridman.com/ai.

Podcast on Apple Podcasts (iTunes):
https://apple.co/2lwqZIr.

Podcast on Spotify:
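
For readers unfamiliar with the clip's topic: Rule 30 is Wolfram's elementary cellular automaton in which a cell's next state is its left neighbor XOR (its current state OR its right neighbor). Here is a minimal Python sketch, included for illustration only and not taken from the episode:

```python
# Rule 30 elementary cellular automaton: next = left XOR (center OR right).

def rule30_step(row):
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

def run_rule30(width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1  # start from a single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run_rule30()
```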

The Extended Mind | Andy Clark

Where does the mind end and the world begin? Is the mind locked inside its skull, sealed in with skin, or does it expand outward, merging with things and places and other minds that it thinks with? What if there are objects outside—a pen and paper, a phone—that serve the same function as parts of the brain, enabling it to calculate or remember?

In their famous 1998 paper “The Extended Mind,” philosophers Andy Clark and David J. Chalmers posed those questions and answered them provocatively: cognitive processes “ain’t all in the head.” The environment has an active role in driving cognition; cognition is sometimes made up of neural, bodily, and environmental processes.

From where he started in cognitive science in the early nineteen-eighties, taking an interest in A.I., Clark has moved quite far. “I was very much on the machine-functionalism side back in those days,” he says. “I thought that mind and intelligence were quite high-level abstract achievements where having the right low-level structures in place didn’t really matter.”

Each step he took, from symbolic A.I. to connectionism, from connectionism to embodied cognition, and now to predictive processing, took Clark farther away from the idea of cognition as a disembodied language and toward thinking of it as fundamentally shaped by the particular structure of its animal body, with its arms and its legs and its neuronal brain. He had come far enough that he now had to confront a question: If cognition was a deeply animal business, then how far could artificial intelligence go?

Clark knew that the roboticist Rodney Brooks had recently begun to question a core assumption of the whole A.I. project: that minds could be built of machines. Brooks speculated that one of the reasons A.I. systems and robots appeared to hit a ceiling at a certain level of complexity was that they were built of the wrong stuff—that maybe the fact that robots were not flesh made more of a difference than he’d realized.

Clark couldn’t decide what he thought about this. On the one hand, he was no longer a machine functionalist, exactly: he no longer believed that the mind was just a kind of software that could run on hardware of various sorts. On the other hand, he didn’t believe, and didn’t want to believe, that a mind could be constructed only out of soft biological tissue. He was too committed to the idea of the extended mind—to the prospect of brain-machine combinations, to the glorious cyborg future—to give it up. In a way, though, the structure of the brain itself had some of the qualities that attracted him to the extended-mind view in the first place: it was not one indivisible thing but millions of quasi-independent things, which worked seamlessly together while each had a kind of existence of its own.
