

Since ferrofluids are easy to control and offer great flexibility with fast motion, they are often preferred by scientists for producing shape-shifting soft robots. In 2015, a team of researchers in South Korea created ferrofluid soft robots capable of mimicking an amoeba’s movements. Another group of researchers from Arizona State University developed a miniature shape-altering robot in 2021 using ferrofluids.

There has been a lot of buzz about quantum computers and for good reason. The futuristic computers are designed to mimic what happens in nature at microscopic scales, which means they have the power to better understand the quantum realm and speed up the discovery of new materials, including pharmaceuticals, environmentally friendly chemicals, and more. However, experts say viable quantum computers are still a decade away or more. What are researchers to do in the meantime?

A new Caltech-led study in the journal Science describes how machine learning tools, run on classical computers, can be used to make predictions about quantum systems and thus help researchers solve some of the trickiest physics and chemistry problems. While this notion has been shown experimentally before, the new report is the first to mathematically prove that the method works.

“Quantum computers are ideal for many types of physics and materials science problems,” says lead author Hsin-Yuan (Robert) Huang, a graduate student working with John Preskill, the Richard P. Feynman Professor of Theoretical Physics and the Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Institute for Quantum Science and Technology (IQIM). “But we aren’t quite there yet and have been surprised to learn that classical machine learning methods can be used in the meantime. Ultimately, this paper is about showing what humans can learn about the physical world.”
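The idea of predicting quantum properties with classical machine learning can be illustrated with a toy sketch. This is not the paper's actual method; it is a minimal, hypothetical example in which kernel ridge regression, trained on classically computed data, learns to predict the ground-state energy of a simple parameterized two-level Hamiltonian.

```python
import numpy as np

# Toy family of Hamiltonians H(x) = [[x, 1], [1, -x]]; the exact
# ground-state energy is -sqrt(1 + x^2).
def ground_energy(x):
    H = np.array([[x, 1.0], [1.0, -x]])
    return np.linalg.eigvalsh(H)[0]  # smallest eigenvalue

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, 40)
y_train = np.array([ground_energy(x) for x in x_train])

# Gaussian (RBF) kernel between two 1-D sample sets.
def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Kernel ridge regression: solve (K + lam*I) alpha = y.
lam = 1e-6
K = rbf(x_train, x_train)
alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

# Predict at a held-out point and compare with the exact value.
x_test = np.array([0.5])
pred = rbf(x_test, x_train) @ alpha
print(pred[0], ground_energy(0.5))
```

The point of the sketch is that the model never diagonalizes the test Hamiltonian; it generalizes from training data alone, which is the spirit of using classical learning methods while full-scale quantum computers remain out of reach.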

Nanoengineers at the University of California San Diego have developed microscopic robots, called microrobots, that can swim around in the lungs, deliver medication and be used to clear up life-threatening cases of bacterial pneumonia.

In mice, the microrobots safely eliminated pneumonia-causing bacteria in the lungs and resulted in 100% survival. By contrast, untreated mice all died within three days after infection.

The results are published Sept. 22 in Nature Materials.

The world-famous artificial intelligence designer and expert Hugo de Garis has some horrific views on the future of technology. He demands people listen to his warnings wherever he goes. I thought I’d help him spread his nightmare with Camper Killer Commentary 17, “The Artilect War. The Nightmare of Hugo de Garis”. I hope you enjoy learning about your doom.

Hugo de Garis on AI, the story leading up to where we are now, and the possibilities for AI in the not-too-distant future. We have seen AI sprint past us in many cognitive domains, and in the coming decades we will likely see AI creep up on human-level intelligence in other domains. Once this becomes apparent, AI will become a central political issue, and nations will try to out-compete each other in a dangerous AI arms race.
As AI encroaches further into areas of economic usefulness where humans traditionally dominated, how might we avoid uselessness and stay relevant? Merge with the machines, says Hugo.

Many thanks to Forms for the use of the track “Close” — check it out: https://www.youtube.com/watch?v=nFY0JbwrPlE | SoundCloud: https://soundcloud.com/forms308743226

Many thanks for tuning in!

Consider supporting SciFuture.

What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford and Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI.
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (And, by extension, between AGI and artificial understanding?)
05:08 Aha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence, debugging, and aha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch all?
14:29 Is it possible to develop AGI that doesn’t understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches: engineering, and copying the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world which, when strong AI comes about, will dissolve into illusions and then reveal how things actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intellect: data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution, carbon-copy model that accurately reflects true nature, or is it a mechanical process?
37:04 Does understanding come in gradients or topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism: James Gates and error correction in supersymmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation and the concept of the Artilect: does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny (the transcension hypothesis), therefore civilizations go tiny: an explanation for the Fermi Paradox.
56:36 Why would *all* civilizations go tiny? Why not go tall, wide, and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or Deep Blue chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines: David Chalmers’ zombie argument.
01:07:26 Complex-enough algorithms: is there a critical point of complexity beyond which general intelligence, or understanding, likely emerges?
01:08:11 Revisiting behavioral ‘Turing’ tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 The AlphaGo documentary is worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full automation while preserving human dignity?

Filmed in the Dandenong Ranges in Victoria, Australia.

Many thanks for watching!

DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program has successfully completed one experiment and is now moving on to even more difficult off-road landscapes at Camp Roberts, California, for trials set for September 15–27, according to a press release by the organization published last week.

Giving driverless combat vehicles off-road autonomy

The program has stated that its aim is “to give driverless combat vehicles off-road autonomy while traveling at speeds that keep pace with those driven by people in realistic situations.”

The artificial artist DALL-E 2 has now designed the Apple Car.

A hypothetical “AI-generated Apple Car,” ingeniously making use of artificial intelligence technology, was created by DALL-E 2 in response to a text prompt from San Francisco-based industrial designer John Mauriello.

Mauriello focuses on advancing his one-of-a-kind craft by utilizing cutting-edge technologies. He typed into DALL-E 2, an artificial intelligence system that can create realistic visuals and art from a description, that he wanted a minimalist sports car made of metal and glass, inspired by a MacBook and a Magic Mouse. Additionally, he instructed the AI to style the design using the methods of Jony Ive, the former head of design at Apple.