
The world-famous Artificial Intelligence designer/expert Hugo de Garis has some horrific views on the future of technology. He demands people listen to his warnings wherever he goes. I thought I’d help him spread his nightmare with Camper Killer Commentary 17, “The Artilect War: The Nightmare of Hugo de Garis”. I hope you enjoy learning about your doom.

Hugo de Garis on AI: the story leading up to where we are now, and the possibilities for AI in the not-too-distant future. We have seen AI sprint past us in many cognitive domains, and in the coming decades we will likely see AI creep up on human-level intelligence in others — once this becomes apparent, AI will become a central political issue, and nations will try to out-compete each other in a dangerous AI arms race.
As AI encroaches further into areas of economic usefulness where humans traditionally dominated, how might we avoid uselessness and stay relevant? Merge with the machines, says Hugo.

Many thanks to Forms for the use of the track “Close” — check it out: https://www.youtube.com/watch?v=nFY0JbwrPlE | SoundCloud: https://soundcloud.com/forms308743226

Many thanks for tuning in!

Consider supporting SciFuture by:

What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan.
00:11 The concept of understanding is under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — ah ha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch all?
14:29 Is it possible to develop AGI that doesn’t understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic primitive understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copy the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world — which, when strong AI comes about, will dissolve into illusions and then tell us how they actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intelligence — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution, carbon-copy-like model that accurately reflects true nature, or is it a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in supersymmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or Deep Blue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers’ zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral ‘Turing’ tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 The AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?

Filmed in the Dandenong Ranges in Victoria, Australia.

Many thanks for watching!

DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program has successfully completed one experiment and is now moving on to even more difficult off-road landscapes at Camp Roberts, California, for trials set for September 15–27, according to a press release by the organization published last week.

Giving driverless combat vehicles off-road autonomy

The program has stated that its aim is “to give driverless combat vehicles off-road autonomy while traveling at speeds that keep pace with those driven by people in realistic situations.”

The artificial artist Dall-E 2 has now designed the Apple Car.

A hypothetical “AI-generated Apple Car”, making ingenious use of artificial intelligence technology, was created by Dall-E 2 in response to a text prompt from San Francisco-based industrial designer John Mauriello.

Mauriello focuses on advancing his one-of-a-kind craft by utilizing cutting-edge technologies. On DALL-E 2, an artificial intelligence system that can create realistic visuals and art from a description, he typed that he wanted a minimalist sports automobile, inspired by a MacBook and a Magic Mouse, made out of metal and glass. Additionally, he instructed the AI to style the design using the methods of Jony Ive, the former head of design at Apple.



On Wednesday, OpenAI released a new open source AI model called Whisper that recognizes and translates audio at a level that approaches human recognition ability. It can transcribe interviews, podcasts, conversations, and more.

OpenAI trained Whisper on 680,000 hours of audio data and matching transcripts in 98 languages collected from the web. According to OpenAI, this open-collection approach has led to “improved robustness to accents, background noise, and technical language.” It can also detect the spoken language and translate it to English.

As many as 350,000 open source projects are believed to be potentially vulnerable to exploitation as a result of a security flaw in a Python module that has remained unpatched for 15 years.

The open source repositories span a number of industry verticals, such as software development, artificial intelligence/machine learning, web development, media, security, and IT management.

The shortcoming, tracked as CVE-2007–4559 (CVSS score: 6.8), is rooted in the tarfile module, successful exploitation of which could lead to code execution from an arbitrary file write.
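The underlying issue is path traversal: `tarfile` has historically trusted member names such as `../../etc/cron.d/job`, so a crafted archive can write files outside the extraction directory. A minimal defensive wrapper might look like the sketch below (the `safe_extract` name and the check are illustrative, not part of the standard library):

```python
import os
import tarfile

def safe_extract(tar: tarfile.TarFile, dest: str) -> None:
    """Refuse to extract any archive member that would land outside dest."""
    dest = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        # A name like "../evil.txt" resolves outside dest -- the core of CVE-2007-4559.
        if os.path.commonpath([dest, target]) != dest:
            raise ValueError(f"blocked path traversal: {member.name!r}")
    tar.extractall(dest)
```

Newer Python releases (3.12+) add an extraction `filter` argument to `extractall` that performs similar sanity checks natively.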

https://youtube.com/watch?v=R0NP5eMY7Q8&feature=share

Quantum algorithms: An algorithm is a sequence of steps that leads to the solution of a problem. To execute these steps on a device, one must use the specific instruction sets that the device is designed to execute.

Quantum computing introduces different instruction sets that are based on a completely different idea of execution when compared with classical computing. The aim of quantum algorithms is to use quantum effects like superposition and entanglement to get the solution faster.
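As a toy illustration of superposition (my own sketch, not tied to any particular quantum SDK), a single qubit can be simulated as a pair of amplitudes, with the Hadamard gate turning the definite state |0⟩ into an equal superposition:

```python
import math

# A qubit is a pair of amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1;
# measuring yields 0 or 1 with probabilities |alpha|^2 and |beta|^2.
def hadamard(state):
    """Apply the Hadamard gate H = (1/sqrt(2)) * [[1, 1], [1, -1]]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    return tuple(abs(amp) ** 2 for amp in state)

zero = (1.0, 0.0)       # the definite |0> state
plus = hadamard(zero)   # equal superposition: both outcomes roughly 50% likely
```

Applying `hadamard` twice returns the original state — a tiny example of the interference effects that quantum algorithms exploit to reach solutions faster.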

Source:
Artificial Intelligence vs Artificial General Intelligence: Eric Schmidt Explains the Difference.

https://youtu.be/VFuElWbRuHM

Disclaimer:

Tool use has long been a hallmark of human intelligence, as well as a practical problem to solve for a vast array of robotic applications. But machines are still wonky at exerting just the right amount of force to control tools that aren’t rigidly attached to their hands.

To manipulate said tools more robustly, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with the Toyota Research Institute (TRI), have designed a system that can grasp tools and apply the appropriate amount of force for a given task, like squeegeeing up liquid or writing out a word with a pen.

The system, dubbed Series Elastic End Effectors, or SEED, uses soft bubble grippers and embedded cameras to map how the grippers deform over a six-dimensional space (think of an airbag inflating and deflating) and apply force to a tool. Using six degrees of freedom, the object can be moved left and right, up and down, and back and forth, as well as rolled, pitched, and yawed. The closed-loop controller — a self-regulating system that maintains a desired state without external intervention — uses SEED and visuotactile feedback to adjust the position of the robot arm in order to apply the desired force.
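The closed-loop idea can be sketched with a minimal proportional controller. This is my own toy illustration, not the actual SEED controller; `ToyContact`, the gain, and the tolerance values are invented for the example:

```python
def force_controller(desired, read_force, move, gain=0.05, tol=0.01, max_steps=200):
    """Proportional closed loop: nudge the tool until measured force matches desired."""
    for _ in range(max_steps):
        error = desired - read_force()   # feedback: compare measurement to target
        if abs(error) < tol:
            return True                  # converged within tolerance
        move(gain * error)               # small correction proportional to the error
    return False

# Toy plant: spring-like contact where force grows linearly with displacement.
class ToyContact:
    def __init__(self, stiffness=10.0):
        self.stiffness = stiffness
        self.position = 0.0
    def read_force(self):
        return self.stiffness * self.position
    def move(self, delta):
        self.position += delta

contact = ToyContact()
converged = force_controller(5.0, contact.read_force, contact.move)
```

In SEED the "measurement" step comes from cameras watching the bubble grippers deform (visuotactile feedback), and the corrections are six-degree-of-freedom arm motions rather than a scalar nudge.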

Those who are venturing into the architecture of the metaverse have already asked themselves this question: a playful environment where all formal dreams are possible, where determining aspects of architecture such as solar orientation, ventilation, and climate will no longer be necessary, and where — to Louis Kahn’s despair — there is no longer a dynamic of light and shadow, just an open and infinite field. The metaverse is an extension of various technologies, or, as some call it, a combination of several powerful technologies: augmented reality, virtual reality, mixed reality, artificial intelligence, blockchain, and a 3D world.

This technology is still under research. However, the metaverse seems poised to make a significant difference in the education domain, and its ability to connect students across the world on a single platform may bring positive change. But the metaverse is not only about remote learning; it is much more than that.

Architecture emerged on the construction site, at a time when there was no drawing, only experimentation. Over time, thanks to Brunelleschi and the Florence dome in the 15th century, we witnessed the first detachment from masonry, a social division of labor from which liberal art and mechanical art emerge. This detachment generated different challenges and placed architecture on an oneiric plane, tied to paper. In other words, we don’t build any structures, we design them. Now, six centuries later, it looks like we are getting ready to take another step away from the construction site, abruptly distancing ourselves from engineering and construction.