
Go to https://ground.news/sabine to get 40% Off the Vantage plan and see through sensationalized reporting. Stay fully informed on events around the world with Ground News.

Geoffrey Hinton recently ignited a heated debate with an interview in which he says he is very worried that we will soon lose control over superintelligent AI. Meta’s AI chief Yann LeCun disagrees. I think they’re both wrong. Let’s have a look.

🤓 Check out my new quiz app ➜ http://quizwithit.com/
💌 Support me on Donorbox ➜ https://donorbox.org/swtg.
📝 Transcripts and written news on Substack ➜ https://sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine.
📩 Free weekly science newsletter ➜ https://sabinehossenfelder.com/newsle
👂 Audio only podcast ➜ https://open.spotify.com/show/0MkNfXl
🔗 Join this channel to get access to perks ➜ / @sabinehossenfelder.
🖼️ On instagram ➜ / sciencewtg.

#science #sciencenews #artificialintelligence #ai #technews #tech #technology

What You Should Know:

– A glimmer of hope emerged today for rectal cancer patients as a collaborative effort between Case Western Reserve University (CWRU), Cleveland Clinic, and University Hospitals (UH) received a $2.78 million grant over five years from the National Institutes of Health's National Cancer Institute. This grant will fuel research leveraging artificial intelligence (AI) to personalize treatment for rectal cancer patients.

– The new research effort marks a significant step forward in the fight against rectal cancer. By harnessing the power of AI, researchers aim to develop more precise treatment strategies, ultimately improving patient outcomes and quality of life.

Star Trek: The Next Generation looks at sentience as consciousness, self-awareness, and intelligence—and that was actually pretty spot on. Sentience is the innate human ability to experience feelings and sensations without association or interpretation. “We’re talking about more than just code; we’re talking about the ability of a machine to think and to feel, along with having morality and spirituality,” Ishaani Priyadarshini, a Cybersecurity Ph.D. candidate from the University of Delaware, tells Popular Mechanics.

💡 AI is very clever and able to mimic sentience, but it never actually becomes sentient itself.

The very idea of consciousness has been heavily contested in philosophy for decades. The 17th-century philosopher René Descartes famously said, “I think, therefore I am.” A simple statement on the surface, but it was the result of his search for a statement that couldn’t be doubted. Think about it: he couldn’t doubt his own existence, because he was the one doing the doubting in the first place.

Researchers at the Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology) have demonstrated that, under specific conditions, humans can treat robots as co-authors of the results of their actions. The condition that enables this phenomenon is that the robot behaves in a human-like, social manner. Establishing mutual gaze and sharing a common emotional experience, such as watching a movie together, are key.

The study was published in Science Robotics and paves the way for understanding and designing the optimal circumstances for humans and robots to collaborate in the same environment.

The research study was coordinated by Agnieszka Wykowska, head of IIT’s Social Cognition in Human-Robot Interaction lab in Genova, as part of a project titled “Intentional Stance for Social Attunement,” which addresses the question of when and under what conditions people treat robots as intentional agents.

Several neuronal mechanisms have been proposed to account for the formation of cognitive abilities through postnatal interactions with the physical and sociocultural environment. Here, we introduce a three-level computational model of information processing and acquisition of cognitive abilities. We propose minimal architectural requirements to build these levels, and how the parameters affect their performance and relationships. The first sensorimotor level handles local nonconscious processing, here during a visual classification task. The second level or cognitive level globally integrates the information from multiple local processors via long-ranged connections and synthesizes it in a global, but still nonconscious, manner. The third and cognitively highest level handles the information globally and consciously. It is based on the global neuronal workspace (GNW) theory and is referred to as the conscious level. We use the trace and delay conditioning tasks to, respectively, challenge the second and third levels. Results first highlight the necessity of epigenesis through the selection and stabilization of synapses at both local and global scales to allow the network to solve the first two tasks. At the global scale, dopamine appears necessary to properly provide credit assignment despite the temporal delay between perception and reward. At the third level, the presence of interneurons becomes necessary to maintain a self-sustained representation within the GNW in the absence of sensory input. Finally, while balanced spontaneous intrinsic activity facilitates epigenesis at both local and global scales, the balanced excitatory/inhibitory ratio increases performance. We discuss the plausibility of the model in both neurodevelopmental and artificial intelligence terms.

Keywords: artificial consciousness; cognitive architecture; global neuronal workspace; synaptic epigenesis.
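The three-level architecture described in the abstract can be illustrated with a deliberately minimal toy sketch: local modules process a stimulus nonconsciously, a global stage integrates their outputs, and a winner-take-all workspace with recurrent activity stands in for the GNW's self-sustained representation. This is an illustrative simplification under my own assumptions (linear modules, mean-pooling integration, exponential decay), not the paper's actual model, and all class and variable names are invented here.

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalProcessor:
    """Level 1: a nonconscious sensorimotor module (toy linear classifier)."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0, 0.1, (n_out, n_in))

    def process(self, x):
        return np.tanh(self.W @ x)

class GlobalWorkspace:
    """Level 3 stand-in: winner-take-all workspace whose recurrence
    lets a representation persist after sensory input is removed."""
    def __init__(self, n_units, decay=0.9):
        self.state = np.zeros(n_units)
        self.decay = decay  # recurrent excitation sustaining the winner

    def broadcast(self, inputs):
        # Level 2: global (still nonconscious) integration of local outputs
        integrated = np.mean(inputs, axis=0)
        # "Ignition": only the strongest representation enters the workspace
        winner = np.argmax(np.abs(integrated))
        new = np.zeros_like(self.state)
        new[winner] = integrated[winner]
        # Recurrent decay term keeps the winning content active over time
        self.state = self.decay * self.state + new
        return self.state

# Two local processors observe the same 4-dimensional stimulus
modules = [LocalProcessor(4, 3) for _ in range(2)]
gw = GlobalWorkspace(3)

x = rng.normal(size=4)
state = gw.broadcast([m.process(x) for m in modules])

# With input removed, the representation decays slowly but persists,
# mimicking maintenance in the absence of sensory input
for _ in range(3):
    state = gw.broadcast([np.zeros(3), np.zeros(3)])
```

The persistence loop is the point of the sketch: in the abstract's terms, it is the self-sustained workspace activity that interneurons make possible during delay conditioning, here caricatured by a single decay parameter.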


Recent advances in deep learning have allowed artificial intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic, and cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace Theory (GWT) refers to a large-scale system integrating and distributing information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep-learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a unique, amodal Global Latent Workspace (GLW). Potential functional advantages of GLW are reviewed, along with neuroscientific implications.

Keywords: attention; broadcast; consciousness; grounding; latent space; multimodal translation.
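The core move in the GLW proposal, translating between pretrained latent spaces via a shared amodal workspace, can be sketched with a toy cycle-consistency objective: a latent vector is mapped into the workspace and back, and the translators are trained so the round trip reconstructs the original. This NumPy sketch assumes linear translators, frozen random "vision" latents as stand-ins for a pretrained encoder, and hand-derived gradients; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d_vis, d_glw = 8, 4  # modality latent dim > workspace dim (a bottleneck)

# Frozen latents, standing in for outputs of a pretrained vision encoder
X = rng.normal(size=(100, d_vis))

# Linear translators into and back out of the shared workspace
A = rng.normal(0, 0.1, (d_glw, d_vis))  # vision latent -> workspace
B = rng.normal(0, 0.1, (d_vis, d_glw))  # workspace -> vision latent

lr = 0.05
losses = []
for step in range(500):
    Z = X @ A.T          # encode into the workspace
    X_rec = Z @ B.T      # decode back (the cycle)
    err = X_rec - X
    losses.append(np.mean(err ** 2))  # cycle-consistency loss
    # Hand-derived gradients of the mean squared cycle loss
    gB = 2 * err.T @ Z / X.shape[0]
    gA = 2 * (err @ B).T @ X / X.shape[0]
    A -= lr * gA
    B -= lr * gB
```

Because `d_glw < d_vis`, the workspace forces a compressed, shared representation; with a second modality and its own translator pair, the same workspace vector `Z` could be decoded into either latent space, which is the "broadcast" idea in miniature.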

Copyright © 2021 Elsevier Ltd. All rights reserved.

Philosophical and theoretical debates on the multiple realisability of the cognitive have historically influenced discussions of the possible systems capable of instantiating complex functions like memory, learning, goal-directedness, and decision-making. These debates have had the corollary of undermining, if not altogether neglecting, the materiality and corporeality of cognition, treating material, living processes as “hardware” problems that can be abstracted out and, in principle, implemented in a variety of materials, in particular on digital computers and in the form of state-of-the-art neural networks. In sum, the matter in se has been taken not to matter for cognition. However, in this paper, we argue that the materiality of cognition, and the living, self-organizing processes that it enables, requires a more detailed assessment when understanding the nature of cognition and recreating it in the field of embodied robotics. Or, in slogan form: the matter matters for cognitive form and function. We pull from Active Matter Physics, Soft Robotics, and the Basal Cognition literature to suggest that the imbrication between material and cognitive processes is closer than standard accounts of multiple realisability suggest. In light of this, we propose upgrading the notion of multiple realisability from the standard version, which we call 1.0, to a more nuanced conception, 2.0, to better reflect recent empirical advancements, while at the same time averting many of the problems that have been raised for it. These fields are actively reshaping the terrain in which we understand materiality and how it enables, mediates, and constrains cognition.
We propose that taking the materiality of our embodied, precarious nature seriously furnishes an important research avenue for the development of embodied robots that autonomously value, engage, and interact with the environment in a goal-directed manner, in response to existential needs of survival, persistence, and, ultimately, reproduction. Thus, we argue that by placing further emphasis on the soft, active, and plastic nature of the materials that constitute cognitive embodiment, we can move further in the direction of autonomous embodied robots and Artificial Intelligence.

Keywords: active matter physics; artificial intelligence; basal cognition; embodied cognition; fine-grained functionalism; functionalism; multiple realisability; soft robotics.

Copyright © 2022 Harrison, Rorot and Laukaityte.

They were gradually replaced by AI.


A hot potato: CEOs, bosses, and those who make the technology love to assure people that artificial intelligence isn’t going to replace everyone’s jobs; it will merely augment them – working alongside humans to make life easier. Yet we keep hearing stories like the one about a writer whose employer fired his 60-person team and replaced them with an AI.

A writer using the pseudonym Benjamin Miller told the BBC that his company wanted to use AI to cut costs in early 2023. He led a team of more than 60 writers and editors who published blog posts and articles to promote a tech company that packages and resells data.

The new workflow involved feeding headlines into an AI model that would generate an outline based on the title. The writing team would then create articles based on these ideas, rather than coming up with their own, with Miller editing the final pieces.