
*Intelligence Beyond the Brain: morphogenesis as an example of the scaling of basal cognition*

*Description:*
Each of us takes the remarkable journey from physics to mind: we start life as a quiescent oocyte (collection of chemical reactions) and slowly change and acquire an advanced, centralized mind. How does unified complex cognition emerge from the collective intelligence of cells? In this talk, I will use morphogenesis to illustrate how evolution scales cognition across problem spaces. Embryos and regenerating organs produce very complex, robust anatomical structures and stop growth and remodeling when those structures are complete. One of the most remarkable things about morphogenesis is that it is not simply a feed-forward emergent process, but one that has massive plasticity: even when disrupted by manipulations such as damage or changing the sizes of cells, the system often manages to achieve its morphogenetic goal. How do cell collectives know what to build and when to stop? Constructing and repairing anatomies in novel circumstances is a remarkable example of the collective intelligence of a biological swarm. I propose that a multi-scale competency architecture is how evolution exploits physics to achieve robust machines that solve novel problems. I will describe what is known about developmental bioelectricity — a precursor to neurobiology which is used for cognitive binding in biological collectives, that scales their intelligence and the size of the goals they can pursue. I will also discuss the cognitive light cone model, and conclude with examples of synthetic living machines — a new biorobotics platform that uses some of these ideas to build novel primitive intelligences. I will end by speculating about ethics, engineering, and life in a future that integrates deeply across biological and synthetic agents.



The Zimin Foundation is a non-profit organization established by the Zimin family to aid education and science. The Foundation partners with distinguished universities and funds research and educational projects that combine academic excellence with high potential for positive, real-world impact. Since the start of the Russian invasion of Ukraine, the Zimin Foundation has been supporting researchers and students affected by the war. I spoke with philanthropist Boris Zimin, the head of the Zimin Foundation, about his perspective on modern education.

Julia Brodsky: From your experience working with various educational funds and organizations, what do you think should be the emphasis of modern education?

Boris Zimin: While most of the world is prioritizing STEM these days, I have been contemplating the necessity of a comprehensive humanitarian and ethical emphasis in education. Consider the example of education in Soviet Russia, which was well-known around the world for providing a solid STEM background. The goal of Soviet education was to raise workers and soldiers ready for building the “communist future,” and the questions of good and evil never arose — the pre-made answers to those questions were the prerogative of the ruling communist party. Such education produces savvy technical specialists and “effective managers” who are not used to questioning the social and humanitarian consequences of their actions. And that hasn’t changed in the modern, post-Soviet period, as demonstrated by the brutality of Russian forces in Ukraine. In my opinion, education should first and foremost focus on ethics.

What is limb regeneration and what species possess it? How is it achieved? What does this tell us about intelligence in biological systems and how could this information be exploited to develop human therapeutics? Well, in this video, we discuss many of these topics with Dr Michael Levin, Principal Investigator at Tufts University, whose lab studies anatomical and behavioural decision-making at multiple scales of biological, artificial, and hybrid systems.

Find Michael on Twitter — https://twitter.com/drmichaellevin.

Find me on Twitter — https://twitter.com/EleanorSheekey.


An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project was focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or “carebots” used in healthcare settings.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, and other people who require health monitoring or physical assistance,” says Veljko Dubljević, corresponding author of a paper on the work and an associate professor in the Science, Technology & Society program at North Carolina State University. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”

“For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”
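The triage dilemma above can be made concrete as code. The sketch below is a hypothetical, minimal rule-based decision function, not the researchers' actual algorithm: the `Patient` fields and the priority rules (urgency first, with consciousness as a tiebreaker and verbal demands carrying no weight) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    urgency: int          # 1 (low) .. 5 (critical); illustrative scale
    conscious: bool
    demands_priority: bool

def triage(patients):
    """Return patients in treatment order.

    Assumed rules: higher urgency is treated first; at equal urgency,
    an unconscious patient is treated before a conscious one; a verbal
    demand for priority carries no weight in the decision.
    """
    return sorted(
        patients,
        # -urgency sorts critical cases first; False (unconscious) sorts
        # before True (conscious) as the tiebreaker
        key=lambda p: (-p.urgency, p.conscious),
    )

queue = triage([
    Patient("A", urgency=5, conscious=False, demands_priority=False),
    Patient("B", urgency=2, conscious=True, demands_priority=True),
])
print([p.name for p in queue])  # -> ['A', 'B']
```

Even this toy version surfaces the paper's point: the rules encode ethical judgments (e.g. that consent from an unconscious patient is not required for urgent care), and those judgments must be made explicit before they can be programmed.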

So how should companies begin to think about ethical data management? What measures can they put in place to ensure that they are using consumer, patient, HR, facilities, and other forms of data appropriately across the value chain—from collection to analytics to insights?

We began to explore these questions by speaking with about a dozen global business leaders and data ethics experts. Through these conversations, we learned about some common data management traps that leaders and organizations can fall into, despite their best intentions. These traps include thinking that data ethics does not apply to your organization, that legal and compliance have data ethics covered, and that data scientists have all the answers—to say nothing of chasing short-term ROI at all costs and looking only at the data rather than their sources.

In this article, we explore these traps and suggest some potential ways to avoid them, such as adopting new standards for data management, rethinking governance models, and collaborating across disciplines and organizations. This list of potential challenges and remedies is not exhaustive; our research base was relatively small, and leaders could face many other obstacles, beyond our discussion here, to the ethical use of data. But what’s clear from our research is that data ethics needs both more and sustained attention from all members of the C-suite, including the CEO.

Ray Kurzweil explores how and when we might create human-level artificial intelligence at the January 2017 Asilomar conference organized by the Future of Life Institute.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

For more information on the BAI ‘17 Conference: AI-principles

Yoshua Bengio (MILA), Irina Higgins (DeepMind), Nick Bostrom (FHI), Yi Zeng (Chinese Academy of Sciences), and moderator Joshua Tenenbaum (MIT) discuss possible paths to artificial general intelligence.

The Beneficial AGI 2019 Conference: https://futureoflife.org/beneficial-agi-2019/

After our Puerto Rico AI conference in 2015 and our Asilomar Beneficial AI conference in 2017, we returned to Puerto Rico at the start of 2019 to talk about Beneficial AGI. We couldn’t be more excited to see all of the groups, organizations, conferences and workshops that have cropped up in the last few years to ensure that AI today and in the near future will be safe and beneficial. And so we now wanted to look further ahead to artificial general intelligence (AGI), the classic goal of AI research, which promises tremendous transformation in society. Beyond mitigating risks, we want to explore how we can design AGI to help us create the best future for humanity.

We again brought together an amazing group of AI researchers from academia and industry, as well as thought leaders in economics, law, policy, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day technical workshop to look more deeply at how we can create beneficial AGI, and we followed that with a 2.5-day conference, in which people from a broader AI background considered the opportunities and challenges related to the future of AGI and steps we can take today to move toward an even better future.

This lecture was recorded on February 3, 2003 as part of the Distinguished Science Lecture Series hosted by Michael Shermer and presented by The Skeptics Society in California (1992–2015).

Can there be freedom and free will in a deterministic world? Renowned philosopher and public intellectual Dr. Daniel Dennett, drawing on evolutionary biology, cognitive neuroscience, economics, and philosophy, argues that free will exists in a deterministic world for humans only, and that this gives us morality, meaning, and moral culpability. Weaving a richly detailed narrative, Dennett explains in a series of strikingly original arguments that, far from being an enemy of traditional explorations of freedom, morality, and meaning, the evolutionary perspective can be an indispensable ally. In Freedom Evolves, Dennett seeks to place ethics on the foundation it deserves: a realistic, naturalistic, potentially unified vision of our place in nature.

Dr. Daniel Dennett — Freedom Evolves: Free Will, Determinism, and Evolution

Watch some of the past lectures for free online.

This mini documentary takes a look at Elon Musk and his thoughts on artificial intelligence, giving examples of how it is being used today, from Tesla cars to Facebook and Instagram, and exploring what the future of artificial intelligence has in store for us, from the risks to Elon Musk’s Neuralink chip and robotics.

The video also covers the fundamentals of artificial intelligence and machine learning, breaking AI down into simple terms for beginners and explaining what AI is, along with neural nets.

Elon Musk’s Book Recommendations (Affiliate Links)
• Life 3.0: Being Human in the Age of Artificial Intelligence https://amzn.to/3790bU1

• Our Final Invention: Artificial Intelligence and the End of the Human Era.