
“An Unscientific American” discusses the resignation of Laura Helmuth from her position as editor-in-chief at Scientific American. The author, Michael Shermer, argues that her departure exemplifies the risks of blending facts with ideology in scientific communication.

Helmuth faced backlash after posting controversial political remarks on social media, which led to public criticism and her eventual resignation. Shermer reflects on how the magazine’s editorial direction has shifted towards progressive ideology, suggesting this has compromised its scientific integrity. He notes that had Helmuth made disparaging comments about liberal viewpoints, the consequences would likely have been more severe.

The article critiques Scientific American for endorsing positions on gender and race that Shermer sees as ideologically driven rather than based on scientific evidence. He expresses concern that such ideological capture within scientific publications can distort facts and undermine credibility.

For more details, you can read the full article here.

About the Author.
Michael Shermer is a prominent science writer and the founder of the Skeptics Society. He is known for his work promoting scientific skepticism and questioning pseudoscience. Shermer is also the author of several books on belief, morality, and the nature of science, including The Believing Brain and The Moral Arc.
https://quillette.com/2024/11/21/an-unscientific-american-sc…signation/

Quillette is an Australia-based online magazine that focuses on long-form analysis and cultural commentary. It is politically non-partisan and relies on reason, science, and humanism as its guiding values.

Quillette was founded in 2015 by Australian writer Claire Lehmann. It is a platform for free thought and a space for open discussion and debate on a wide range of topics, including politics, culture, science, and technology.

While large language models (LLMs) are trained on massive, diverse datasets, small language models (SLMs) concentrate on domain-specific data, which often comes from within the enterprise. This makes SLMs tailored to particular industries or use cases, supporting both relevance and data privacy.
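To make this concrete, here is a minimal sketch of how such domain adaptation is often done in practice: a small open model is fine-tuned on an organisation's own text so the data never leaves the local environment. This is an illustrative example rather than anything from the article; the model name (distilgpt2), the file internal_docs.txt, and the hyperparameters are placeholder assumptions.

# Fine-tune a small causal language model on an internal, domain-specific corpus.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"  # placeholder: any small open model could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical enterprise corpus; the file stays on local infrastructure.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-domain-tuned",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # training runs on-premises, keeping the corpus private

The same pattern scales down to a single workstation or GPU, which is part of why domain-tuned SLMs are attractive for privacy-sensitive enterprise use.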

As AI technologies expand, so do concerns about cybersecurity and ethics. The rise of unsanctioned and unmanaged AI applications within organisations, also referred to as ‘Shadow AI’, poses challenges for security leaders in safeguarding against potential vulnerabilities.

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries. This shift is expected to bring significant operational benefits, including improved risk assessment and enhanced decision-making capabilities.

—————–Support the channel———–
Patreon: https://www.patreon.com/thedissenter.
PayPal: paypal.me/thedissenter.

——————Follow me on——————–
Facebook: https://www.facebook.com/thedissenteryt/
Twitter: https://twitter.com/TheDissenterYT

Dr. Alexander Rosenberg is the R. Taylor Cole Professor of Philosophy at Duke University. He has been a visiting professor and fellow at the Center for the Philosophy of Science at the University of Minnesota, as well as at the University of California, Santa Cruz, and Oxford University, and a visiting fellow of the Philosophy Department at the Research School of Social Science of the Australian National University. In 2016 he was the Benjamin Meaker Visiting Professor at the University of Bristol. He has held fellowships from the National Science Foundation, the American Council of Learned Societies, and the John Simon Guggenheim Foundation. In 1993, Dr. Rosenberg received the Lakatos Award in the philosophy of science. In 2006–2007 he held a fellowship at the National Humanities Center. He was also the Phi Beta Kappa-Romanell Lecturer for 2006–2007. He is the author of both fiction and non-fiction, including The Atheist’s Guide to Reality, The Girl from Krakow, and How History Gets Things Wrong.
In this episode, we focus on Dr. Rosenberg’s most recent book, How History Gets Things Wrong, and also a little bit on some of the topics of The Atheist’s Guide to Reality. We talk about the theory of mind, and how it evolved; the errors of narrative History, and the negative consequences it might produce; mind-brain dualism; what neuroscience tells us about how our brain and cognition operate; social science, biology, and evolution; the role that evolutionary game theory can play in explaining historical events and social phenomena; why beliefs, motivations, desires, and other mental constructs might not exist at all, and the implications for moral philosophy; whether AI could develop these same illusions; and nihilism.

Time Links:
01:17 What is theory of mind, and how did it evolve?
06:16 The problem with narrative History.
08:17 Is theory of mind problematic in modern societies?
11:41 The issue with mind-brain dualism.
13:23 The concept of “aboutness”
15:36 Neuroscience, and no content in the brain.
22:21 What “causes” historical events?
28:09 Why the social sciences need more biology and evolution.
37:13 Evolutionary game theory, and understanding social phenomena.
41:06 The implications for moral philosophy of not having beliefs.
44:34 About “moral progress”
47:41 The usefulness of thought experiments in Philosophy.
49:58 The theory of mind will not be going away anytime soon.
51:37 Could AI systems have these same cognitive illusions?
53:13 A note on nihilism and morality.
57:38 Follow Dr. Rosenberg’s work!

Follow Dr. Rosenberg’s work:
Faculty page: https://tinyurl.com/ydby3b5f.
Website: http://www.alexrose46.com/
Books: https://tinyurl.com/yag2n2fn.

A HUGE THANK YOU TO MY PATRONS: KARIN LIETZCKE, ANN BLANCHETTE, BRENDON J. BREWER, JUNOS, SCIMED, PER HELGE HAAKSTD LARSEN, LAU GUERREIRO, RUI BELEZA, MIGUEL ESTRADA, ANTÓNIO CUNHA, CHANTEL GELINAS, JIM FRANK, AND JERRY MULLER!

I also leave you with the link to a recent montage video I did with the interviews I have released up to the end of June 2018:
https://youtu.be/efdb18WdZUo.

And check out my playlists on:

The 3 Body Problem Explored: Cosmic Sociology, Longtermism & Existential Risk — round table discussion with three great minds: Robin Hanson, Anders Sandberg and Joscha Bach — moderated by Adam Ford (SciFuture) and James Hughes (IEET).

Some of the items discussed:
- How can narratives that keep people engaged avoid falling short on realism?
- In what ways is AI superintelligence kept off-stage to allow a narrative that is familiar and easier to make sense of?
- Differences in moral perspectives — moral realism, existentialism and anti-realism.
- Will values of advanced civilisations converge to a small number of possibilities, or will they vary greatly?
- How much will competition be the dominant dynamic in the future, compared to co-ordination?
- In a competitive dynamic, will defense or offense be the more dominant strategy?

Many thanks for tuning in!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9PIfq2ZYlQsXRIn5BcLH2onbiSI7g79mOH_AFCdIk/

Kind regards.
Adam Ford.
- Science, Technology & the Future — #SciFuture — http://scifuture.org

Michael Levin is a Distinguished Professor in the Biology department at Tufts University and associate faculty at the Wyss Institute for Bioinspired Engineering at Harvard University. @drmichaellevin holds the Vannevar Bush endowed Chair and serves as director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. Prior to college, Michael Levin worked as a software engineer and independent contractor in the field of scientific computing. He attended Tufts University with an interest in artificial intelligence and unconventional computation. To explore the algorithms by which the biological world implements complex adaptive behavior, he earned dual B.S. degrees, in computer science and in biology, and then received a PhD from Harvard University. He did post-doctoral training at Harvard Medical School, where he began to uncover a new bioelectric language by which cells coordinate their activity during embryogenesis. His independent laboratory develops new molecular-genetic and conceptual tools to probe large-scale information processing in regeneration, embryogenesis, and cancer suppression.

TIMESTAMPS:
0:00 — Introduction.
1:41 — Creating High-level General Intelligences.
7:00 — Ethical implications of Diverse Intelligence beyond AI & LLMs.
10:30 — Solving the Fundamental Paradox that faces all Species.
15:00 — Evolution creates Problem Solving Agents & the Self is a Dynamical Construct.
23:00 — Mike on Stephen Grossberg.
26:20 — A Formal Definition of Diverse Intelligence (DI)
30:50 — Intimate relationships with AI? Importance of Cognitive Light Cones.
38:00 — Cyborgs, hybrids, chimeras, & a new concept called “Synthbiosis”
45:51 — Importance of the symbiotic relationship between Science & Philosophy.
53:00 — The Space of Possible Minds.
58:30 — Is Mike Playing God?
1:02:45 — A path forward: through the ethics filter for civilization.
1:09:00 — Mike on Daniel Dennett (RIP)
1:14:02 — An Ethical Synthbiosis that goes beyond “are you real or faking it”
1:25:47 — Conclusion.

EPISODE LINKS:
- Mike’s Round 1: https://youtu.be/v6gp-ORTBlU
- Mike’s Round 2: https://youtu.be/kMxTS7eKkNM
- Mike’s Channel: https://www.youtube.com/@drmichaellevin.
- Mike’s Website: https://drmichaellevin.org/
- Blog Website: https://thoughtforms.life.
- Mike’s Twitter: https://twitter.com/drmichaellevin.
- Mike’s Publications: https://scholar.google.com/citations?user=luouyakAAAAJ&hl=en.
- Mike’s NOEMA piece: https://www.noemamag.com/ai-could-be-a-bridge-toward-diverse-intelligence/
- Stephen Grossberg: https://youtu.be/bcV1eSgByzg.
- Mark Solms: https://youtu.be/rkbeaxjAZm4
- VPRO Roundtable: https://youtu.be/RVrnn7QW6Jg?feature=shared.

CONNECT:
- Website: https://tevinnaidu.com.
- Podcast: https://podcasters.spotify.com/pod/show/drtevinnaidu.
- Twitter: https://twitter.com/drtevinnaidu.
- Facebook: https://www.facebook.com/drtevinnaidu.
- Instagram: https://www.instagram.com/drtevinnaidu.
- LinkedIn: https://www.linkedin.com/in/drtevinnaidu.

Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor substitute for, professional or medical advice. We do not accept any liability for any loss or damage incurred from your acting or not acting as a result of listening to or watching any of our content. You acknowledge that you use the information provided at your own risk. Listeners and viewers are advised to conduct their own research and consult with their own experts in the respective fields.

#MichaelLevin #DiverseIntelligence #AI #Mind

R.I.P. Philip George Zimbardo. March 23, 1933 – October 14, 2024.

“Success is not about reaching a destination; it’s about the journey and the person you become along the way.”


Philip G. Zimbardo, one of the world’s most renowned psychologists, died Oct. 14 in his home in San Francisco. He was 91.

Broadly, Zimbardo’s research explored how environments influence behavior. He is best known for his controversial 1971 study, the Stanford Prison Experiment, conducted with W. Curtis Banks, Craig Haney, and David Jaffe. The study, intended to examine the psychological experiences of imprisonment, revealed the shocking extent to which circumstances can alter individual behavior. To this day, it is used as a case study in psychology classes to highlight both the psychology of evil and the ethics of conducting psychological research with human subjects.

Yet Zimbardo’s research went far beyond the prison experiment. In a career that spanned over five decades, Zimbardo examined topics including persuasion, attitude change, cognitive dissonance, hypnosis, cults, alienation, shyness, time perspective, altruism, and compassion.

The pace of engineering and science is speeding up, rapidly leading us toward a “Technological Singularity”: a point in time when superintelligent machines achieve and improve so much, so fast, that traditional humans can no longer operate at the forefront. However, if all goes well, human beings may still flourish greatly in their own ways in this unprecedented era.

If humanity is going to not only survive but prosper as the Singularity unfolds, we will need to understand that the Technological Singularity is an Experiential Singularity as well, and rapidly evolve not only our technology but our level of compassion, ethics and consciousness.

The aim of The Consciousness Explosion is to help curious and open-minded readers wrap their brains around these dramatic emerging changes, and to empower them with tools to cope and thrive as those changes unfold.