
Renowned researchers David Chalmers and Anil Seth join Brian Greene to explore how far science and philosophy have gone toward explaining the greatest of all mysteries, consciousness, and whether artificially intelligent systems may one day possess it.

This program is part of the Big Ideas series, supported by the John Templeton Foundation.

Participants:
David Chalmers
Anil Seth

Moderator:
Brian Greene

Imagine a world where machines not only understand our words, but grasp the nuances of our emotions, anticipate our needs, and even surpass our own intelligence. This is the dream, and perhaps the near reality, of Artificial General Intelligence (AGI).

For many years, the idea of achieving AGI existed only in the realm of science fiction, a futuristic vision in which machines seamlessly integrate into our lives. That perception is changing. Advances in AI technology are blurring the line between fiction and reality, prompting both excitement and apprehension about AGI’s potential impact on society.

In this blog post, we’ll embark on a journey through the fascinating world of AGI, surveying the current state of AI and the significant innovations inching us toward it.


Experts from the Universities of East Anglia, Sheffield, and Leeds have developed a groundbreaking AI method that improves the accuracy and efficiency of analyzing MRI heart scans. The innovation could enable faster, more accurate, and non-invasive diagnosis of heart failure and other cardiac conditions, saving valuable time and resources for the healthcare sector.

According to Innovation News Network, the research team used data from 814 patients at Sheffield and Leeds Teaching Hospitals to train an AI model, which was then tested using scans and data from 101 patients at Norfolk and Norwich University Hospitals to ensure accuracy.

Large language models trained on religious texts claim to offer spiritual insights on demand. What could go wrong?

By Webb Wright

Just before midnight on the first day of Ramadan last year, Raihan Khan, a 20-year-old Muslim student living in Kolkata, announced in a LinkedIn post that he had launched QuranGPT, an artificial-intelligence-powered chatbot he had designed to answer questions and provide advice based on Islam’s holiest text. Then he went to sleep. He awoke seven hours later to find that the chatbot had crashed under an overflow of traffic. Many of the comments were positive, but others were not, and some were flat-out threatening.