
Artificial general intelligence through an AI photonic chip


The pursuit of artificial general intelligence (AGI) continuously demands higher computing performance. Despite the superior processing speed and efficiency of integrated photonic circuits, their capacity and scalability are restricted by unavoidable errors, such that only simple tasks and shallow models are realized. To support modern AGIs, we designed Taichi—large-scale photonic chiplets based on an integrated diffractive-interference hybrid design and a general distributed computing architecture that has millions-of-neurons capability with 160-tera-operations-per-second-per-watt (TOPS/W) energy efficiency. Taichi experimentally achieved on-chip 1000-category-level classification (testing at 91.89% accuracy in the 1623-category Omniglot dataset) and high-fidelity artificial intelligence–generated content with up to two orders of magnitude of improvement in efficiency.
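The abstract is terse about what a "diffractive-interference hybrid" actually computes, so here is a minimal numerical sketch of the general idea, not of Taichi's actual chiplet: a coarse diffractive stage (a trainable phase mask followed by free-space propagation, simplified here to a Fourier transform) feeds a fine interference stage (an interferometer mesh, modeled as a unitary matrix), and photodetectors read out intensities. The mode count, the FFT propagation model, and the random unitary are all illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffractive_layer(field, phase_mask):
    """Apply a trainable phase mask, then model free-space diffraction
    with a Fourier transform (a common simplification; Taichi's actual
    propagation model may differ)."""
    return np.fft.fft(field * np.exp(1j * phase_mask), norm="ortho")

def interference_layer(field, unitary):
    """Model an on-chip interferometer mesh as a unitary matrix
    acting on the optical field."""
    return unitary @ field

def random_unitary(n):
    """Draw a random unitary via QR decomposition (stand-in for a
    trained mesh configuration)."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

n = 64                                # number of optical modes (illustrative)
x = rng.normal(size=n)                # input encoded as field amplitudes
phase = rng.uniform(0, 2 * np.pi, n)  # stand-in for trained phase values
u = random_unitary(n)

field = diffractive_layer(x.astype(complex), phase)  # coarse diffractive stage
field = interference_layer(field, u)                 # fine interference stage
intensity = np.abs(field) ** 2                       # photodetector readout

print(intensity[:5])
```

Roughly, the division of labor the hybrid design suggests is that diffraction provides cheap, passive large-scale fan-out while the interferometer mesh provides reconfigurable, precise weighting; the sketch above mimics that split in software.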

On the day of the GPT-4o announcement, Sam Altman sat down to share behind-the-scenes details of the launch and offer his predictions for the future of AI. Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also shares his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in today’s AI landscape, and much more.

(00:00) Intro
(00:50) The Personal Impact of Leading OpenAI
(01:44) Unveiling Multimodal AI: A Leap in Technology
(02:47) The Surprising Use Cases and Benefits of Multimodal AI
(03:23) Behind the Scenes: Making Multimodal AI Possible
(08:36) Envisioning the Future of AI in Communication and Creativity
(10:21) The Business of AI: Monetization, Open Source, and Future Directions
(16:42) AI’s Role in Shaping Future Jobs and Experiences
(20:29) Debunking AGI: A Continuous Journey Towards Advanced AI
(24:04) Exploring the Pace of Scientific and Technological Progress
(24:18) The Importance of Interpretability in AI
(25:11) Navigating AI Ethics and Regulation
(27:26) The Safety Paradigm in AI and Beyond
(28:55) Personal Reflections and the Impact of AI on Society
(29:11) The Future of AI: Fast Takeoff Scenarios and Societal Changes
(30:59) Navigating Personal and Professional Challenges
(40:21) The Role of AI in Creative and Personal Identity
(43:09) Educational System Adaptations for the AI Era
(44:30) Contemplating the Future with Advanced AI

Executive Producer: Rashad Assir.
Producer: Leah Clapper.
Mixing and editing: Justin Hrabovsky.

Check out Unsupervised Learning, Redpoint’s AI Podcast: @redpointai

Yet another OpenAI executive has been caught lacking on camera when asked if the company’s new Sora video generator was trained using YouTube videos.

During a recent talk at Bloomberg’s Tech Summit in San Francisco, OpenAI chief operating officer Brad Lightcap launched into a rambling, word-vomit-style monologue in an attempt to deflect questions about Sora’s training data.

“Can you say, and clear up once and for all, whether Sora was trained on YouTube data?” Bloomberg’s Shirin Ghaffary asked the COO, prompting a wordy non-response.