May 15, 2024
Tesla’s FSD 12.4 Update: 5-10x Improvement in Autonomous Driving Technology
Posted by Chris Smedley in categories: Elon Musk, robotics/AI, transportation
Brighter with Herbert.
AI chip that mimics the human brain (2023).
Designing efficient in-memory-computing architectures remains a challenge. Here the authors develop a multi-level FeFET crossbar for multi-bit MAC operations, encoded in activation time and accumulated current, with experimental validation at 28 nm achieving 96.6% accuracy and a high energy efficiency of 885 TOPS/W.
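The core idea, a MAC result encoded in activation time and accumulated current, can be illustrated with a small simulation. This is a hypothetical sketch of the general time-domain analog-MAC principle, not the paper's actual circuit: the function name, read voltage, and conductance values are illustrative assumptions.

```python
def analog_mac(inputs, conductances, dt=1e-9, v_read=0.1):
    """Accumulate charge Q = sum_i V * G_i * t_i, where each multi-bit
    input x_i sets the activation time t_i = x_i * dt and each weight
    is stored as a multi-level cell conductance G_i."""
    charge = 0.0
    for x, g in zip(inputs, conductances):
        t_active = x * dt            # input value -> activation duration
        i_cell = v_read * g          # Ohm's law: per-cell read current
        charge += i_cell * t_active  # charge accumulated on an integrator
    return charge

# The accumulated charge is proportional to the digital dot product,
# scaled by the read voltage and the time step.
inputs = [3, 1, 2]                  # 2-bit activations
conductances = [2e-6, 4e-6, 1e-6]   # cell conductances in siemens
q = analog_mac(inputs, conductances)
ref = sum(x * g for x, g in zip(inputs, conductances)) * 0.1 * 1e-9
```

In hardware, `charge` would be digitized by an ADC; here the digital reference `ref` confirms the analog readout recovers the dot product up to a known scale factor.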
Artificial general intelligence through an AI photonic chip.
The pursuit of artificial general intelligence (AGI) continuously demands higher computing performance. Despite the superior processing speed and efficiency of integrated photonic circuits, their capacity and scalability are restricted by unavoidable errors, such that only simple tasks and shallow models are realized. To support modern AGIs, we designed Taichi—large-scale photonic chiplets based on an integrated diffractive-interference hybrid design and a general distributed computing architecture that has millions-of-neurons capability with 160–tera-operations per second per watt (TOPS/W) energy efficiency. Taichi experimentally achieved on-chip 1000-category–level classification (testing at 91.89% accuracy in the 1623-category Omniglot dataset) and high-fidelity artificial intelligence–generated content with up to two orders of magnitude of improvement in efficiency.
Jan Leike, co-lead of OpenAI's Superalignment team, posted on X, "I resigned," just the day before the GPT-4o launch.
OpenAI co-founder Ilya Sutskever and another executive leave the company as GPT-4o launches.
OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
On the day of the GPT-4o announcement, Sam Altman sat down to share behind-the-scenes details of the launch and offer his predictions for the future of AI. Altman delves into OpenAI's vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in the AI landscape today, and much more.
(00:00) Intro
(00:50) The Personal Impact of Leading OpenAI
(01:44) Unveiling Multimodal AI: A Leap in Technology
(02:47) The Surprising Use Cases and Benefits of Multimodal AI
(03:23) Behind the Scenes: Making Multimodal AI Possible
(08:36) Envisioning the Future of AI in Communication and Creativity
(10:21) The Business of AI: Monetization, Open Source, and Future Directions
(16:42) AI's Role in Shaping Future Jobs and Experiences
(20:29) Debunking AGI: A Continuous Journey Towards Advanced AI
(24:04) Exploring the Pace of Scientific and Technological Progress
(24:18) The Importance of Interpretability in AI
(25:11) Navigating AI Ethics and Regulation
(27:26) The Safety Paradigm in AI and Beyond
(28:55) Personal Reflections and the Impact of AI on Society
(29:11) The Future of AI: Fast Takeoff Scenarios and Societal Changes
(30:59) Navigating Personal and Professional Challenges
(40:21) The Role of AI in Creative and Personal Identity
(43:09) Educational System Adaptations for the AI Era
(44:30) Contemplating the Future with Advanced AI
Continue reading “Sam Altman talks GPT-4o and Predicts the Future of AI” »
Yet another OpenAI executive has been caught lacking on camera when asked if the company’s new Sora video generator was trained using YouTube videos.
During a recent talk at Bloomberg's Tech Summit in San Francisco, OpenAI chief operating officer Brad Lightcap launched into a rambling, evasive monologue in an attempt to deflect questions about Sora's training data.
Continue reading “Another OpenAI Executive Choked When Asked If Sora Was Trained on YouTube Data” »
Researchers have leveraged deep learning techniques to enhance the image quality of a metalens camera. The new approach uses artificial intelligence to turn low-quality images into high-quality ones, which could make these cameras viable for a multitude of imaging tasks including intricate microscopy applications and mobile devices.
“It is nonsensical to say that an LLM has feelings,” Hagendorff says. “It is nonsensical to say that it is self-aware or that it has intentions. But I don’t think it is nonsensical to say that these machines are able to learn or to deceive.”
Brain scans
Other researchers are taking tips from neuroscience to explore the inner workings of LLMs. To examine how chatbots deceive, Andy Zou, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and his collaborators interrogated LLMs and looked at the activation of their ‘neurons’. “What we do here is similar to performing a neuroimaging scan for humans,” Zou says. It’s also a bit like designing a lie detector.
For decades, philosopher Nick Bostrom (director of the Future of Humanity Institute at Oxford) has led the conversation around technology and human experience (and grabbed the attention of the tech titans who are developing AI – Bill Gates, Elon Musk, and Sam Altman).
Now, a decade after his NY Times bestseller Superintelligence warned us of what could go wrong with AI development, he flips the script in his new book Deep Utopia: Life and Meaning in a Solved World (March 27), asking us to instead consider "What could go well?"
Ronan recently spoke to Professor Nick Bostrom.