Four research papers and technological advancements over the last four weeks have, in combination, drastically changed my outlook on the AGI timeline.
GPT-4 can teach itself to become better through self-reflection, learn to use tools with minimal demonstrations, act as a central brain that outsources tasks to other models (HuggingGPT), and behave as an autonomous agent pursuing a multi-step goal without human intervention (Auto-GPT). It is not an overstatement to say there are already Sparks of AGI.
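To give a feel for the self-reflection result, here is a minimal sketch of the idea. It assumes a hypothetical llm(prompt) helper that wraps a GPT-4 call; the actual papers use more elaborate prompts, memory, and evaluation, so treat this as an illustration rather than their method.

```python
def llm(prompt: str) -> str:
    """Placeholder for a GPT-4 API call (hypothetical helper)."""
    raise NotImplementedError

def solve_with_reflection(task: str, max_rounds: int = 3) -> str:
    """Let the model critique and revise its own answer a few times."""
    answer = llm(f"Solve the following task:\n{task}")
    for _ in range(max_rounds):
        critique = llm(
            f"Task: {task}\nAttempt: {answer}\n"
            "Point out any mistakes, or reply DONE if the attempt is correct."
        )
        if "DONE" in critique:
            break
        # Feed the model's own critique back in and ask for a revision.
        answer = llm(
            f"Task: {task}\nPrevious attempt: {answer}\n"
            f"Critique: {critique}\nProduce an improved answer."
        )
    return answer
```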
U.S. DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program recently conducted its third experiment to assess the performance of off-road unmanned vehicles. These test runs, conducted March 12–27, included the first with completely uninhabited RACER Fleet Vehicles (RFVs), with a safety operator overseeing from a supporting chase vehicle. The goal of the RACER program is to demonstrate autonomous movement of combat-scale vehicles in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions. The multiple courses ran through the challenging and unforgiving terrain of the Mojave Desert at the U.S. Army’s National Training Center (NTC) in Ft. Irwin, California. As at the previous events, teams from Carnegie Mellon University, NASA’s Jet Propulsion Laboratory, and the University of Washington participated. This experiment completed the project’s first phase.
“At Experiment Three, we successfully demonstrated significant improvements in our off-road speeds while simultaneously reducing any interaction with the vehicle during test runs. We were also honored to have representatives from the Army and Marine Corps at the experiment to facilitate transition of technologies developed in RACER to future service unmanned initiatives and concepts,” said Stuart Young, RACER program manager in DARPA’s Tactical Technology Office.
“We provided the performers RACER fleet vehicles with common performance, sensing, and compute. This enables us to evaluate the performance of the performer team autonomy software in similar environments and compare it to human performance,” said Young. “During this latest experiment, we continued to push vehicle limits in perceiving the environments to greater distances, enabling further increase in speeds and better adaptation to newly encountered environmental conditions that will continue into RACER’s next phase.”
Despite the availability of imaging-based and mass-spectrometry-based methods for spatial proteomics, a key challenge remains: connecting images with single-cell-resolution protein abundance measurements. Deep Visual Proteomics (DVP), a recently introduced method, combines artificial-intelligence-driven image analysis of cellular phenotypes with automated single-cell or single-nucleus laser microdissection and ultra-high-sensitivity mass spectrometry. DVP links protein abundance to complex cellular or subcellular phenotypes while preserving spatial context.
GPT-4 is wonderful, but one thing it lacks is sentience, which could carry on this work for millions of years; essentially, we would not need to make every discovery ourselves.
Autonomous agents are, at this early stage, mainly experimental, and they have some serious limitations that prevent them from getting what they want from large language models…
But that’s not true: there are concrete things regulators can do right now to prevent tech companies from releasing risky systems.
In a new report, the AI Now Institute — a research center studying the social implications of artificial intelligence — offers a roadmap that specifies exactly which steps policymakers can take. It’s refreshingly pragmatic and actionable, thanks to the government experience of authors Amba Kak and Sarah Myers West. Both former advisers to Federal Trade Commission chair Lina Khan, they focus on what regulators can realistically do today.
The big argument is that if we want to curb AI harms, we need to curb the concentration of power in Big Tech.
Let us know how we’re doing! We will be giving out discount codes to a select number of people who fill out the survey: https://forms.gle/ArNXCmkZc6YwwyXD7
Looking to connect with fellow learners, share projects, and swap advice? Join our AI community:
Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.
OUTLINE:
0:00 Introduction
1:56 Intelligent alien civilizations
14:20 Life 3.0 and superintelligent AI
25:47 Open letter to pause Giant AI Experiments
50:54 Maintaining control
1:19:44 Regulation
1:30:34 Job automation
1:39:48 Elon Musk
2:01:31 Open source
2:08:01 How AI may kill all humans
2:18:32 Consciousness
2:27:54 Nuclear winter
2:38:21 Questions for AGI
If people are worried that ChatGPT could take their jobs, they haven’t seen Auto-GPT yet.
Auto-GPT is an AI chatbot similar to ChatGPT and others. It is based on OpenAI’s GPT-4 language model, the same LLM that powers ChatGPT. But, as its full name, “Autonomous Artificial Intelligence Chat Generative Pre-trained Transformer,” implies, it goes a step further. What exactly is it? Let us go through what Auto-GPT is and how it works.
What is Auto-GPT?
Essentially, Auto-GPT is a chatbot: you ask it questions and it answers smartly. But unlike ChatGPT and other GPT-based chatbots, which need a prompt at every step, Auto-GPT can automate the whole task, so you do not need to keep prompting it. Once given a goal, Auto-GPT figures out the intermediate steps on its own to reach it.
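To make that concrete, here is a minimal sketch of what such an autonomous loop can look like. This is an illustration only, not Auto-GPT’s actual code: it assumes a hypothetical llm(prompt) helper that wraps a GPT-4 call, and it assumes the model complies with the requested JSON format.

```python
import json

def llm(prompt: str) -> str:
    """Placeholder for a GPT-4 API call (hypothetical helper)."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Plan-act-repeat loop: the model's own output picks each next step."""
    history: list[str] = []  # results of completed steps, fed back as context
    for _ in range(max_steps):
        reply = llm(
            f"Goal: {goal}\n"
            f"Completed steps: {json.dumps(history)}\n"
            'Reply in JSON as {"thought": "...", "next_step": "..."},'
            ' or {"done": true} if the goal is reached.'
        )
        decision = json.loads(reply)  # assumes the model returned valid JSON
        if decision.get("done"):
            break
        # Carry out the proposed step. A real agent would also call tools
        # (web search, file I/O, code execution); here the model just reports.
        result = llm(f"Carry out this step and report the result: {decision['next_step']}")
        history.append(f"{decision['next_step']} -> {result}")
    return history
```

The key difference from a plain chatbot is the loop: after the initial goal, the model’s own output determines every subsequent prompt, which is what lets Auto-GPT pursue a multi-step task without a human in between.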