
U.S. DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program recently conducted its third experiment to assess the performance of off-road unmanned vehicles. These test runs, conducted March 12–27, included the first with completely uninhabited RACER Fleet Vehicles (RFVs), with a safety operator overseeing from a supporting chase vehicle. The goal of the RACER program is to demonstrate autonomous movement of combat-scale vehicles in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions. The courses were laid out in the challenging and unforgiving terrain of the Mojave Desert at the U.S. Army’s National Training Center (NTC) in Fort Irwin, California. As at the previous events, teams from Carnegie Mellon University, NASA’s Jet Propulsion Laboratory, and the University of Washington participated. This experiment completed the project’s first phase.

“At Experiment Three, we successfully demonstrated significant improvements in our off-road speeds while simultaneously reducing any interaction with the vehicle during test runs. We were also honored to have representatives from the Army and Marine Corps at the experiment to facilitate transition of technologies developed in RACER to future service unmanned initiatives and concepts,” said Stuart Young, RACER program manager in DARPA’s Tactical Technology Office.

“We provided the performers RACER fleet vehicles with common performance, sensing, and compute. This enables us to evaluate the performance of the performer team autonomy software in similar environments and compare it to human performance,” Young said. “During this latest experiment, we continued to push vehicle limits in perceiving the environments to greater distances, enabling further increase in speeds and better adaptation to newly encountered environmental conditions that will continue into RACER’s next phase.”

Despite the availability of imaging-based and mass-spectrometry-based methods for spatial proteomics, a key challenge remains connecting images with single-cell-resolution protein abundance measurements. Deep Visual Proteomics (DVP), a recently introduced method, combines artificial-intelligence-driven image analysis of cellular phenotypes with automated single-cell or single-nucleus laser microdissection and ultra-high-sensitivity mass spectrometry. DVP links protein abundance to complex cellular or subcellular phenotypes while preserving spatial context.

GPT-4 is remarkable, but one thing it lacks is sentience: the kind of autonomous agency that could carry on work for millions of years, so that we would not need to make every discovery ourselves.


Richards said he originally wanted an AI agent to automatically email him daily AI news. But, as he told Motherboard, he realized in the process that existing LLMs struggle with “tasks that require long-term planning,” or are “unable to autonomously refine their approaches based on real-time feedback.” That understanding inspired him to create Auto-GPT, which, he said, “can apply GPT4’s reasoning to broader, more complex problems that require long-term planning and multiple steps.” (Richards didn’t respond to a request for an interview with Fast Company.)

“They get confused”

Autonomous agents, at this early stage, are mainly experimental. And they have some serious limitations that prevent them from getting what they want from large language models…

But that’s not true. There are concrete things regulators can do right now to prevent tech companies from releasing risky systems.

In a new report, the AI Now Institute — a research center studying the social implications of artificial intelligence — offers a roadmap that specifies exactly which steps policymakers can take. It’s refreshingly pragmatic and actionable, thanks to the government experience of authors Amba Kak and Sarah Myers West. Both former advisers to Federal Trade Commission chair Lina Khan, they focus on what regulators can realistically do today.

The big argument is that if we want to curb AI harms, we need to curb the concentration of power in Big Tech.

Join us for a conversation with Andrew Ng and Yann LeCun as they discuss the proposal of a 6-month moratorium on generative AI.

We will be taking questions during the event. Please submit your question or upvote others’ here:
https://app.sli.do/event/9yGgPaweRK9Cbo8wsqV6oq/live/questions

Speakers:
Yann LeCun, VP & Chief AI Scientist at Meta and Silver Professor at NYU
https://www.linkedin.com/in/yann-lecun/

Andrew Ng, Founder of DeepLearning.AI

Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Please support this podcast by checking out our sponsors:
- Notion: https://notion.com
- InsideTracker: https://insidetracker.com/lex to get 20% off.
- Indeed: https://indeed.com/lex to get $75 credit.

EPISODE LINKS:
Max’s Twitter: https://twitter.com/tegmark
Max’s Website: https://space.mit.edu/home/tegmark
Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/pause-giant-ai-experiments
Future of Life Institute: https://futureoflife.org
Books and resources mentioned:
1. Life 3.0 (book): https://amzn.to/3UB9rXB
2. Meditations on Moloch (essay): https://slatestarcodex.com/2014/07/30/meditations-on-moloch
3. Nuclear winter paper: https://nature.com/articles/s43016-022-00573-0

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41


If people are worried that ChatGPT could take their jobs, they haven’t seen Auto-GPT yet.


Auto-GPT is an AI chatbot similar to ChatGPT and others. It is based on OpenAI’s GPT-4, the same LLM that powers ChatGPT. But as its full name, “Autonomous Artificial Intelligence Chat Generative Pre-trained Transformer,” implies, it goes a step further. What exactly is it? Let us go through what Auto-GPT is and how it works.

What is Auto-GPT?

Essentially, Auto-GPT is a chatbot: you ask it questions and it answers them intelligently. But unlike ChatGPT and other GPT-based chatbots, which need a prompt at every step, Auto-GPT can automate an entire task, so you do not need to prompt it repeatedly. Once given a goal, Auto-GPT figures out the steps on its own to reach it.
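The loop described above can be sketched in a few lines of Python. Everything here is illustrative, not Auto-GPT’s actual implementation: the `llm` function is a hypothetical stand-in for a real model call (Auto-GPT itself calls the OpenAI API), and the canned plan merely stands in for GPT-4’s step-by-step reasoning.

```python
# Minimal sketch of an Auto-GPT-style autonomous loop (illustrative only).

def llm(prompt: str) -> str:
    """Hypothetical LLM call: returns the next step for the given prompt.
    A real agent would send the prompt to a model such as GPT-4."""
    canned_plan = ["search the web for sources", "summarize findings", "DONE"]
    # Pick the next step based on how many steps already appear in the prompt.
    steps_taken = prompt.count("step:")
    return canned_plan[min(steps_taken, len(canned_plan) - 1)]

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Given one high-level goal, repeatedly ask the model for the next
    step, 'execute' it, and feed the result back -- no per-step prompting."""
    history = f"goal: {goal}\n"
    completed = []
    for _ in range(max_steps):
        step = llm(history)
        if step == "DONE":          # the model decides the goal is reached
            break
        completed.append(step)
        history += f"step: {step}\n"  # feed progress back into the next prompt
    return completed

print(run_agent("compile today's AI news"))
# → ['search the web for sources', 'summarize findings']
```

The key design point is the feedback loop: the model’s own output is appended to the prompt for the next iteration, which is what lets the agent plan multiple steps from a single user instruction.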

The world’s preeminent linguist has spoken — and he seems mighty tired of everyone’s whining about artificial intelligence as it stands today.

In an op-ed for the New York Times, Noam Chomsky said that although the current spate of AI chatbots such as OpenAI’s ChatGPT and Microsoft’s Bing AI “have been hailed as the first glimmers on the horizon of artificial general intelligence” — the point at which AIs are able to think and act in ways superior to humans — we absolutely are not anywhere near that level yet.

“That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments,” the Massachusetts Institute of Technology cognitive scientist mused.