A team from the University of Geneva (UNIGE) has succeeded in modeling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a uniquely human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species which, to learn a new task, need numerous trials accompanied by positive or negative reinforcement signals, and cannot then communicate the task to their conspecifics.

A sub-field of artificial intelligence (AI), natural language processing, seeks to recreate this human faculty with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to one another in the brain. However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.
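To make the idea of an artificial neural network a little more concrete, here is a minimal sketch, not taken from the UNIGE study, of how an artificial neuron combines weighted input signals and passes the result through a nonlinearity, loosely echoing how biological neurons relay electrical signals. All weights, layer sizes, and function names below are illustrative assumptions.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs followed by a sigmoid 'firing' nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))            # output between 0 and 1

def tiny_layer(inputs, weight_rows, biases):
    """A layer is several neurons reading the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

if __name__ == "__main__":
    x = [0.2, 0.9]                                   # two input signals
    hidden = tiny_layer(x, [[0.5, -0.3], [0.8, 0.1]], [0.0, -0.2])
    print(hidden)                                    # activations passed on to the next layer
```

Real language-processing systems stack many such layers and learn the weights from data rather than fixing them by hand.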

Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies. Please support this podcast by checking out our sponsors:
- Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off.
- Shopify: https://shopify.com/lex to get $1 per month trial.
- BetterHelp: https://betterhelp.com/lex to get 10% off.
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free.

OUTLINE:
1:05 — OpenAI board saga
18:31 — Ilya Sutskever
24:40 — Elon Musk lawsuit
34:32 — Sora
44:23 — GPT-4
55:32 — Memory & privacy
1:02:36 — Q*
1:06:12 — GPT-5
1:09:27 — $7 trillion of compute
1:17:35 — Google and Gemini…

“More testing, a purification system and approval from the U.S. Food and Drug Administration would be needed to put the strategy to work. But insulin produced by transgenic cows could ease shortages that often make the hormone hard to come by for the 8.4 million Americans with diabetes who rely on it to survive.”


MONDAY, March 18, 2024 (HealthDay News) — There may be an unexpected fix for ongoing shortages of insulin: A brown bovine in Brazil recently made history as the first transgenic cow able to produce human insulin in her milk.

“Our research in the past decade has made analog memristor a viable technology,” said Dr. Qiangfei Xia. “It is time to move such a great technology into the semiconductor industry to benefit the broad AI hardware community.”


Digital computers have become the norm in our everyday lives, but they are reaching their limits in terms of computing power. Can analog computing step in and outperform them? This is the question a recent study published in Science hopes to address: a team of researchers from the University of Southern California, TetraMem Inc., and the University of Massachusetts Amherst (UMass Amherst) has spent the last decade developing memristors, devices capable of overcoming some of the limits of digital computing. The study holds the potential to help researchers develop more efficient ways of storing data without the drawbacks of holding so much of it that systems become clogged.

“In this work, we propose and demonstrate a new circuit architecture and programming protocol that can efficiently represent high-precision numbers using a weighted sum of multiple, relatively low-precision analog devices, such as memristors, with a greatly reduced overhead in circuitry, energy and latency compared with existing quantization approaches,” said Dr. Qiangfei Xia, who is a professor of Electrical & Computer Engineering at UMass Amherst and a co-author on the study.
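As a rough software analogy of the weighted-sum idea described in the quote, and not the team’s actual circuit or programming protocol, the sketch below splits a high-precision integer across several hypothetical low-precision “devices” and recombines them with fixed weights. The bit widths, device counts, and function names are assumptions for illustration only.

```python
def encode(value, bits_per_device=3, num_devices=4):
    """Split an integer into low-precision digits, most significant first."""
    levels = 2 ** bits_per_device                    # distinct states per device
    assert 0 <= value < levels ** num_devices, "value out of representable range"
    digits = []
    for i in reversed(range(num_devices)):
        weight = levels ** i                         # fixed weight of this device
        digits.append(value // weight)
        value %= weight
    return digits                                    # e.g. 4 devices x 3 bits = 12 bits total

def decode(digits, bits_per_device=3):
    """Recombine the low-precision digits as a weighted sum."""
    levels = 2 ** bits_per_device
    return sum(d * levels ** i for i, d in enumerate(reversed(digits)))

if __name__ == "__main__":
    x = 2763                                         # needs 12 bits of precision
    parts = encode(x)                                # each entry fits in a 3-bit device
    print(parts, decode(parts) == x)                 # [5, 3, 1, 3] True
```

In the analog hardware the quote describes, each low-precision digit would correspond to a device state such as a memristor conductance level, and the weighting and summation would be carried out by the circuit itself rather than in software.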

YouTube is now requiring creators to disclose to viewers when realistic content was made with AI, the company announced on Monday. The platform is introducing a new tool in Creator Studio that will require creators to disclose when content that viewers could mistake for a real person, place or event was created with altered or synthetic media, including generative AI.

The new disclosures are meant to prevent users from being duped into believing that a synthetically created video is real, as new generative AI tools are making it harder to differentiate between what’s real and what’s fake. The launch comes as experts have warned that AI and deepfakes will pose a notable risk during the upcoming U.S. presidential election.

Monday’s announcement follows YouTube’s statement in November that it would roll out the update as part of a larger introduction of new AI policies.