
‘Mind-reading’ AI: Japan study sparks ethical debate

Tokyo, Japan – Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy.’”
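The Al Jazeera piece doesn’t detail the method, but decoding studies of this kind typically work by fitting a simple linear mapping from fMRI voxel responses to the latent features of a pretrained image generator, which then renders the prediction as a picture. Below is a minimal sketch of that general recipe in Python; the array sizes, the Ridge regressor, and the synthetic data are illustrative assumptions, not the study’s actual code.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sizes only: real studies use thousands of voxels and
# the latent space of a pretrained image generator.
n_train, n_voxels, latent_dim = 800, 5000, 512

rng = np.random.default_rng(0)
fmri_train = rng.standard_normal((n_train, n_voxels))      # voxel responses while viewing images
latents_train = rng.standard_normal((n_train, latent_dim))  # latents of the viewed images

# Ridge regression is a common choice here: with far more voxels than
# samples, the L2 penalty keeps the linear mapping from overfitting.
decoder = Ridge(alpha=1000.0)
decoder.fit(fmri_train, latents_train)

# At test time, predict a latent vector from new brain activity;
# a pretrained generator would then turn that vector into an image.
fmri_test = rng.standard_normal((1, n_voxels))
predicted_latent = decoder.predict(fmri_test)
print(predicted_latent.shape)  # (1, 512)
```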


Large language models are drafting screenplays and writing code and cracking jokes. Image generators, such as Midjourney and DALL-E 2, are winning art prizes and democratizing interior design and producing dangerously convincing fabrications. They feel like magic. Meanwhile, the world’s most advanced robots are still struggling to open different kinds of doors. As in actual, physical doors. Chatbots, in the proper context, can be—and have been—mistaken for actual human beings; the most advanced robots still look more like mechanical arms appended to rolling tables. For now, at least, our dystopian near future looks a lot more like Her than M3GAN.

The counterintuitive notion that it’s harder to build artificial bodies than artificial minds is not a new one. In 1988, the computer scientist Hans Moravec observed that computers already excelled at tasks that humans tended to think of as complicated or difficult (math, chess, IQ tests) but were unable to match “the skills of a one-year-old when it comes to perception and mobility.” Six years later, the cognitive psychologist Steven Pinker offered a pithier formulation: “The main lesson of thirty-five years of AI research,” he wrote, “is that the hard problems are easy and the easy problems are hard.” This lesson is now known as “Moravec’s paradox.”

Eliezer Yudkowsky — Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
Apple Podcasts: https://apple.co/3mcPjON
Spotify: https://spoti.fi/3KDFzX9

Follow me on Twitter: https://twitter.com/dwarkesh_sp


The robots are already here

Actuator: What’s this ‘general purpose’ stuff I keep hearing about?


In a blog post published last week, Meta asks, “Where are the robots?” The answer is simple. They’re here. You just need to know where to look. It’s a frustrating answer. I recognize that. Let’s set aside conversations about cars and driver assistance and just focus on things we all tend to agree are robots. For starters, that Amazon delivery isn’t making it to you without robotic assistance.

A more pertinent question would be: Why aren’t there more robots? And more to the point, why aren’t there more robots in my house right now? It’s a complex question with a lot of nuance — much of it coming down to the hardware limitations that currently stand in the way of a “general purpose” robot. Roomba is a robot. There are a lot of Roombas in the world, and that’s largely because a Roomba does one thing well (and an additional decade of R&D has helped push that one thing well past “pretty good”).

It’s not so much that the premise of the question is flawed — it’s more a question of reframing it slightly. “Why aren’t there more robots?” is a perfectly valid question for a nonroboticist to ask. As a longtime hardware person, I usually start my answer there. I’ve had enough conversations over the past decade that I feel fairly confident I could monopolize the entire conversation discussing the many potential points of failure with a robot gripper.

Messenger adds multiplayer games you can play during video calls

Facebook Gaming, a division of Meta, has announced that you can now play games during video calls on Messenger. At launch, there are 14 free-to-play games available in Messenger video calls on iOS, Android and the web. The games include popular titles like Words With Friends, Card Wars, Exploding Kittens and Mini Golf FRVR.

To access the games, you need to start a video call on Messenger and tap the group mode button in the center, then tap on the “Play” icon. From there, you can browse through the games library. The company notes that there must be two or more people in your call to play games.

“Facebook Gaming is excited to announce that you can now play your favorite games during video calls on Messenger,” the company wrote in a blog post. “This new, shared experience in Messenger makes it easy to play games with friends and family while in a video call, allowing you to deepen connections with friends and family by engaging in conversations and gameplay at the same time.”

Mom, Dad, I Want To Be A Prompt Engineer

A new career is emerging with the spread of generative AI applications like ChatGPT: prompt engineering, the art (not science) of crafting effective instructions for AI models.

“In ten years, half of the world’s jobs will be in prompt engineering,” declared Robin Li, cofounder and CEO of Chinese AI giant Baidu. “And those who cannot write prompts will be obsolete.”

That may be a bit of big tech hyperbole, but there’s no doubt that prompt engineers will become the wizards of the AI world, coaxing and guiding AI models into generating content that is not only relevant but also coherent and consistent with the desired output.
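As a concrete illustration of the craft, here is a minimal Python sketch contrasting a vague prompt with a structured one that pins down role, task, audience, format, and constraints. The template fields and wording are illustrative assumptions, not any industry standard.

```python
# A vague prompt leaves the model to guess intent, audience, and format.
vague_prompt = "Write about electric cars."

# A structured prompt pins all of that down; the fields chosen here
# are illustrative, not a standard.
def build_prompt(role: str, task: str, audience: str,
                 output_format: str, constraints: list[str]) -> str:
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        f"Audience: {audience}",
        f"Format: {output_format}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

structured_prompt = build_prompt(
    role="an automotive journalist",
    task="Summarize the trade-offs of owning an electric car.",
    audience="first-time car buyers",
    output_format="five bullet points, one sentence each",
    constraints=["no jargon", "do not cite statistics you are unsure of"],
)
print(structured_prompt)
```

The structured version gives the model far less to guess about, which is much of what practitioners mean by prompt engineering.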

Meta’s New AI Tool Makes It Easier For Researchers To Analyze Photos

The announcement comes as the social media giant increasingly diverts its attention from creating a virtual reality-based Metaverse to embedding AI features across its platforms, including Instagram, Facebook, Messenger and WhatsApp.

Editing photos, analyzing surveillance footage and understanding the parts of a cell all have one thing in common: you need to be able to identify and separate different objects within an image. Traditionally, researchers have had to start from scratch each time they want to analyze a new part of an image.

Meta aims to change this laborious process by becoming the one-stop shop for researchers and web developers working on such problems.
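The article doesn’t name the tool, but the description matches Meta’s open-source Segment Anything Model (SAM), released around the same time. Assuming that is the tool in question, the sketch below shows promptable segmentation with the segment-anything Python package: a single foreground click asks the model for candidate masks around the object beneath it. The checkpoint filename, image path, and click coordinates are placeholders.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM checkpoint (filename is a placeholder; the
# weights are downloaded separately from Meta's release).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SamPredictor expects an RGB image as a NumPy array.
image = cv2.cvtColor(cv2.imread("cells.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (x, y) is the "prompt"; label 1 marks foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks with scores
)
print(masks.shape, scores)  # (3, H, W) boolean masks and their quality scores
```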

AI-generated music ‘inferior to human-composed’ work, finds study

Artificial intelligence has become the world’s latest buzzword, and experts have been busy demonstrating its capabilities in virtually every field, including music. It appears, however, that AI did not fare well when it came to generating music.

Researchers at the University of York recruited 50 participants for the study, all with a strong understanding of music, particularly musical notes and other essential components.



According to the University of York study, AI-generated music is “inferior to human-composed music.”
