
Whipping Up Worlds

Because the AI can learn from unlabeled online videos and is still a modest size—just 11 billion parameters—there’s ample opportunity to scale up. Bigger models trained on more information tend to improve dramatically. And with a growing industry focused on inference—the process by which a trained AI performs tasks, like generating images or text—it’s likely to get faster.

DeepMind says Genie could help people, like professional developers, make video games. But like OpenAI—which believes Sora is about more than videos—the team is thinking bigger. The approach could go well beyond video games.

Saudi Arabia’s first male robot, ‘Muhammad’, allegedly touched a woman inappropriately, sparking outrage on social media, with one user calling it a “pervert”. Muhammad was recently unveiled during the second edition of DeepFast in Riyadh.

Video of the incident went viral on social media. It shows the robot stretching its right hand toward a female reporter while she was delivering a piece to camera.

Some users alleged that the robot’s movement looked intentional, as the reporter, identified as Rawya Kassem, was speaking at the time.

Radical Plan to Stop ‘Doomsday Glacier’ Melting to Cost $50 Billion.


Working on these next-gen intelligent AIs must be a freaky experience. As Anthropic announces the smartest model ever tested across a range of benchmarks, researchers recall a chilling moment when Claude 3 realized that it was being evaluated.

Anthropic, you may recall, was founded in 2021 by a group of senior OpenAI team members, who broke away because they didn’t agree with OpenAI’s decision to work closely with Microsoft. The company’s Claude and Claude 2 AIs have been competitive with GPT models, but neither Anthropic nor Claude has really broken through into public awareness.

That could well change with Claude 3, since Anthropic now claims to have surpassed GPT-4 and Google’s Gemini 1.0 model on a range of multimodal tests, setting new industry benchmarks “across a wide range of cognitive tasks.”

A group of Tohoku University researchers has developed a theoretical model for a high-performance spin wave reservoir computing (RC) that utilizes spintronics technology. The breakthrough moves scientists closer to realizing energy-efficient, nanoscale computing with unparalleled computational power.

Details of their findings were published in npj Spintronics on March 1, 2024.

The brain is the ultimate computer, and scientists are constantly striving to create neuromorphic devices that mimic the brain’s processing capabilities and its ability to adapt. The development of neuromorphic computing is revolutionary, allowing scientists to explore computation at the nanoscale and at GHz speeds with low energy consumption.
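For readers unfamiliar with the paradigm, the core idea of reservoir computing can be illustrated with a classic echo state network. This is a generic, minimal sketch of the principle—a fixed random "reservoir" of recurrent units transforms an input signal into a rich state space, and only a simple linear readout is trained—not the spin-wave model described in the paper; all sizes and parameters here are illustrative assumptions.

```python
import numpy as np

# Minimal echo state network: a fixed random reservoir plus a trained
# linear readout. Only W_out is learned; W_in and W stay fixed.
rng = np.random.default_rng(0)

n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 ("echo" property)

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a phase-shifted sine from the original signal.
t = np.linspace(0, 20, 500)
u, y = np.sin(t), np.sin(t + 0.2)

X = run_reservoir(u)
# Train only the linear readout (least squares), skipping the transient.
W_out, *_ = np.linalg.lstsq(X[100:], y[100:], rcond=None)
pred = X @ W_out
```

The appeal of physical reservoirs—spin waves among them—is that the fixed random recurrence above can be replaced by the natural dynamics of a material, leaving only the cheap linear readout to train.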

The new hardware reimagines AI chips for modern workloads and can run powerful AI systems using much less energy than today’s most advanced semiconductors, according to Naveen Verma, professor of electrical and computer engineering. Verma, who will lead the project, said the advances break through key barriers that have stymied chips for AI, including size, efficiency and scalability.

Chips that require less energy can be deployed to run AI in more dynamic environments, from laptops and phones to hospitals and highways to low-Earth orbit and beyond. The kinds of chips that power today’s most advanced models are too bulky and inefficient to run on small devices, and are primarily constrained to server racks and large data centers.

Now, the Defense Advanced Research Projects Agency, or DARPA, has announced it will support Verma’s work, based on a suite of key inventions from his lab, with an $18.6 million grant. The DARPA funding will drive an exploration into how fast, compact and power-efficient the new chip can get.

The rapid advancement of deep learning algorithms and generative models has enabled the automated production of increasingly striking AI-generated artistic content. Most of this AI-generated art, however, is created by algorithms and computational models, rather than by physical robots.

Researchers at Universidad Complutense de Madrid (UCM) and Universidad Carlos III de Madrid (UC3M) recently developed a deep learning-based model that allows a humanoid robot to sketch pictures much as a human artist would. Their paper, published in Cognitive Systems Research, offers a remarkable demonstration of how robots could actively engage in creative processes.

“Our idea was to propose a robot application that could attract the scientific community and the general public,” Raúl Fernandez-Fernandez, co-author of the paper, told Tech Xplore. “We thought about a task that could be shocking to see a robot performing, and that was how the concept of doing art with a humanoid robot came to us.”