
A newly created real-life Transformer is capable of reconfiguring its body to achieve eight distinct types of motion and can autonomously assess the environment it faces to choose the most effective combination of motions to maneuver.

The new robot, dubbed M4 (for Multi-Modal Mobility Morphobot), can roll on four wheels, turn its wheels into rotors and fly, stand on two wheels like a meerkat to peer over obstacles, “walk” by using its wheels like feet, use two rotors to help it roll up steep slopes on two wheels, tumble, and more.

A robot with such a broad set of capabilities would have applications ranging from the transport of injured people to a hospital to the exploration of other planets, says Mory Gharib (PhD ’83), the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering and director of Caltech’s Center for Autonomous Systems and Technologies (CAST), where the robot was developed.
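To make the idea of autonomously choosing a motion mode concrete, here is a minimal, purely illustrative Python sketch of how a multi-modal robot might map a coarse terrain assessment to one of its modes. The function name, thresholds, and inputs are hypothetical and are not taken from the M4 work.

```python
def choose_mode(obstacle_height_m: float, ground_is_drivable: bool) -> str:
    """Map a coarse terrain assessment to one of a few illustrative modes."""
    if not ground_is_drivable:
        return "fly"        # convert wheels to rotors and fly over the terrain
    if obstacle_height_m > 0.5:
        return "stand"      # rise on two wheels, meerkat-style, to peer over
    if obstacle_height_m > 0.1:
        return "walk"       # use the wheels like feet over rough ground
    return "roll"           # default: drive on four wheels

print(choose_mode(0.3, True))    # -> "walk"
print(choose_mode(0.0, False))   # -> "fly"
```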

AI-powered augmented reality devices will give human beings ‘superpowers’ to detect lies and ‘read’ emotions of people they are talking to, a futurist has claimed.

Speaking exclusively to DailyMail.com, Devin Liddell, Principal Futurist at Teague, said that computer vision systems built into headsets or glasses will pick up emotional cues that un-augmented human eyes and instincts cannot see.

The technology would let people know if their date is lying or is sexually aroused, along with spotting a lying politician.

And so it begins. I’ve seen one job already on Glassdoor that requires knowledge of AI, and I’d only barely started looking. I wasn’t even specifically looking for AI jobs. I’ve seen other articles claiming ChatGPT can be used to make thousands in side hustles. So far, so good. I’ll have to check out those side hustles and see if I can make use of those articles. Just one job is enough for me. One article claimed some jobs will pay as much as $800K if you know AI.


Generative artificial intelligence is all the rage now but the AI boom is not just all hype, said Dan Ives from Wedbush Securities, who calls it the “fourth industrial revolution playing out.”

“This is something I call a 1995 moment, parallel with the internet. I do not believe that this is a hype cycle,” the managing director and senior equity research analyst told CNBC’s “Squawk Box Asia” on Wednesday.

The fourth industrial revolution refers to how technological advancements like artificial intelligence, autonomous vehicles and the internet of things are changing the way humans live, work and relate to one another.

While autonomous robots have started to move out of the lab and into the real world, they remain fragile. Slight changes in the environment or lighting conditions can easily throw off the AI that controls them, and these models have to be extensively trained on specific hardware configurations before they can carry out useful tasks.

This lies in stark contrast to the latest LLMs, which have proven adept at generalizing their skills to a broad range of tasks, often in unfamiliar contexts. That’s prompted growing interest in seeing whether the underlying technology—an architecture known as a transformer—could lead to breakthroughs in robotics.

In new results, researchers at DeepMind showed that a transformer-based AI called RoboCat can not only learn a wide range of skills but also readily switch between different robotic bodies and pick up new skills much faster than normal. Perhaps most significantly, it’s able to accelerate its learning by generating its own training data.
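A rough sketch of the self-improvement cycle described above, under assumed structure (this is not DeepMind’s code): the agent is fine-tuned on a small set of demonstrations, generates new trajectories with the fine-tuned policy, and folds those back into its training set. All function and variable names here are hypothetical.

```python
def self_improvement_loop(agent, demos, rollout_fn, train_fn, rounds=3):
    """Fine-tune, self-practice, and retrain, as in the cycle described above."""
    dataset = list(demos)                # start from a handful of demonstrations
    for _ in range(rounds):
        train_fn(agent, dataset)         # fine-tune the agent on everything so far
        new_episodes = rollout_fn(agent) # let the fine-tuned agent practice the task
        dataset.extend(new_episodes)     # its own trajectories become new training data
    return agent, dataset

# Toy usage with stand-in functions (purely illustrative):
agent = {"skill": 0.0}
demos = ["human_demo_1", "human_demo_2"]
train = lambda a, d: a.update(skill=a["skill"] + 0.1 * len(d))
rollout = lambda a: [f"self_generated_episode_skill_{a['skill']:.1f}"]

agent, data = self_improvement_loop(agent, demos, rollout, train)
print(agent, len(data))
```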

When an animal takes notice of an approaching figure, it needs to determine what it is, and quickly. In nature, competition and survival dictate that it’s better to think fast—that is, for the brain to prioritize processing speed over accuracy. A new study shows that this survival principle may already be wired in the way the brain processes sensory information.

Neuroscientist Arvind Kumar, an associate professor at KTH Royal Institute of Technology, says that the study offers a new view of neural coding of different types of inputs in the brain. Kumar and fellow KTH neuroscientist Pawel Herman collaborated with KTH information theorists Movitz Lenninger and Mikael Skoglund to study input processing using computer models of the brain.

The new study surprisingly shows that initial visual processing is “quick but sloppy” in comparison to information processing in other parts of the brain’s vast neural network, where accuracy is prioritized over speed. The paper is published in the journal eLife.
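The speed-accuracy trade-off itself is easy to illustrate with a toy evidence-accumulation simulation (this is not the model from the eLife paper): lowering the decision threshold yields faster but less accurate choices, which is the “quick but sloppy” regime.

```python
import random

def decide(threshold, drift=0.1, noise=1.0, trials=2000, seed=0):
    """Return (accuracy, mean decision time) for a simple noisy accumulator."""
    rng = random.Random(seed)
    correct, total_time = 0, 0
    for _ in range(trials):
        evidence, t = 0.0, 0
        while abs(evidence) < threshold:
            evidence += drift + rng.gauss(0, noise)  # noisy evidence for the true (positive) option
            t += 1
        correct += evidence > 0
        total_time += t
    return correct / trials, total_time / trials

print(decide(threshold=2))   # low threshold: fast but sloppy
print(decide(threshold=10))  # high threshold: slower but more accurate
```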


Artificial intelligence (AI) is transforming the world as we know it, and product management is no exception. It has the potential to revolutionize customer research, decision-making and much more, providing us with data-driven insights and paving the way for a future that is not only intelligent but intuitive.

With AI at our fingertips, we’re standing at the threshold of a new era in product management. However, integrating AI into product management also presents challenges that must be addressed. We will delve into how AI influences the world of product management and what it holds for the future.

Andreessen argues that thanks to A.I., “productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.”

This week, on the Lex Fridman Podcast, he offered advice to young people who want to stand out in what he describes as this “freeze-frame moment” with A.I.—where tools like ChatGPT and GPT-4 are suddenly available and “everybody is kind of staring at them wondering what to do.”

He noted that we’re now living in a world where vast amounts of information are at our fingertips and, with A.I. tools, “your ability both to learn and to produce” is dramatically higher than in the past. Such tools should allow for more “hyper-productive people” to emerge, he said. For example, there’s no reason authors and musicians couldn’t churn out far more books or songs than was customary in the past.

Huge libraries of drug compounds may hold potential treatments for a variety of diseases, such as cancer or heart disease. Ideally, scientists would like to experimentally test each of these compounds against all possible targets, but doing that kind of screen is prohibitively time-consuming.
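A back-of-the-envelope calculation makes “prohibitively time-consuming” concrete. The library size, target count, and assay throughput below are made-up illustrative numbers, not figures from the article.

```python
# Illustrative numbers (assumptions, not from the article):
compounds = 100_000          # hypothetical compound library
targets = 1_000              # hypothetical protein targets
assays_per_day = 10_000      # hypothetical experimental throughput

pairs = compounds * targets
years = pairs / assays_per_day / 365
print(f"{pairs:,} compound-target pairs, ~{years:.0f} years of assays")
```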

In recent years, researchers have begun using computational methods to screen those libraries in hopes of speeding up drug discovery. However, many of those methods also take a long time, as most of them calculate each target protein’s three-dimensional structure from its amino-acid sequence.

Researchers in Japan and Australia have developed a new multicore optical fiber able to transmit a record-breaking 1.7 petabits per second while maintaining compatibility with existing fiber infrastructure. The team, from Japan’s National Institute of Information and Communications Technology (NICT) and Sumitomo Electric Industries, and Macquarie University in Sydney, Australia, achieved the feat using a fiber with 19 cores. That’s the largest number of cores packed into a cable with a standard cladding diameter of 125 micrometers.
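For a sense of scale, the two figures quoted above (1.7 petabits per second over 19 cores) imply an average per-core rate of roughly 89 terabits per second:

```python
# Per-core throughput implied by the figures quoted above.
total_bits_per_s = 1.7e15        # 1.7 petabits per second, aggregate
cores = 19                       # number of cores in the fiber

per_core_tbps = total_bits_per_s / cores / 1e12
print(f"~{per_core_tbps:.1f} Tb/s per core on average")   # ~89.5 Tb/s
```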

“We believe 19 cores is the highest practical number of cores or spatial channels you can have in standard cladding diameter fiber and still maintain good quality transmission,” says Georg Rademacher, who previously headed the project for NICT but who has recently returned to Germany to take up a directorship in optical communications at the University of Stuttgart.

Most fiber cables for long-distance transmission in use today are single core, single-mode glass fibers (SMF). But SMF is approaching its practical limit as network traffic rapidly increases because of AI, cloud computing, and IoT applications. Many researchers are therefore taking an interest in multicore fiber in conjunction with space-division multiplexing (SDM), a transmission technique for using multiple spatial channels in a cable.

Some of the world’s leading human and robot minds are heading to the United Nations.

At a UN summit in Geneva next week, tech luminaries ranging from futurist Ray Kurzweil to DeepMind COO Lila Ibrahim will discuss AI for good. It’s a stellar lineup of speakers, but the real stars in our eyes are the robots.

Over 50 of the beasts — the majority from Europe — will be in attendance. All of them merit places in your dreams and nightmares, but we’ve narrowed the roster down to a list of our 10 favourites.