
Stealing fire from the gods: Artificial Intelligence and the evolution of thought

The feeling that we belong to something much larger and deeper than ourselves has long been a common human experience. Palaeontologist and Jesuit priest Teilhard de Chardin wrote about a “noosphere” of cognitive realisation evolving towards an “Omega point” of divine planetary spiritualisation. But it is hard to envisage that ever occurring. It is easier to envisage that we belong to an evolving intelligent power that has entered a momentous posthuman dimension through artificial intelligence.

Some futurists believe we are on the way to realising a posthuman world in which we will live on as cyborgs, or in some new embodiment of intelligent power that will absorb and supersede human intelligence. It is no longer fanciful to foresee a future in which we have everyday interactions with androids powered by artificial general intelligence. They will look, move, and seem to think and respond like a human person, be skilled in realistically simulating emotional responses, and greatly outperform us in mental activities and manual tasks. It may be that we will regard them only as tools or mechanical assistants. But because they express human-like behaviours, we may become attached to them, even to the extent of according them rights. Their design will have to ensure they pose no threat, but will we be able to trust fully that this will remain the case, given their technical superiority? And how far can we trust that the military, malicious groups, and rogue states won’t develop androids trained to kill people and destroy property? We know only too well about our human propensity for violent conflict.

It would be ironic if, to gain more power and control over the world, we used our human intelligence to create AI systems and devices which, for all the benefits they bring, end up managing our lives to our detriment, or even controlling us. And irony, as Greek dramatists were well aware, is often a component of fate.

New Era of Soft Robotics Inspired by Octopus-Like Sensory Capabilities

SUMMARY: A soft robot with octopus-inspired sensory and motion capabilities represents significant progress in robotics, offering nimbleness and adaptability in uncertain environments.

Robotics engineers have made a leap forward with the development of a soft robot that closely mimics the dynamic movements and sensory prowess of an octopus. This groundbreaking innovation from an international collaboration involving Beihang University, Tsinghua University, and the National University of Singapore has the potential to redefine how robots interact with the world around them.

The blueprint for this highly adaptable robot draws upon the intelligent, soft-bodied mechanics of an octopus, enabling smooth movements across a variety of surfaces and environments with precision. The sensorized soft arm, lovingly named the electronics-integrated soft octopus arm mimic (E-SOAM), embodies advancements in soft robotics with its incorporation of elastic materials and sophisticated liquid metal circuits that remain resilient under extreme deformation.

Liquid AI, a new MIT spinoff, wants to build an entirely new type of AI

An MIT spinoff co-founded by robotics luminary Daniela Rus aims to build general-purpose AI systems powered by a relatively new type of AI model called a liquid neural network.
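
Roughly speaking, the “liquid” in a liquid neural network refers to neurons whose dynamics, including their effective time constants, shift with the incoming data rather than staying fixed. As a rough illustration only, here is a minimal NumPy sketch of a liquid time-constant style update in the spirit of the LTC work by Hasani, Lechner and colleagues; the weights, sizes, and toy input are invented for the example, and this is not Liquid AI’s actual architecture or code.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class LTCCell:
        """Toy liquid time-constant cell: an input-dependent gate f modulates
        both the effective time constant and the drive toward the bias vector A."""
        def __init__(self, input_size, hidden_size, tau=1.0, dt=0.1):
            self.W_in = rng.normal(0.0, 0.1, (hidden_size, input_size))    # input weights
            self.W_rec = rng.normal(0.0, 0.1, (hidden_size, hidden_size))  # recurrent weights
            self.b = np.zeros(hidden_size)                                 # bias
            self.A = rng.normal(0.0, 1.0, hidden_size)                     # per-neuron target/bias term
            self.tau = tau                                                 # base time constant
            self.dt = dt                                                   # integration step size

        def step(self, x, u):
            # Input- and state-dependent gate in (0, 1).
            f = sigmoid(self.W_rec @ x + self.W_in @ u + self.b)
            # Fused semi-implicit Euler update of the ODE
            #   dx/dt = -(1/tau + f) * x + f * A
            return (x + self.dt * f * self.A) / (1.0 + self.dt * (1.0 / self.tau + f))

    # Usage: drive the cell with a toy sinusoidal input sequence.
    cell = LTCCell(input_size=3, hidden_size=8)
    x = np.zeros(8)
    for t in range(20):
        u = np.sin(0.3 * t) * np.ones(3)
        x = cell.step(x, u)
    print(x)

The detail that matters is in the step function: the gate f, computed from the current input and state, scales both the decay term and the drive toward A, so the cell’s time constant itself is input-dependent, unlike a conventional fixed-weight recurrence.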

The spinoff, aptly named Liquid AI, emerged from stealth this morning and announced that it has raised $37.5 million — substantial for a two-stage seed round — from VCs and organizations including OSS Capital, PagsGroup, WordPress parent company Automattic, Samsung Next, Bold Capital Partners and ISAI Cap Venture, as well as angel investors like GitHub co-founder Tom Preston-Werner, Shopify co-founder Tobias Lütke and Red Hat co-founder Bob Young.

The tranche values Liquid AI at $303 million post-money.

Introducing Ego-Exo4D: A foundational dataset for research on video learning and multimodal perception

Working together as a consortium, FAIR or university partners captured these perspectives with the help of more than 800 skilled participants in the United States, Japan, Colombia, Singapore, India, and Canada. In December, the consortium will open source the data (including more than 1,400 hours of video) and annotations for novel benchmark tasks. Additional details about the datasets can be found in our technical paper. Next year, we plan to host a first public benchmark challenge and release baseline models for ego-exo understanding.

Each university partner followed their own formal review processes to establish the standards for collection, management, informed consent, and a license agreement prescribing proper use. Each member also followed the Project Aria Community Research Guidelines. With this release, we aim to provide the tools the broader research community needs to explore ego-exo video, multimodal activity recognition, and beyond.

How Ego-Exo4D works.

Ego-Exo4D focuses on skilled human activities, such as playing sports, music, cooking, dancing, and bike repair. Advances in AI understanding of human skill in video could facilitate many applications. For example, in future augmented reality (AR) systems, a person wearing smart glasses could quickly pick up new skills with a virtual AI coach that guides them through a how-to video; in robot learning, a robot watching people in its environment could acquire new dexterous manipulation skills with less physical experience; in social networks, new communities could form based on how people share their expertise and complementary skills in video.
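
To make the ego-exo pairing concrete, the following is a purely hypothetical sketch of how a time-aligned first-person/third-person clip pair might be represented for an activity-recognition experiment; the class name, fields, and file paths are invented for illustration and are not the actual Ego-Exo4D data format or API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EgoExoClip:
        """Hypothetical container for one time-aligned ego/exo recording of a skilled activity."""
        activity: str                     # e.g. "bike repair", "cooking"
        ego_video: str                    # path to the wearer's first-person video
        exo_videos: List[str] = field(default_factory=list)  # paths to surrounding third-person videos
        start_s: float = 0.0              # clip start time (seconds), shared across views
        end_s: float = 0.0                # clip end time (seconds), shared across views

        def duration(self) -> float:
            return self.end_s - self.start_s

    # Hypothetical usage: collect clips and report footage and view counts.
    clips = [
        EgoExoClip("bike repair", "ego/repair_001.mp4", ["exo/cam1_001.mp4", "exo/cam2_001.mp4"], 0.0, 95.0),
        EgoExoClip("cooking", "ego/cook_014.mp4", ["exo/cam1_014.mp4"], 12.0, 340.0),
    ]
    for clip in clips:
        print(clip.activity, round(clip.duration(), 1), "s across", 1 + len(clip.exo_videos), "views")

Whatever the released format looks like in detail, the essential structure is the same: each skilled activity is observed simultaneously from the wearer’s viewpoint and from one or more surrounding cameras, with shared timing that lets models relate the two perspectives.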
