SUMMARY: A soft robot with octopus-inspired sensory and motion capabilities represents significant progress in robotics, offering nimbleness and adaptability in uncertain environments.
Robotics engineers have made a leap forward with the development of a soft robot that closely mimics the dynamic movements and sensory prowess of an octopus. This innovation, from an international collaboration involving Beihang University, Tsinghua University, and the National University of Singapore, has the potential to redefine how robots interact with the world around them.
The design of this highly adaptable robot draws on the soft-bodied mechanics of an octopus, enabling it to move smoothly and precisely across a variety of surfaces and environments. The sensorized soft arm, named the electronics-integrated soft octopus arm mimic (E-SOAM), advances soft robotics by combining elastic materials with liquid-metal circuits that remain resilient under extreme deformation.
Elon Musk was left “speechless” when confronted with the possibility that AI could follow humans to another planet.
An MIT spinoff co-founded by robotics luminary Daniela Rus aims to build general-purpose AI systems powered by a relatively new type of AI model called a liquid neural network.
The spinoff, aptly named Liquid AI, emerged from stealth this morning and announced that it has raised $37.5 million — substantial for a two-stage seed round — from VCs and organizations including OSS Capital, PagsGroup, WordPress parent company Automattic, Samsung Next, Bold Capital Partners and ISAI Cap Venture, as well as angel investors like GitHub co-founder Tom Preston-Werner, Shopify co-founder Tobias Lütke and Red Hat co-founder Bob Young.
The tranche values Liquid AI at $303 million post-money.
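Liquid neural networks are built from continuous-time "liquid time-constant" (LTC) cells, whose effective time constants vary with the input rather than staying fixed. As a rough illustration, here is a toy numpy sketch of a single LTC cell using the fused semi-implicit Euler update described in the LTC literature; this is a sketch under our own assumptions, not Liquid AI's implementation, and every class and parameter name below is made up for illustration.

```python
import numpy as np

# Illustrative toy LTC cell (after Hasani, Rus et al.), NOT Liquid AI's
# actual model. All names and hyperparameters here are assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LTCCell:
    def __init__(self, input_dim, hidden_dim, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (hidden_dim, input_dim))    # input weights
        self.W_rec = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))  # recurrent weights
        self.b = np.zeros(hidden_dim)                                # gate bias
        self.A = rng.normal(0.0, 1.0, hidden_dim)                    # per-neuron ODE target
        self.tau = tau                                               # base time constant

    def step(self, x, I, dt=0.1):
        # Input-dependent conductance f(x, I). Because f multiplies the state
        # in the ODE below, the effective time constant changes with the
        # input -- this is what makes the network "liquid".
        f = sigmoid(self.W_rec @ x + self.W_in @ I + self.b)
        # One fused semi-implicit Euler step of
        #   dx/dt = -(1/tau + f) * x + f * A
        return (x + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))

# Roll an 8-neuron cell over a short random input sequence.
cell = LTCCell(input_dim=2, hidden_dim=8)
x = np.zeros(8)
for I in np.random.default_rng(1).normal(size=(5, 2)):
    x = cell.step(x, I)
print(np.round(x, 3))
```

The design point worth noticing is that the gate f appears both as a decay term and as a drive toward A, so each neuron relaxes at an input-dependent rate; proponents argue this compactness and adaptivity is what lets such networks stay small yet expressive.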
Working together as a consortium, FAIR or university partners captured these perspectives with the help of more than 800 skilled participants in the United States, Japan, Colombia, Singapore, India, and Canada. In December, the consortium will open source the data (including more than 1,400 hours of video) and annotations for novel benchmark tasks. Additional details about the datasets can be found in our technical paper. Next year, we plan to host a first public benchmark challenge and release baseline models for ego-exo understanding.

Each university partner followed their own formal review processes to establish the standards for collection, management, informed consent, and a license agreement prescribing proper use. Each member also followed the Project Aria Community Research Guidelines. With this release, we aim to provide the tools the broader research community needs to explore ego-exo video, multimodal activity recognition, and beyond.
How Ego-Exo4D works.
Ego-Exo4D focuses on skilled human activities, such as playing sports, music, cooking, dancing, and bike repair. Advances in AI understanding of human skill in video could facilitate many applications. For example, in future augmented reality (AR) systems, a person wearing smart glasses could quickly pick up new skills with a virtual AI coach that guides them through a how-to video; in robot learning, a robot watching people in its environment could acquire new dexterous manipulation skills with less physical experience; in social networks, new communities could form based on how people share their expertise and complementary skills in video.
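To make the "ego-exo" pairing concrete, the sketch below shows one hypothetical way a researcher might assemble a single take: a first-person (ego) clip plus its time-synchronized third-person (exo) clips and annotations. The directory layout, file names, and JSON fields here are assumptions for illustration only, not the dataset's published schema; consult the released annotations and technical paper for the real format.

```python
import json
from pathlib import Path

# Hypothetical sketch of assembling one Ego-Exo4D "take". DATA_ROOT, the
# directory layout, and every JSON field below are assumed for illustration;
# the actual released schema may differ.

DATA_ROOT = Path("ego_exo4d")  # assumed local download location

def load_take(take_id: str):
    meta = json.loads((DATA_ROOT / "takes" / f"{take_id}.json").read_text())
    ego_clip = DATA_ROOT / "videos" / meta["ego_video"]                 # wearer's view
    exo_clips = [DATA_ROOT / "videos" / v for v in meta["exo_videos"]]  # fixed cameras
    return ego_clip, exo_clips, meta.get("annotations", [])

ego, exos, anns = load_take("bike_repair_0001")
print(ego, len(exos), len(anns))
```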
Google has unveiled a new artificial intelligence model that it claims outperforms ChatGPT in most tests and displays “advanced reasoning” across multiple formats, including an ability to view and mark a student’s physics homework.
Gemini is being released in the form of an upgrade to Google’s chatbot Bard, but not yet in the UK or EU.