Google’s new AI model Gemini launched today inside the Bard chatbot. It could go on to advance robotics and other projects, says Demis Hassabis, the AI executive leading the project.
2024 will see the arrival of the first AI PCs from hardware companies, and a new report indicates that they’re set to launch alongside Windows 12 in June.
Elon Musk was left “speechless” when confronted with the possibility that AI could follow humans to another planet.
An MIT spinoff co-founded by robotics luminary Daniela Rus aims to build general-purpose AI systems powered by a relatively new type of AI model called a liquid neural network.
The spinoff, aptly named Liquid AI, emerged from stealth this morning and announced that it has raised $37.5 million — substantial for a two-stage seed round — from VCs and organizations including OSS Capital, PagsGroup, WordPress parent company Automattic, Samsung Next, Bold Capital Partners and ISAI Cap Venture, as well as angel investors like GitHub co-founder Tom Preston-Werner, Shopify co-founder Tobias Lütke and Red Hat co-founder Bob Young.
The tranche values Liquid AI at $303 million post-money.
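For readers curious what a liquid neural network actually computes: at its core is a liquid time-constant (LTC) cell, a recurrent unit whose effective time constant varies with its input. Below is a minimal, illustrative NumPy sketch of such a cell; the class name, sizes, weights, and forward-Euler integration are assumptions for demonstration, not Liquid AI's actual architecture.

```python
import numpy as np

# Minimal sketch of a liquid time-constant (LTC) cell, the model family
# behind "liquid neural networks". All names and sizes are illustrative.

rng = np.random.default_rng(0)

class LTCCell:
    def __init__(self, n_inputs, n_hidden, tau=1.0):
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_inputs))   # input weights
        self.W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)
        self.A = rng.normal(0, 1.0, n_hidden)  # per-neuron target state
        self.tau = tau                          # base time constant

    def step(self, x, u, dt=0.05):
        # The gate f modulates both the decay rate and the drive toward A,
        # so the effective time constant changes with the input: the
        # "liquid" part of the model.
        f = np.tanh(self.W_rec @ x + self.W_in @ u + self.b)
        dxdt = -(1.0 / self.tau + f) * x + f * self.A
        return x + dt * dxdt  # forward-Euler integration of the ODE

# Usage: run a 2-input, 8-neuron cell over a short input sequence.
cell = LTCCell(n_inputs=2, n_hidden=8)
x = np.zeros(8)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = cell.step(x, u)
print(x.round(3))
```

The design point worth noting is that the same nonlinearity gates both how fast state decays and what it is driven toward, which is what lets these networks adapt their dynamics on the fly rather than using a fixed time constant.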
Also in today’s Google Bard news: Gemini was trained on, and is running on, a new version of the company’s homegrown TPU chips for AI. The new TPU v5p is a core element of AI Hypercomputer, which is tuned, managed, and orchestrated specifically for generative AI training and serving.
Working together as a consortium, FAIR and its university partners captured these perspectives with the help of more than 800 skilled participants in the United States, Japan, Colombia, Singapore, India, and Canada. In December, the consortium will open source the data (including more than 1,400 hours of video) and annotations for novel benchmark tasks. Additional details about the datasets can be found in our technical paper. Next year, we plan to host a first public benchmark challenge and release baseline models for ego-exo understanding.

Each university partner followed its own formal review processes to establish the standards for collection, management, informed consent, and a license agreement prescribing proper use. Each member also followed the Project Aria Community Research Guidelines. With this release, we aim to provide the tools the broader research community needs to explore ego-exo video, multimodal activity recognition, and beyond.
How Ego-Exo4D works.
Ego-Exo4D focuses on skilled human activities, such as playing sports, music, cooking, dancing, and bike repair. Advances in AI understanding of human skill in video could facilitate many applications. For example, in future augmented reality (AR) systems, a person wearing smart glasses could quickly pick up new skills with a virtual AI coach that guides them through a how-to video; in robot learning, a robot watching people in its environment could acquire new dexterous manipulation skills with less physical experience; in social networks, new communities could form based on how people share their expertise and complementary skills in video.
Google has unveiled a new artificial intelligence model that it claims outperforms ChatGPT in most tests and displays “advanced reasoning” across multiple formats, including an ability to view and mark a student’s physics homework.
Gemini is being released in the form of an upgrade to Google’s chatbot Bard, but not yet in the UK or EU.
Google DeepMind has developed an AI agent system that can learn tasks from a human instructor. Given enough observation time, the AI agent can not only imitate the actions of the human instructor but also recall the observed behavior later.
In a paper published in Nature, researchers outline a process called cultural transmission to train the AI model without using any pre-collected human data.
This is a new form of imitative learning that the DeepMind researchers contend allows for more efficient transmission of skills to an AI. Think of it like watching a video tutorial: you watch, learn, and apply the teachings, and you can recall the video’s lessons later on.
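To make that observe-then-recall loop concrete, here is a toy sketch: a scripted “expert” demonstrates a path through a gridworld while a learner memorizes the state-action pairs it sees, then reproduces the behavior after the expert is gone. The gridworld, scripted policy, and lookup-table memory are illustrative stand-ins, not the paper’s actual method, which trains agents with reinforcement learning in richer simulated environments.

```python
import random

# Toy sketch of observation-based imitation in the spirit of cultural
# transmission: watch an expert, memorize what it did, then act from
# recall alone once the expert has left. Everything here is illustrative.

GOAL = (4, 4)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def expert_action(pos):
    # Scripted expert: greedily move toward the goal.
    x, y = pos
    if x < GOAL[0]: return "right"
    if x > GOAL[0]: return "left"
    if y < GOAL[1]: return "up"
    return "down"

def step(pos, action):
    dx, dy = ACTIONS[action]
    return (pos[0] + dx, pos[1] + dy)

# Phase 1: the learner watches the expert and memorizes what it saw.
memory = {}  # state -> action, the learner's "recalled" policy
pos = (0, 0)
while pos != GOAL:
    a = expert_action(pos)
    memory[pos] = a
    pos = step(pos, a)

# Phase 2: the expert is gone; the learner acts from recall alone,
# falling back to a random move in states it never observed.
pos = (0, 0)
steps = 0
while pos != GOAL and steps < 50:
    a = memory.get(pos, random.choice(list(ACTIONS)))
    pos = step(pos, a)
    steps += 1

print("Reached goal via recall:", pos == GOAL)
```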
Summary: Researchers developed an AI-based method to track neurons in moving and deforming animals, a significant advancement in neuroscience research. This convolutional neural network (CNN) method overcomes the challenge of tracking brain activity in organisms like worms, whose bodies constantly change shape.
By employing ‘targeted augmentation’, the AI significantly reduces the need for manual image annotation, streamlining the neuron identification process. Tested on the roundworm Caenorhabditis elegans, this technology has not only increased analysis efficiency but also deepened insights into complex neuronal behaviors.
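As a rough illustration of the augmentation idea: starting from a single annotated frame, one can synthesize many deformed copies that mimic posture changes, so the network sees varied body shapes per manual annotation. The smooth random warp below is a generic elastic deformation, an assumed stand-in for the posture-targeted deformations of the actual method; all function names and parameters here are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

# Hedged sketch of augmentation-driven annotation reuse: one annotated
# frame is warped many times so a CNN sees many plausible deformations.

rng = np.random.default_rng(0)

def elastic_warp(image, labels, alpha=8.0, sigma=4.0):
    """Apply one smooth random deformation to an image and its label map."""
    h, w = image.shape
    # Smooth random displacement fields, one per axis.
    dy = gaussian_filter(rng.normal(size=(h, w)), sigma) * alpha
    dx = gaussian_filter(rng.normal(size=(h, w)), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [yy + dy, xx + dx]
    warped_img = map_coordinates(image, coords, order=1, mode="reflect")
    # Nearest-neighbor interpolation so neuron labels stay integral.
    warped_lab = map_coordinates(labels, coords, order=0, mode="reflect")
    return warped_img, warped_lab

# One annotated frame yields many training examples.
frame = rng.random((64, 64)).astype(np.float32)  # stand-in microscopy image
annot = (frame > 0.95).astype(np.int32)          # stand-in neuron label map
augmented = [elastic_warp(frame, annot) for _ in range(10)]
print(len(augmented), augmented[0][0].shape)
```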