The West Japan Rail Company has released video of its new humanoid heavy-equipment robot. Mounted on the end of a crane, this Gundam-style robot torso mimics the arm and head motions of a human pilot, who sees through the robot’s eyes via VR goggles.

The key objectives here, according to the company, are “to improve productivity and safety,” enabling workers to lift and naturally manipulate heavy equipment around the rail system without exposing them to the risk of electric shocks or falling.

The robot’s large torso is mounted to a hydraulic crane arm, which rides around the rail system on a specially braced rail car, putting down stabilizing legs when it’s time to get to work.
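JR West hasn’t published details of its control software, but the basic idea of motion-mimicking teleoperation is straightforward to sketch. Below is a minimal Python illustration; the joint names, limits, and smoothing scheme are all assumptions, not the company’s actual system:

```python
# Illustrative sketch of motion-mimicking teleoperation, NOT JR West's
# actual control code. Joint names, limits, and smoothing are assumptions.

JOINT_LIMITS = {                     # radians; hypothetical values
    "shoulder_pitch": (-1.5, 1.5),
    "shoulder_yaw":   (-1.0, 1.0),
    "elbow":          (0.0, 2.4),
    "head_pan":       (-1.2, 1.2),
    "head_tilt":      (-0.6, 0.6),
}

def map_pilot_pose(pilot_angles, previous=None, smoothing=0.2):
    """Clamp the pilot's tracked joint angles to the robot's limits and
    low-pass filter them so the heavy arm doesn't jerk."""
    targets = {}
    for joint, (lo, hi) in JOINT_LIMITS.items():
        raw = min(max(pilot_angles.get(joint, 0.0), lo), hi)
        if previous is not None:
            # Exponential smoothing: blend from the last commanded target.
            raw = (1 - smoothing) * previous[joint] + smoothing * raw
        targets[joint] = raw
    return targets

# One control tick: the VR rig reports the pilot's pose, the robot follows.
pose = {"shoulder_pitch": 0.8, "elbow": 1.1, "head_pan": -0.3}
print(map_pilot_pose(pose))
```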

ECHO, the robot, belongs to the Woods Hole Oceanographic Institution and rolls around the tundra collecting data used to study marine ecosystems.

The small robot takes readings and collects data much as a human researcher would, but it allows researchers to collect real-time information year-round and minimizes the impact their presence could have on the animals’ lives.

Researchers say the penguins seem to be getting along swimmingly with the robot.

Elon Musk talks about existential risks (x-risks) and making us a multiplanetary species, among other things.


What’s on Elon Musk’s mind? In this exclusive conversation with head of TED Chris Anderson, Musk details how the radical new innovations he’s working on — Tesla’s intelligent humanoid robot Optimus, SpaceX’s otherworldly Starship and Neuralink’s brain-machine interfaces, among others — could help maximize the lifespan of humanity and create a world where goods and services are abundant and accessible for all. It’s a compelling vision of a future worth getting excited about. (Recorded at the Tesla Texas Gigafactory on April 6, 2022)

Just over a week after this interview was filmed, Elon Musk joined TED2022 for another (live) conversation, where he discussed his bid to purchase Twitter, the biggest regret of his career, how his brain works and more. Watch that conversation here: https://youtu.be/cdZZpaB2kDM

One key aspect of intelligence is the ability to quickly learn how to perform a new task when given a brief instruction. For instance, a child may recognise real animals at the zoo after seeing a few pictures of them in a book, despite the differences between the two. But for a typical visual model to learn a new task, it must be trained on tens of thousands of examples specifically labelled for that task. If the goal is to count and identify animals in an image, as in “three zebras”, one would have to collect thousands of images and annotate each one with the number and species of the animals shown. This process is inefficient, expensive, and resource-intensive: it requires large amounts of annotated data and a newly trained model each time a new task arises. As part of DeepMind’s mission to solve intelligence, we’ve explored whether an alternative model could make this process easier and more efficient, given only limited task-specific information.

Today, in the preprint of our paper, we introduce Flamingo, a single visual language model (VLM) that sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks. This means Flamingo can tackle a number of difficult problems with just a handful of task-specific examples (in a “few shots”), without any additional training required. Flamingo’s simple interface makes this possible: it takes as input a prompt consisting of interleaved images, videos, and text, and then outputs associated language.

Similar to the behaviour of large language models (LLMs), which can address a language task by processing examples of the task in their text prompt, Flamingo’s visual and text interface can steer the model towards solving a multimodal task. Given a few example pairs of visual inputs and expected text responses composed in Flamingo’s prompt, the model can be asked a question with a new image or video, and then generate an answer.
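Flamingo itself has no public API, but the interleaved-prompt idea is easy to picture. Here is a minimal sketch of what such a few-shot prompt could look like; `load_image` and `vlm.generate` are hypothetical stand-ins for illustration, not DeepMind’s interface:

```python
def load_image(path: str) -> dict:
    """Stand-in for image loading/encoding; a real system would
    decode the file and embed it for the model."""
    return {"modality": "image", "source": path}

# Few-shot prompt: interleaved (image, expected answer) pairs, then the query.
prompt = [
    load_image("zebras.jpg"),  "Output: three zebras.",
    load_image("pandas.jpg"),  "Output: two pandas.",
    load_image("query.jpg"),   "Output:",   # the model completes this span
]

# answer = vlm.generate(prompt)   # hypothetical call; might return "one giraffe."
```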

0:00 PeopleLens AI Helps The Blind.
1:40 Brain Fingerprints Detect Autism.
4:52 AI Predicts Cancer Tumor Regrowth.

Learn more about the future of decentralized AI here:
SingularityNET AGIX Website — https://singularitynet.io
Developer Documentation — https://dev.singularitynet.io/
Publish AI Services — https://publisher.singularitynet.io/
AGIX Community Telegram — https://t.me/singularitynet
AGIX Price Chat Telegram — https://t.me/AGIPriceTalk

SingularityDAO Dynamic Asset Sets — https://bit.ly/3wzr00o
SingularityDAO AI DeFi Website — https://bit.ly/3npymhA
SingularityDAO AI DeFi App — https://bit.ly/3K9TvWM
SingularityDAO Twitter — https://twitter.com/SingularityDao
SingularityDAO Medium — https://medium.com/singularitydao
SingularityDAO Telegram — https://t.me/SingularityDAO

#AI #News #ML

A calculator is also smarter than humans at arithmetic, but it made our lives easier.


While the win by Google’s AI was impressive, more advanced developments are underway behind the scenes as well. Integrating multiple neural-network-based AI systems has led to computers with unique personalities and quirks that were previously seen only in humans.

AI is already outsmarting humans

When we talk about artificial intelligence (AI), our minds often turn to visions of a robot uprising, sentient computers, and other science-fiction fantasies. As it turns out, though, AI has already started doing some pretty impressive things in real life: Researchers at Google just announced that their AI was able to beat one of mankind’s best players at Go—an ancient Chinese game so complex that a computer had never beaten a human master. The victory was significant for more than one reason: first, it shows that AlphaGo can already outsmart humans; second, it suggests there’s real potential for AI in solving problems we haven’t been able to solve before.

“Pressure and mobility have an inverse relationship,” Diaz Artiles said. “The more pressure you have in the spacesuit, the lower the mobility. The less pressure you have, the easier it is to move around.”

“Imagine wearing really tight Under Armour or really tight leggings. That pressure pushing down on your body would be in place of, or in addition to, gas pressure,” Kluis said. “So the idea with the SmartSuit is that it would use both mechanical pressure and gas pressure.”
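The team hasn’t published SmartSuit pressure figures, but the arithmetic of combining the two pressure sources is simple. A minimal sketch, assuming illustrative numbers and modeling the elastic layer’s squeeze with Laplace’s law for a cylinder (pressure = hoop tension / radius):

```python
# Illustrative numbers only, not published SmartSuit specifications.
GAS_PRESSURE_KPA = 20.0         # partial gas pressurization (assumed)
FABRIC_TENSION_N_PER_M = 300.0  # hoop tension of the elastic layer (assumed)

def mechanical_pressure_kpa(tension_n_per_m: float, limb_radius_m: float) -> float:
    """Laplace's law for a cylinder: pressure = tension / radius (Pa -> kPa)."""
    return tension_n_per_m / limb_radius_m / 1000.0

def total_counterpressure_kpa(limb_radius_m: float) -> float:
    """Total squeeze on the body: gas pressure plus mechanical pressure."""
    return GAS_PRESSURE_KPA + mechanical_pressure_kpa(FABRIC_TENSION_N_PER_M,
                                                      limb_radius_m)

# A slimmer limb (smaller radius) gets more mechanical squeeze from the same fabric.
print(total_counterpressure_kpa(0.05))  # forearm-ish radius: 26.0 kPa
print(total_counterpressure_kpa(0.12))  # thigh-ish radius:   22.5 kPa
```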

Diaz Artiles and her team continue to work on the SmartSuit architecture, and the actuator prototypes are a promising step toward a more accommodating and versatile spacesuit for future planetary missions. The end goal is for the wearer to feel as though they are moving without a spacesuit on, without breaking much of a sweat.