
Engineers use artificial intelligence to capture the complexity of breaking waves

Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.
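The article doesn't publish the team's code, but the general approach it describes, learning a data-driven correction on top of a physics-based wave equation, can be sketched roughly as follows. Everything here is illustrative: the function names, the synthetic "tank" data, and the simple limiting-steepness baseline are assumptions, not the researchers' actual model.

```python
import numpy as np

# Illustrative only: a physics-based estimate of breaking steepness plus a
# learned polynomial correction fitted to synthetic "wave-tank" measurements.

def physics_steepness(wavelength):
    # Classical limiting steepness for deep-water waves (H/L ~ 0.14),
    # used here as a stand-in for the conventional wave equations.
    return 0.14 * np.ones_like(wavelength)

rng = np.random.default_rng(0)
wavelength = rng.uniform(1.0, 10.0, 200)  # metres (synthetic tank data)
# Synthetic "observed" steepness that deviates from the baseline physics:
observed = 0.14 - 0.004 * wavelength + rng.normal(0.0, 0.002, 200)

# Fit a quadratic correction to the physics model's residual (least squares).
residual = observed - physics_steepness(wavelength)
coeffs = np.polyfit(wavelength, residual, deg=2)

def corrected_steepness(wavelength):
    # Physics baseline plus the learned data-driven correction.
    return physics_steepness(wavelength) + np.polyval(coeffs, wavelength)
```

The real work used machine learning rather than a plain polynomial fit, but the shape of the idea is the same: keep the established equations and learn only the discrepancy between them and experiment.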

Japanese rail company rolls out VR-piloted Gundam robot worker

The West Japan Rail Company has released video of its new humanoid heavy-equipment robot. Mounted on the end of a crane, this Gundam-style robot torso mimics the arm and head motions of a human pilot, who sees through the robot’s eyes via VR goggles.

The key objectives here, according to the company, are “to improve productivity and safety,” enabling workers to lift and naturally manipulate heavy equipment around the rail system without exposing them to the risk of electric shocks or falling.

The robot’s large torso is mounted to a hydraulic crane arm, which rides around the rail system on a specially braced rail car, putting down stabilizing legs when it’s time to get to work.

A tiny research robot is living with an Antarctic penguin colony

ECHO, the robot, belongs to the Woods Hole Oceanographic Institution and rolls around the tundra collecting data used to study marine ecosystems.

The small robot takes readings and collects data much as a human researcher would, but its presence allows researchers to collect real-time information year-round while minimizing the impact their own presence could have on the animals’ lives.

Researchers say the penguins seem to be getting along swimmingly with the robot.

[Exclusive] Elon Musk: A future worth getting excited about | TED | Tesla Gigafactory interview

Elon talks about existential risks and making humanity a multi-planetary species, among other things.


What’s on Elon Musk’s mind? In this exclusive conversation with head of TED Chris Anderson, Musk details how the radical new innovations he’s working on — Tesla’s intelligent humanoid robot Optimus, SpaceX’s otherworldly Starship and Neuralink’s brain-machine interfaces, among others — could help maximize the lifespan of humanity and create a world where goods and services are abundant and accessible for all. It’s a compelling vision of a future worth getting excited about. (Recorded at the Tesla Texas Gigafactory on April 6, 2022)

Just over a week after this interview was filmed, Elon Musk joined TED2022 for another (live) conversation, where he discussed his bid to purchase Twitter, the biggest regret of his career, how his brain works and more. Watch that conversation here: https://youtu.be/cdZZpaB2kDM

0:14 A future that’s worth getting excited about.
2:44 The sustainable energy economy, batteries and 300 terawatt hours of installed capacity.
7:06 “Humanity will solve sustainable energy.”
8:47 Artificial intelligence and Tesla’s progress on full self-driving cars.
19:46 Tesla’s Optimus humanoid robot.
21:46 “People have no idea, this is going to be bigger than the car.”
23:14 Avoiding an AI dystopia.
26:39 The age of abundance.
28:20 Neuralink and brain-machine interfaces.
36:55 SpaceX’s Starship and the mission to build a city on Mars.
46:54 “It’s the people of Mars’ city.”
50:14 What else can Starship do and help explore?
53:18 Possible synergies between Tesla, SpaceX, The Boring Company and Neuralink.
54:44 Intercontinental travel via Starship.
58:41 Being a billionaire.
1:02:31 Philanthropy as love of humanity.
1:03:39 Population collapse and birth rates as a threat to the future of human civilization.
1:04:13 Elon’s drive.
1:06:06 “I think if you want the future to be good, you must make it so.”


Tackling multiple tasks with a single visual language model

One key aspect of intelligence is the ability to quickly learn how to perform a new task when given a brief instruction. For instance, a child may recognise real animals at the zoo after seeing a few pictures of them in a book, despite the differences between the two. But for a typical visual model to learn a new task, it must be trained on tens of thousands of examples labelled specifically for that task. If the goal is to count and identify animals in an image, as in “three zebras”, one would have to collect thousands of images and annotate each one with the animals’ quantity and species. This process is inefficient and expensive, requiring large amounts of annotated data and a newly trained model each time the system is confronted with a new task. As part of DeepMind’s mission to solve intelligence, we’ve explored whether an alternative model could make this process easier and more efficient, given only limited task-specific information.

Today, in the preprint of our paper, we introduce Flamingo, a single visual language model (VLM) that sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks. This means Flamingo can tackle a number of difficult problems with just a handful of task-specific examples (in a “few shots”), without any additional training required. Flamingo’s simple interface makes this possible: it takes as input a prompt of interleaved images, videos, and text, and then outputs associated language.

Similar to the behaviour of large language models (LLMs), which can address a language task by processing examples of the task in their text prompt, Flamingo’s visual and text interface can steer the model towards solving a multimodal task. Given a few example pairs of visual inputs and expected text responses composed in Flamingo’s prompt, the model can be asked a question with a new image or video, and then generate an answer.
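The interleaved prompt described above can be sketched in code. Flamingo itself is not publicly released, so the types and the `build_few_shot_prompt` helper below are hypothetical; what the sketch shows is only the interface shape the post describes: example (image, text) pairs interleaved in order, followed by a new query for the model to complete.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Image:
    path: str  # placeholder for actual pixel data

# A prompt is an ordered interleaving of images and text.
Prompt = List[Union[Image, str]]

def build_few_shot_prompt(examples: List[Tuple[Image, str]],
                          query_image: Image,
                          question: str) -> Prompt:
    """Interleave (image, answer-text) example pairs, then append the query."""
    prompt: Prompt = []
    for img, answer in examples:
        prompt += [img, answer]
    prompt += [query_image, question]
    return prompt

prompt = build_few_shot_prompt(
    examples=[
        (Image("zebras.jpg"), "Q: What is this? A: three zebras"),
        (Image("lions.jpg"), "Q: What is this? A: two lions"),
    ],
    query_image=Image("pandas.jpg"),
    question="Q: What is this? A:",
)
# A Flamingo-style model would consume `prompt` and generate the answer text.
```

Note that the prompt ends mid-pattern, with an unanswered “A:”; as with text-only large language models, the few-shot examples steer the model to continue the pattern and fill in the answer.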

AI News Timestamps

0:00 PeopleLens AI Helps The Blind.
1:40 Brain Fingerprints Detect Autism.
4:52 AI Predicts Cancer Tumor Regrowth.

Learn more about the future of decentralized AI here:
SingularityNET AGIX Website — https://singularitynet.io
Developer Documentation — https://dev.singularitynet.io/
Publish AI Services — https://publisher.singularitynet.io/
AGIX Community Telegram — https://t.me/singularitynet
AGIX Price Chat Telegram — https://t.me/AGIPriceTalk

SingularityDAO Dynamic Asset Sets: https://bit.ly/3wzr00o
SingularityDAO AI DeFi Website — https://bit.ly/3npymhA
SingularityDAO AI DeFi App — https://bit.ly/3K9TvWM
SingularityDAO Twitter — https://twitter.com/SingularityDao
SingularityDAO Medium — https://medium.com/singularitydao
SingularityDAO Telegram — https://t.me/SingularityDAO


Artificial Intelligence Is Already Outsmarting Humans!

A calculator is also smarter than humans at arithmetic, yet it made our lives easier.


While the win by Google’s AI was impressive, more advanced developments are being made behind the scenes as well. Combining multiple AI systems, many of them built on neural networks, has led to computers with distinctive personalities and quirks that were previously seen only in humans.

AI is already outsmarting humans

When we talk about artificial intelligence (AI), our minds often turn to visions of a robot uprising, sentient computers, and other science-fiction fantasies. As it turns out, though, AI has already started doing some pretty impressive things in real life: researchers at Google announced that their AI was able to beat one of mankind’s best players at Go, an ancient Chinese game so complex that a computer had never before beaten a human master. The victory was significant for two reasons: first, it shows that AlphaGo can already outsmart humans at a task once thought beyond machines; second, it suggests there’s real potential for AI to solve problems we haven’t been able to solve before.