
These electric submarines map the seafloor to make way for wind power

In March, the departments of Energy, Interior and Commerce said they were aiming for U.S. offshore wind capacity to hit 30 gigawatts (GW) by 2030, a hugely optimistic goal that would require thousands of new wind turbines to be installed off the Atlantic, Pacific and Gulf coasts.
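For a sense of scale, here is the rough arithmetic behind "thousands of new wind turbines"; the per-turbine ratings are assumptions, not figures from the announcement.

```python
# Back-of-the-envelope turbine count for a 30 GW target (ratings are assumptions).
target_gw = 30
for turbine_mw in (8, 12, 15):  # plausible modern offshore turbine ratings
    turbines = target_gw * 1000 / turbine_mw
    print(f"At {turbine_mw} MW per turbine: about {turbines:,.0f} turbines")
```

Even at the high end of today's ratings, the target works out to roughly 2,000 to nearly 4,000 machines, which is why siting and seafloor surveys matter so much.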

With federal support locked in, now it’s up to developers and operators to figure out where it’s safe to install offshore wind farms and pursue permits.

Bedrock, a Richmond, California, start-up, wants to help them map the seafloor using electric autonomous underwater vehicles (e-AUVs) that can launch right from the shore.

Persephone, the robot guide, leads visitors in a Greek cave

Markes listened to the first three stops from the robot in his native language and the rest in English from a human tour guide.

“I should thank Persephone, our robot, she said very fine things,” said Christos Tenis, a Greek visitor. “I’m impressed by the cave. Of course, we had a flawless (human) guide, she explained many things. I’m very impressed.”

Persephone is not the only technology used inside the cave. There is also a cellphone app: by scanning a QR code, visitors can see the Alistrati Beroni, a microorganism found only in this cave, living in the huge mounds of bat droppings left behind when the cave was opened and the bats migrated elsewhere.

Silicon Valley Neologisms: The Palantir Edition

Do you remember the Zuckerland metaverse? (Yes, I know he borrowed the word, but when you are president of a digital country, does anyone dare challenge Zuck the First, Le Roi Numérique?)

Palantir Technologies (the Seeing Stone outfit with the warm-up jacket fashion bug) introduced a tasty bit of jargon-market speak in its Q2 2021 earnings call:

Palantir’s meta-constellation software harnesses the power of growing satellite constellations, deploying AI into space to provide insights to decision-makers here on Earth. Our meta-constellation integrates with existing satellites, optimizing hundreds of orbital sensors and AI models and allowing users to ask time-sensitive questions across the entire planet. Important questions like, where are the indicators of wildfires or how are climate changes affecting crop productivity? And when and where are naval fleets conducting operations? Meta-constellation pushes Palantir’s Edge AI technology to a new frontier.

Robots vs. Humans | Do humans have a chance to defeat robots?

Could prove useful when robots try to terminate us. 😃

The robot slicing that pea pod in half lengthwise with a samurai sword was very impressive. Where is Kenshin Himura when you need him?



You’re on the PRO Robotics channel, and in today’s episode we’re going to find out whether humans have a chance of defeating robots. Robots are increasingly replacing humans in various fields, and more and more often we hear that they are leaving people unemployed, taking over the world, and that humans simply don’t stand a chance anymore. Is that true? It depends on the type of competition: watch the top battles between robots and humans and find out what we can still do better than smart machines.
Watch the video to the end and write in the comments what your final score was and who you think is cooler, robot or human.


## Center for Research on Foundation Models • Aug 16, 2021

# On the Opportunities and Risks of Foundation Models

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.

This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations).

Though foundation models are based on conventional deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization.
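As a concrete illustration of the pretrain-then-adapt pattern described above (not code from the report), here is a minimal sketch that adapts a pretrained BERT encoder to a hypothetical downstream classification task, assuming the Hugging Face transformers library and PyTorch.

```python
# Minimal sketch: adapt a pretrained foundation model to a downstream task.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # hypothetical two-class downstream task
)

# Freeze the pretrained encoder; only the small classification head is trained,
# so every downstream model shares the same underlying foundation model.
for param in model.bert.parameters():
    param.requires_grad = False

inputs = tokenizer(["An example downstream sentence."], return_tensors="pt")
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
loss.backward()  # gradients reach only the task head
```

Freezing the encoder is just one adaptation strategy; fine-tuning all weights or prompting are others, but in every case the adapted models inherit whatever the shared foundation model has learned.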

Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream.

Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.

To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.

Val Kilmer Recreated His Speaking Voice Using Artificial Intelligence and Hours of Old Audio

🥲👍


Val Kilmer marked the release of his acclaimed documentary “Val” (now streaming on Amazon Prime Video) in a milestone way: He recreated his old speaking voice by feeding hours of recorded audio of himself into an artificial intelligence algorithm. Kilmer lost the ability to speak after undergoing throat cancer treatment in 2014. Kilmer’s team recently joined forces with software company Sonantic and “Val” distributor Amazon to “create an emotional and lifelike model of his old speaking voice” (via The Wrap).

“I’m grateful to the entire team at Sonantic who masterfully restored my voice in a way I’ve never imagined possible,” Val Kilmer said in a statement. “As human beings, the ability to communicate is the core of our existence and the side effects from throat cancer have made it difficult for others to understand me. The chance to narrate my story, in a voice that feels authentic and familiar, is an incredibly special gift.”

An AI expert explains why it’s hard to give computers something you take for granted: Common sense

Quick – define common sense

Despite being both universal and essential to how humans understand the world around them and learn, common sense has defied a single precise definition. G. K. Chesterton, an English philosopher and theologian, famously wrote at the turn of the 20th century that “common sense is a wild thing, savage, and beyond rules.” Modern definitions agree that, at minimum, it is a natural, rather than formally taught, human ability that allows people to navigate daily life.

Common sense is unusually broad and includes not only social abilities, like managing expectations and reasoning about other people’s emotions, but also a naive sense of physics, such as knowing that a heavy rock cannot be safely placed on a flimsy plastic table. Naive, because people know such things despite not consciously working through physics equations.
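To make the "naive physics" point concrete, here is the trivial check people never consciously perform; every number below is made up purely for illustration.

```python
# The physics nobody consciously works through (all values are illustrative assumptions).
rock_mass_kg = 40.0       # a hefty rock
g = 9.81                  # gravitational acceleration, m/s^2
table_limit_n = 150.0     # rough load limit of a flimsy plastic table, in newtons

rock_weight_n = rock_mass_kg * g        # about 392 N
print(rock_weight_n > table_limit_n)    # True: the table would give way
```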

Team develops AI to decode brain signals and predict behavior

An artificial neural network designed by an international team involving UCL can translate raw brain-activity data into predictions of behavior, paving the way for new discoveries and a closer integration between technology and the brain.

The new method could accelerate discoveries of how brain activities relate to behaviors.

The study published today in eLife, co-led by the Kavli Institute for Systems Neuroscience in Trondheim and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, and funded by Wellcome and the European Research Council, shows that a specific type of deep learning network is able to decode many different behaviors and stimuli from a wide variety of brain regions in different species, including humans.
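The press release includes no code, but the general idea of decoding behavior from raw neural activity can be sketched with a small PyTorch model. The channel count, window length, and architecture below are assumptions for illustration, not the study's actual network.

```python
# Illustrative sketch: predict behavior (e.g. x/y position) from raw activity windows.
import torch
import torch.nn as nn

class BehaviorDecoder(nn.Module):
    def __init__(self, n_channels=64, n_outputs=2):  # assumed channels and behavior dims
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_outputs)

    def forward(self, x):  # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

decoder = BehaviorDecoder()
windows = torch.randn(8, 64, 1024)      # a batch of raw wide-band activity windows
predicted_behavior = decoder(windows)   # shape (8, 2)
```

A decoder along these lines learns the mapping from raw signals to behavior directly from paired training data, which is what would let it be applied across different brain regions and species.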