Neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain.

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
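To make the prediction objective concrete, here is a minimal sketch of next-word prediction using a toy bigram counter; the corpus, function names, and lookup-table approach are illustrative stand-ins, not how the production models described here actually work.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count, for each word, how often each next word follows it."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def predict_next_word(model: dict, word: str):
    """Return the most frequent continuation seen in training, if any."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Illustrative corpus; a real predictive keyboard trains on far more text.
corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram_model(corpus)
print(predict_next_word(model, "the"))  # -> "cat"
```

Modern predictive models replace the lookup table with a neural network that, given the full preceding context rather than a single word, assigns a score to every word in its vocabulary.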

Last year DeepMind’s breakthrough AI system AlphaFold2 was recognised as a solution to the 50-year-old grand challenge of protein folding, capable of predicting the 3D structure of a protein directly from its amino acid sequence to atomic-level accuracy. This has been a watershed moment for computational and AI methods for biology.

Building on this advance, today, I’m thrilled to announce the creation of a new Alphabet company – Isomorphic Labs – a commercial venture with the mission to reimagine the entire drug discovery process from the ground up with an AI-first approach and, ultimately, to model and understand some of the fundamental mechanisms of life.

For over a decade DeepMind has been in the vanguard of advancing the state-of-the-art in AI, often using games as a proving ground for developing general purpose learning systems, like AlphaGo, our program that beat the world champion at the complex game of Go. We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself. One of the most important applications of AI that I can think of is in the field of biological and medical research, and it is an area I have been passionate about addressing for many years. Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.


We’ll soon be capable of building self-replicating robots. This will not only change humanity’s future but reshape the galaxy as we know it.


A new kind of technology developed by Meta AI could enable more intelligent and efficient robots to enter our homes and replace humans in warehouses through advances in artificial intelligence. DIGIT and ReSkin are two tactile-sensing technologies that give robots a sense of touch finer, in some respects, than a human’s. Yann LeCun, one of the field’s most prominent AI scientists, is working on this futuristic technology, which may rank among the most notable robotics and AI advances of 2021. Through deep learning and machine-learning robotics, smart humanoid robots will gain abilities previously thought impossible.
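For a sense of how image-like tactile readings might be consumed downstream, here is a hedged sketch of a small convolutional classifier over touch frames; the sensor resolution, contact classes, and interface below are assumptions for illustration, not Meta’s published DIGIT or ReSkin API.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a DIGIT-style sensor emits RGB "touch images".
# The resolution and the three contact classes below are assumptions
# for illustration, not Meta's published specification.
TACTILE_SHAPE = (3, 64, 64)
CLASSES = ["no_contact", "light_touch", "firm_grip"]

class TactileNet(nn.Module):
    """Small CNN mapping one tactile frame to a contact class."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(start_dim=1))

model = TactileNet()
fake_reading = torch.rand(1, *TACTILE_SHAPE)  # stand-in for a sensor frame
logits = model(fake_reading)
print(CLASSES[logits.argmax(dim=1).item()])
```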


TIMESTAMPS:
00:00 A new type of Robot.
02:25 A new way to sense the world.
04:45 Is this technology for everyone?
07:13 DIGIT and the Metaverse.
08:32 Last Words.


Deadlines.

You either love them, hate them, or experience both sentiments at the same time.

For AI-based true self-driving cars, there isn’t a human driver involved. Keep in mind that true self-driving cars are driven via an AI driving system: there is no need for a human driver at the wheel, nor is there normally a provision for a human to drive the vehicle.

A deadline can be handy as a focal point that helps rally everyone toward achieving something great. On the other hand, a deadline can turn people against one another and foster a bitter fight that leaves all involved forever scarred and upset over a seemingly arbitrary and reprehensible line in the sand.

The wheeled Jaeger-C is a small machine with a low profile designed to attack from ambush. In some ways, it might be seen as a mobile robotic mine. This is especially true because the makers note it can be remote-controlled or operated “autonomously with image analysis and trained models linked to robotic actions,” according to a report in Overt Defense. This sounds very much like the sort of deep learning increasingly used for other automatic target recognition, a trend driven by the ready availability of new, low-cost hardware for small uncrewed systems.

The Jaeger-C will sit in ambush in Gaard mode – a long-term silent watch mode – until it detects potential targets. It will then switch into either Chariot mode or Goliath mode depending on whether the targets are personnel or vehicles.
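As described, that watch-then-respond behavior amounts to a simple state machine. The sketch below illustrates the idea; the mode names come from the article, while the detection labels and transition function are invented for illustration and say nothing about the actual implementation.

```python
from enum import Enum, auto

class Mode(Enum):
    GAARD = auto()    # long-term silent watch mode, per the article
    CHARIOT = auto()  # response mode for personnel, per the article
    GOLIATH = auto()  # response mode for vehicles, per the article

def next_mode(current: Mode, detection) -> Mode:
    """Switch out of the watch mode based on what the perception
    stack reports; the detection labels here are assumptions."""
    if current is Mode.GAARD and detection == "personnel":
        return Mode.CHARIOT
    if current is Mode.GAARD and detection == "vehicle":
        return Mode.GOLIATH
    return current  # nothing detected: keep watching

print(next_mode(Mode.GAARD, None))       # Mode.GAARD
print(next_mode(Mode.GAARD, "vehicle"))  # Mode.GOLIATH
```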

The Air Force’s Skyborg team flew two General Atomics MQ-20 Avenger stealth drones on the “multi-hour” Oct. 26 flight over California.

Two stealth drones soared over Edwards Air Force Base in California last week, offering some encouraging evidence that the U.S. Air Force’s new drone “brain” not only works—it works with a bunch of different drone types.

The Air Force hopes to install the Skyborg autonomy core system in a wide array of unmanned aerial vehicles. The idea is for the ACS to steer armed drones with minimal human control—even in the heat of battle. That way the drones can fly as robotic wingmen for manned fighters without demanding too much of the busy human pilots.
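That “one brain, many airframes” goal is, at heart, a hardware-abstraction pattern: the core plans against a common interface, and each drone type supplies its own adapter. The sketch below is a guess at that structure for illustration only; the class names and methods are hypothetical, not drawn from the actual ACS design.

```python
from abc import ABC, abstractmethod

class Airframe(ABC):
    """Hypothetical adapter each drone type would implement, so the
    autonomy core never touches vehicle-specific details."""
    @abstractmethod
    def set_heading(self, degrees: float) -> None: ...
    @abstractmethod
    def set_altitude(self, meters: float) -> None: ...

class MQ20Adapter(Airframe):
    """Illustrative adapter for an MQ-20-like vehicle."""
    def set_heading(self, degrees: float) -> None:
        print(f"MQ-20: banking to heading {degrees:.0f}")
    def set_altitude(self, meters: float) -> None:
        print(f"MQ-20: climbing to {meters:.0f} m")

class AutonomyCore:
    """Plans against the Airframe interface only, so the same core
    can fly any vehicle that provides an adapter."""
    def __init__(self, airframe: Airframe):
        self.airframe = airframe

    def fly_waypoint(self, heading: float, altitude: float) -> None:
        self.airframe.set_heading(heading)
        self.airframe.set_altitude(altitude)

AutonomyCore(MQ20Adapter()).fly_waypoint(270.0, 6000.0)
```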

The value of attending major industry events like Nvidia’s GTC (GPU Technology Conference) is seeing what companies are and are not focusing on going forward. Nvidia has transformed GTC from a gaming conference into one of the leading AI events. The agenda also includes HPC and data-center networking topics, representing other areas Nvidia has expanded into over the last few years. If the agenda for the upcoming GTC event is any indication, the company has greatly increased its focus on autonomous machines, a category that includes all forms of robotics.

In addition to autonomous vehicles, this GTC agenda includes more than ten sessions focused on autonomous machines. As the company has done with other market segments, the autonomous machines sessions will bring together experts from academia, industry, and Nvidia to provide training, industry insights, and technical assistance in AI and robotics. Some of the experts attending include Brian Gerkey, co-founder and CEO of Open Robotics; Patty Delafuente from the University of Maryland; Ajit Jaokar and Ayşe Mutlu from the University of Oxford; and Johan Barthelemy from the University of Wollongong. There will also be AI and robotics experts from Denso Wave, Digeiz, Hammerson, Integral AI, Milestone Systems, Nota, and SK Telecom presenting at the conference.