Archive for the ‘robotics/AI’ category: Page 1074

Jun 4, 2021

Google and Harvard map brain connections in unprecedented detail

Posted by in categories: information science, mapping, robotics/AI

The researchers started with a sample taken from the temporal lobe of a human cerebral cortex, measuring just 1 mm³. This was stained for visual clarity, coated in resin to preserve it, and then cut into about 5,300 slices, each about 30 nanometers (nm) thick. These were then imaged using a scanning electron microscope at a resolution down to 4 nm, producing 225 million two-dimensional images that were then stitched back together into one 3D volume.
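A quick back-of-envelope check on those figures (the per-slice tile count is my own division, not stated in the article):

```python
# Back-of-envelope arithmetic on the numbers quoted above.
slices = 5300
slice_thickness_nm = 30
total_images = 225_000_000

# Total depth of tissue actually sectioned.
depth_um = slices * slice_thickness_nm / 1000  # nm -> micrometers
print(f"sectioned depth: {depth_um:.0f} um")   # 159 um

# At 4 nm resolution a slice is far too large for a single micrograph,
# so each one is captured as many overlapping image tiles.
tiles_per_slice = total_images / slices
print(f"~{tiles_per_slice:,.0f} images per slice")  # ~42,453
```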

Machine learning algorithms scanned the sample to identify the different cells and structures within. After a few passes by different automated systems, human eyes “proofread” some of the cells to ensure the algorithms were correctly identifying them.

Jun 4, 2021

Somehow This Robot Sticks to Ceilings

Posted by in category: robotics/AI

It’s either some obscure fluid effect or black magic.


Just when I think I’ve seen every possible iteration of climbing robot, someone comes up with a new way of getting robots to stick to things. The latest technique comes from the Bioinspired Robotics and Design Lab at UCSD, where they’ve managed to get a robot to stick to smooth surfaces using a vibrating motor attached to a flexible disk. How the heck does it work?

Jun 4, 2021

China Says WuDao 2.0 AI Is an Even Better Conversationalist than OpenAI, Google

Posted by in category: robotics/AI

The Beijing Academy of Artificial Intelligence (BAAI) researchers announced this week a natural language processing model called WuDao 2.0 that, per the South China Morning Post, is more advanced than similar models developed by OpenAI and Google.

The report said WuDao 2.0 uses 1.75 trillion parameters to “simulate conversational speech, write poems, understand pictures and even generate recipes.” The models developed by OpenAI and Google are meant to do similar things, but they use far fewer parameters, which the report takes as a sign that WuDao 2.0 is likely better at those tasks.

Jun 3, 2021

Biological Robots May Soon Build You a Better Heart

Posted by in categories: biotech/medical, robotics/AI

Biobots could help us with new organs! 😃


Computer scientists and biologists have teamed up to create a creature heretofore unseen on Earth: a living robot. Made from frog cells and designed by artificial intelligence, these “xenobots” may soon revolutionize everything from pollution cleanup to organ transplants.

Jun 3, 2021

DARPA Calling for AI Proposals to Measure How Authoritarian Regimes Control Information

Posted by in categories: internet, robotics/AI

Exciting.


Tech developed under the program could help the Defense Department react to repressive actions in cyberspace.

Jun 3, 2021

China’s gigantic multi-modal AI is no one-trick pony

Posted by in categories: robotics/AI, supercomputing

When OpenAI’s GPT-3 model made its debut in May of 2020, it was widely considered the state of the art. Capable of generating text indiscernible from human-crafted prose, GPT-3 set a new standard in deep learning. But oh what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.

First off, Wu Dao is flat out enormous. It has 1.75 trillion parameters (essentially, the coefficients the model learns during training), a full ten times more than the 175 billion GPT-3 was trained with and 150 billion parameters more than Google’s Switch Transformer.
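Those counts are easy to sanity-check; note that the 1.6-trillion figure for Switch Transformer below is implied by the article's "150 billion" gap rather than stated directly:

```python
wu_dao = 1.75e12   # parameters, per the article
gpt3 = 175e9       # GPT-3's published parameter count
switch = 1.6e12    # implied: wu_dao minus the quoted 150-billion gap

print(wu_dao / gpt3)    # 10.0 -> "a full ten times" larger
print(wu_dao - switch)  # 150 billion more than Switch Transformer
```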

In order to train a model with this many parameters, and to do so quickly (Wu Dao 2.0 arrived just three months after version 1.0’s release in March), the BAAI researchers first developed FastMoE, an open-source learning system akin to Google’s Mixture of Experts. The system runs on PyTorch and enabled the model to be trained on both supercomputer clusters and conventional GPUs. This gives FastMoE more flexibility than Google’s system: because it doesn’t require proprietary hardware like Google’s TPUs, it can run on off-the-shelf hardware as well as on supercomputing clusters.
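To make the Mixture-of-Experts idea concrete, here is a minimal, dependency-free sketch of top-1 expert routing. This is not FastMoE's actual API, just an illustration of the principle it builds on: a gate sends each token to one expert, so total parameters grow with the number of experts while the compute spent per token stays roughly constant.

```python
import random

random.seed(0)
DIM, NUM_EXPERTS = 4, 8

# Each expert is a separate DIM x DIM weight matrix (random, for illustration).
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]
# The gate holds one score vector per expert.
gate = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def moe_forward(x):
    # Top-1 gating: route the token to the single highest-scoring expert.
    scores = [dot(w, x) for w in gate]
    best = max(range(NUM_EXPERTS), key=lambda i: scores[i])
    # Only that expert's weights are applied to this token.
    return best, [dot(row, x) for row in experts[best]]

token = [1.0, -0.5, 0.3, 0.8]
chosen, out = moe_forward(token)
print(f"token routed to expert {chosen}")
```

Adding experts here multiplies the stored weights by `NUM_EXPERTS` while each token still pays for only one matrix product, which is how a 1.75-trillion-parameter model stays trainable.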

Jun 3, 2021

New drone is first to reach Level 4 autonomy

Posted by in categories: drones, robotics/AI

In simple terms, comparing previous autonomy standards with that of Exyn is like the difference between self-navigating a single, defined road versus uncharted terrain in unknown and unmapped territory. Unlike a car, however, a drone must be able to manoeuvre within three dimensions and pack all its intelligence and sensors onto a fraction of the total body size with severe weight restrictions.

“People have been talking about Level 4 Autonomy in driverless cars for some time, but having that same degree of intelligence condensed onboard a self-sufficient UAV is an entirely different engineering challenge in and of itself,” said Jason Derenick, CTO at Exyn Technologies. “Achieving Level 5 is the holy grail of autonomous systems – this is when the drone can demonstrate 100% control in an unbounded environment, without any input from a human operator whatsoever. While I don’t believe we will witness this in my lifetime, I do believe we will push the limits of what’s possible with advanced Level 4. We are already working on attaining Level 4B autonomy with swarms, or collaborative multi-robot systems.”

“There’s things that we want to do to make it faster, make it higher resolution, make it more accurate,” said Elm, in an interview with Forbes. “But the other thing we were kind of contemplating is basically the ability to have multiple robots collaborate with each other so you can scale the problem – both in terms of scale and scope. So you can have multiple identical robots on a mission, so you can actually now cover a larger area, but also have specialised robots that might be different. So, heterogeneous swarms so they can actually now have specialised tasks and collaborate with each other on a mission.”

Jun 3, 2021

A programmable fiber contains memory, temperature sensors, and a trained neural network program

Posted by in category: robotics/AI

MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.

Jun 2, 2021

NASA picks Venus as hot spot for two new robotic missions

Posted by in categories: robotics/AI, space

NASA is returning to sizzling Venus, our closest yet perhaps most overlooked neighbour, after decades of exploring other worlds.

The US space agency’s new administrator, Bill Nelson, announced two new robotic missions to the solar system’s hottest planet during his first major address to employees.

“These two sister missions both aim to understand how Venus became an inferno-like world capable of melting lead at the surface,” Nelson said.

Jun 2, 2021

Intel’s image-enhancing AI is a step forward for photorealistic game engines

Posted by in categories: entertainment, robotics/AI

For reference, we can go back to the HRNet paper. The researchers used a dedicated Nvidia V100, a massive and extremely expensive GPU designed for deep learning workloads. With no memory limitation and no hindrance from other in-game computations, the V100’s inference time was 150 milliseconds per input, or roughly 7 fps, nowhere near enough for smooth gameplay.
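The frame-rate claim follows directly from that latency; the 30 and 60 fps budgets below are my own reference points, not from the article:

```python
inference_ms = 150  # per-frame latency on the V100, per the article

fps = 1000 / inference_ms
print(f"{fps:.1f} fps")  # 6.7 fps, i.e. the "~7 fps" quoted

# Frame budgets a game would actually need to hit:
for target in (30, 60):
    print(f"{target} fps needs <= {1000 / target:.1f} ms per frame")
```

At 150 ms per frame the model overshoots even a 30 fps budget by more than 4x, which is the gap the article is pointing at.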

Developing and training neural networks
