
The researchers started with a sample taken from the temporal lobe of a human cerebral cortex, measuring just 1 mm³. This was stained for visual clarity, coated in resin to preserve it, and then cut into about 5,300 slices, each roughly 30 nanometers (nm) thick. These were then imaged using a scanning electron microscope with a resolution down to 4 nm. That created 225 million two-dimensional images, which were then stitched back together into a single 3D volume.

Machine learning algorithms scanned the sample to identify the different cells and structures within. After a few passes by different automated systems, human eyes “proofread” some of the cells to ensure the algorithms were correctly identifying them.

The end result, which Google calls the H01 dataset, is one of the most comprehensive maps of the human brain ever compiled. It contains 50,000 cells and 130 million synapses, as well as smaller segments of the cells such as axons, dendrites, myelin and cilia. But perhaps the most stunning statistic is that the whole thing takes up 1.4 petabytes of data – that’s more than a million gigabytes.
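As a rough sanity check on those numbers (a back-of-envelope sketch, not the paper’s own accounting), the figures quoted above imply a raw voxel count on the order of petavoxels – the same order of magnitude as the quoted 1.4 petabytes:

```python
# Back-of-envelope estimate of the H01 dataset's raw size, using only
# the figures quoted above. Assumes one byte per voxel; the published
# dataset's actual encoding and compression differ.
voxel_nm3 = 4 * 4 * 30           # 4 nm x 4 nm pixels, 30 nm slice thickness
sample_nm3 = 1_000_000 ** 3      # 1 mm^3 in cubic nanometres (1 mm = 1e6 nm)
voxels = sample_nm3 / voxel_nm3
print(f"{voxels:.2e} voxels")                      # ~2.08e15, about 2 petavoxels
print(f"~{voxels / 1e15:.1f} PB at 1 byte/voxel")  # same order as the 1.4 PB quoted
```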

It’s either some obscure fluid effect or black magic.


Just when I think I’ve seen every possible iteration of climbing robot, someone comes up with a new way of getting robots to stick to things. The latest technique comes from the Bioinspired Robotics and Design Lab at UCSD, where they’ve managed to get a robot to stick to smooth surfaces using a vibrating motor attached to a flexible disk. How the heck does it work?

The Beijing Academy of Artificial Intelligence (BAAI) researchers announced this week a natural language processing model called WuDao 2.0 that, per the South China Morning Post, is more advanced than similar models developed by OpenAI and Google.

The report said WuDao 2.0 uses 1.75 trillion parameters to “simulate conversational speech, write poems, understand pictures and even generate recipes.” The models developed by OpenAI and Google are designed to do similar things but use fewer parameters to do so – which, by the report’s reasoning, suggests WuDao 2.0 may be better at those tasks.

Biobots could help us with new organs! 😃


Computer scientists and biologists have teamed up to create a creature heretofore unseen on Earth: a living robot. Made from the cells of frogs and designed by artificial intelligence, they’re called xenobots, and they may soon revolutionize everything from how we fight pollution to organ transplants.


When OpenAI’s GPT-3 model made its debut in May 2020, its performance was widely considered the state of the art. Capable of generating text indiscernible from human-crafted prose, GPT-3 set a new standard in deep learning. But oh what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.

First off, Wu Dao is flat-out enormous. It was trained with 1.75 trillion parameters (essentially, the model’s learned coefficients), a full ten times more than the 175 billion GPT-3 was trained with and 150 billion parameters more than Google’s Switch Transformer.
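Those comparisons check out directly from the quoted figures (taking the parameter counts as given; the implied ~1.6 trillion for Google’s model matches Switch Transformer’s published size):

```python
# Quick arithmetic behind the parameter-count comparisons above.
wu_dao = 1.75e12       # Wu Dao 2.0 parameters
gpt3 = 175e9           # GPT-3 parameters
print(wu_dao / gpt3)   # 10.0 -> "a full ten times more"
print(wu_dao - 150e9)  # 1.6e12 -> implied size of Google's Switch Transformer
```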

In order to train a model with this many parameters, and to do so quickly — Wu Dao 2.0 arrived just three months after version 1.0’s release in March — the BAAI researchers first developed an open-source learning system akin to Google’s Mixture of Experts, dubbed FastMoE. This system, which runs on PyTorch, enabled the model to be trained both on clusters of supercomputers and on conventional GPUs. That gave FastMoE more flexibility than Google’s system: because FastMoE doesn’t require proprietary hardware like Google’s TPUs, it can run on off-the-shelf hardware as well as on supercomputing clusters.
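To make the mixture-of-experts idea concrete, here is a minimal, self-contained sketch of such a layer in PyTorch. It is illustrative only: the class and its structure are invented for this example and do not reproduce FastMoE’s actual API. The key property is the one that makes trillion-parameter models trainable: a router sends each token to only a few “expert” sub-networks, so adding experts grows the total parameter count while per-token compute stays roughly flat.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy mixture-of-experts layer (hypothetical sketch; not FastMoE's API)."""
    def __init__(self, d_model=64, d_hidden=128, n_experts=4, top_k=1):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # learned gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                         # x: (n_tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)  # routing probabilities
        weight, idx = gate.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e).any(dim=-1)         # tokens routed to expert e
            if mask.any():
                w = weight[mask][idx[mask] == e].unsqueeze(-1)
                out[mask] += w * expert(x[mask])  # weighted expert output
        return out

tokens = torch.randn(10, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([10, 64])
```

Doubling `n_experts` roughly doubles the layer’s parameters, but with `top_k=1` each token still passes through a single expert, which is what lets parameter counts climb into the trillions without per-token compute climbing with them.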

In simple terms, comparing previous autonomy standards with Exyn’s is like comparing self-navigation along a single, defined road with self-navigation across uncharted terrain in unknown, unmapped territory. Unlike a car, however, a drone must be able to manoeuvre in three dimensions and pack all of its intelligence and sensors into a fraction of the body size, under severe weight restrictions.

“People have been talking about Level 4 Autonomy in driverless cars for some time, but having that same degree of intelligence condensed onboard a self-sufficient UAV is an entirely different engineering challenge in and of itself,” said Jason Derenick, CTO at Exyn Technologies. “Achieving Level 5 is the holy grail of autonomous systems – this is when the drone can demonstrate 100% control in an unbounded environment, without any input from a human operator whatsoever. While I don’t believe we will witness this in my lifetime, I do believe we will push the limits of what’s possible with advanced Level 4. We are already working on attaining Level 4B autonomy with swarms, or collaborative multi-robot systems.”

“There’s things that we want to do to make it faster, make it higher resolution, make it more accurate,” said Elm, in an interview with Forbes. “But the other thing we were kind of contemplating is basically the ability to have multiple robots collaborate with each other so you can scale the problem – both in terms of scale and scope. So you can have multiple identical robots on a mission, so you can actually now cover a larger area, but also have specialised robots that might be different. So, heterogeneous swarms so they can actually now have specialised tasks and collaborate with each other on a mission.”

NASA is returning to sizzling Venus, our closest yet perhaps most overlooked neighbour, after decades of exploring other worlds.

The US space agency’s new administrator, Bill Nelson, announced two new robotic missions to the solar system’s hottest planet during his first major address to employees.

“These two sister missions both aim to understand how Venus became an inferno-like world capable of melting lead at the surface,” Nelson said.

For reference, we can go back to the HRNet paper. The researchers used a dedicated Nvidia V100, a massive and extremely expensive GPU designed specifically for deep learning. With no memory limits and no competition from other in-game computations, the V100’s inference time was 150 milliseconds per input – roughly 7 fps, not nearly enough to play a smooth game.
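The fps figure follows directly from the latency (a simple conversion, assuming inference fully serializes with rendering, with no pipelining or batching):

```python
# Converting the quoted per-frame inference latency to frames per second.
latency_ms = 150
fps = 1000 / latency_ms
print(f"{fps:.1f} fps")  # ~6.7 fps, far below the ~30-60 fps a smooth game needs
```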

Developing and training neural networks

Another vexing problem is the cost of developing and training the image-enhancing neural network. Any company that wanted to replicate Intel’s deep learning models would need three things: data, computing resources, and machine learning talent.