
And it looks like a big yak.

China’s state media, the Global Times, claims the country has developed the world’s largest electric-powered quadruped bionic robot. And to be honest, that thing looks just like a yak.

Bizarre appearance aside, this comes as the latest step in China's push to become a global leader in robotics by 2025, and also, of course, in military tech.


China claims that it has developed the largest electric-powered quadruped robot in the world! And the nation is rapidly approaching its 2025 goal.

China has introduced what it claims to be the world's largest electrically powered quadruped robot, designed to assist the military in logistics and reconnaissance missions.

With a “yak-like appearance,” the four-legged robot can reportedly carry up to 352 pounds (160 kilograms) of payload and run at six miles (10 kilometers) per hour.

The platform’s structure is designed to withstand challenging off-grid military missions and conquer a wide variety of terrain, including cliffs, trenches, grasslands, fields, deserts, snow, and muddy roads.

Mock seemed pleased with the outcome. “You could look at this and say, ‘O.K., the A.I. got five, our human got zero,’” he told viewers. “From the fighter-pilot world, we trust what works, and what we saw was that in this limited area, this specific scenario, we’ve got A.I. that works.” (A YouTube video of the trials has since garnered half a million views.)

Brett Darcey, who runs Heron, told me that the company has used Falco to fly drones, completing seventy-four flights with zero crashes. But it’s still unclear how the technology will react to the infinite possibilities of real-world conditions. The human mind processes more slowly than a computer, but it has the cognitive flexibility to adapt to unimagined circumstances; artificial intelligence, so far, does not. Anna Skinner, a human-factors psychologist, and another science adviser to the ACE program, told me, “Humans are able to draw on their experience and take reasonable actions in the face of uncertainty. And, especially in a combat situation, uncertainty is always going to be present.”

What’s next? Human brain-scale AI.

Funded by the Slovak government with money allocated by the EU, the I4DI consortium is behind the initiative to build a 64 AI-exaflop machine (that's 64 billion billion AI operations per second) on our platform by the end of 2022. This will enable Slovakia and the EU to deliver, for the first time in history, a human brain-scale AI supercomputer. Meanwhile, almost a dozen other countries are watching the project closely, with an interest in replicating the supercomputer in their own countries.

There are multiple approaches to achieving human brain-like AI, including machine learning, spiking neural networks like SpiNNaker, neuromorphic computing, bio AI, explainable AI, and general AI. Supporting all of these approaches requires universal supercomputers with universal processors if humanity is to deliver human brain-scale AI.

Advances in AI are constantly coming out, but they tend to be limited to a single domain: for instance, a cool new method for producing synthetic speech isn't also a way to recognize expressions on human faces. Meta (AKA Facebook) researchers are working on something a little more versatile: an AI that can learn capably on its own, whether from spoken, written, or visual material.

The traditional way of training an AI model to correctly interpret something is to give it lots and lots (like millions) of labeled examples: a picture of a cat with the cat part labeled, a conversation with the speakers and words transcribed, and so on. But that approach has fallen out of favor, as researchers found it infeasible to manually create databases of the sizes needed to train next-gen AIs. Who wants to label 50 million cat pictures? Okay, a few people probably — but who wants to label 50 million pictures of common fruits and vegetables?

Currently some of the most promising AI systems are what are called self-supervised: models that can work from large quantities of unlabeled data, like books or video of people interacting, and build their own structured understanding of what the rules are of the system. For instance, by reading a thousand books it will learn the relative positions of words and ideas about grammatical structure without anyone telling it what objects or articles or commas are — it got it by drawing inferences from lots of examples.
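To make the idea concrete, here is a minimal sketch of that kind of self-supervised training in the spirit of masked-word prediction. It is not Meta's actual system; the toy corpus, the MaskedWordModel class, and every size and name in it are illustrative assumptions. The point it demonstrates is simply that the training targets come from the text itself, not from human annotators.

```python
# Minimal self-supervised sketch (illustrative only, not Meta's method):
# hide one word in a small window of text and train a model to predict it
# from the surrounding words, so no manually labeled data is needed.
import random
import torch
import torch.nn as nn

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "a cat chased the dog . the dog chased a cat ."
).split()

vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
MASK = len(vocab)                       # extra id marking the hidden position

class MaskedWordModel(nn.Module):
    """Predict the masked word from a fixed-size window of context."""
    def __init__(self, vocab_size, dim=32, window=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size + 1, dim)   # +1 for the MASK id
        self.out = nn.Linear(dim * window, vocab_size)

    def forward(self, x):                              # x: (batch, window)
        return self.out(self.emb(x).flatten(1))

def make_batch(batch_size=32, window=5):
    """Sample windows from the raw text; the 'label' is the word we hide."""
    xs, ys = [], []
    for _ in range(batch_size):
        start = random.randrange(len(corpus) - window)
        ids = [stoi[w] for w in corpus[start:start + window]]
        pos = random.randrange(window)
        ys.append(ids[pos])            # target comes from the text itself
        ids[pos] = MASK
        xs.append(ids)
    return torch.tensor(xs), torch.tensor(ys)

model = MaskedWordModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    x, y = make_batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```

On a real system the same recipe is applied to billions of unlabeled sentences (or video frames, or audio clips), which is how the model ends up inferring grammatical structure and word relationships without anyone annotating them.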

Keeping up with the first law of robotics: a new photonic effect for accelerated drug discovery. Physicists at the University of Bath and the University of Michigan have demonstrated a new photonic effect in semiconducting nanohelices.


California has more rooftops with solar panels than any other state and continues to be a leader in new installations. It is also first in the percentage of its electricity coming from solar, and third in solar power capacity per capita. However, former California governor Arnold Schwarzenegger has expressed concerns about the direction of the state's solar policy.

For the past decade, AI has been quietly seeping into daily life, from facial recognition to digital assistants like Siri or Alexa. These largely unregulated uses of AI are highly lucrative for those who control them but are already causing real-world harms to those who are subjected to them: false arrests; health care discrimination; and a rise in pervasive surveillance that, in the case of policing, can disproportionately affect Black people and disadvantaged socioeconomic groups.

Gebru is a leading figure in a constellation of scholars, activists, regulators, and technologists collaborating to reshape ideas about what AI is and what it should be. Some of her fellow travelers remain in Big Tech, mobilizing those insights to push companies toward AI that is more ethical. Others, making policy on both sides of the Atlantic, are preparing new rules to set clearer limits on the companies benefiting most from automated abuses of power. Gebru herself is seeking to push the AI world beyond the binary of asking whether systems are biased and to instead focus on power: who’s building AI, who benefits from it, and who gets to decide what its future looks like.


The day after our Zoom call, on the anniversary of her departure from Google, Gebru launched the Distributed AI Research (DAIR) Institute, an independent research group she hopes will grapple with how to make AI work for everyone. “We need to let people who are harmed by technology imagine the future that they want,” she says.

And they can detach while still in motion.

Three former SpaceX engineers have launched a company to develop autonomous battery-electric trains that they believe can improve the efficiency and reduce the emissions of railroads, a press statement reveals.

The firm, Parallel Systems, recently raised $49.55 million in Series A funds to build autonomous freight trains. The funding will go, in part, towards advanced tests for its self-driving machines.

Decarbonizing cargo transportation

Railroads are a great testbed for self-driving technologies, as the constrained movement of trains means there is less possibility for something to go wrong. On top of that, the transportation sector in the U.S. is the country's largest source of greenhouse emissions, though rail is responsible for only 2 percent of total transportation emissions. Estimates by the Association of American Railroads suggest that a shift to rail and away from road transportation could reduce emissions by up to 75 percent.