
Mock seemed pleased with the outcome. “You could look at this and say, ‘O.K., the A.I. got five, our human got zero,’” he told viewers. “From the fighter-pilot world, we trust what works, and what we saw was that in this limited area, this specific scenario, we’ve got A.I. that works.” (A YouTube video of the trials has since garnered half a million views.)

Brett Darcey, who runs Heron, told me that the company has used Falco to fly drones, completing seventy-four flights with zero crashes. But it’s still unclear how the technology will react to the infinite possibilities of real-world conditions. The human mind processes more slowly than a computer, but it has the cognitive flexibility to adapt to unimagined circumstances; artificial intelligence, so far, does not. Anna Skinner, a human-factors psychologist, and another science adviser to the ACE program, told me, “Humans are able to draw on their experience and take reasonable actions in the face of uncertainty. And, especially in a combat situation, uncertainty is always going to be present.”

What’s next? Human brain-scale AI.

Funded by the Slovak government with funds allocated by the EU, the I4DI consortium is behind the initiative to build a 64-AI-exaflop machine (that’s 64 billion billion AI operations per second) on our platform by the end of 2022. This will enable Slovakia and the EU to deliver, for the first time in history, a human brain-scale AI supercomputer. Meanwhile, almost a dozen other countries are watching the project closely, with an interest in replicating the supercomputer in their own countries.

There are multiple approaches to achieving human brain-like AI, including machine learning, spiking neural networks such as SpiNNaker, neuromorphic computing, bio AI, explainable AI and general AI. Supporting this range of approaches requires universal supercomputers with universal processors if humanity is to deliver human brain-scale AI.

Advances in AI come out constantly, but they tend to be limited to a single domain: a cool new method for producing synthetic speech, for instance, isn’t also a way to recognize expressions on human faces. Meta (a.k.a. Facebook) researchers are working on something a little more versatile: an AI that can learn capably on its own whether it is working with spoken, written or visual material.

The traditional way of training an AI model to correctly interpret something is to give it lots and lots (like millions) of labeled examples: a picture of a cat with the cat part labeled, a conversation with the speakers and words transcribed, and so on. But that approach has fallen out of favor, as researchers found it infeasible to manually create databases of the size needed to train next-gen AIs. Who wants to label 50 million cat pictures? Okay, a few people probably, but who wants to label 50 million pictures of common fruits and vegetables?
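To make "labeled examples" concrete, here is a minimal, hypothetical sketch of supervised training in PyTorch. It is not any production system: the tiny classifier, the random stand-in images and the made-up cat/not-cat labels are all assumptions for illustration. The key point is that every example has to arrive with a human-provided label, and the loss compares the model's prediction against that label.

```python
# Minimal supervised-learning sketch: every example needs a human-provided label.
# The model, image sizes and random "photos" are toy stand-ins for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 2                      # e.g. 0 = "cat", 1 = "not a cat"

model = nn.Sequential(               # tiny image classifier
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.rand(16, 3, 32, 32)                # stand-in for labeled photos
    labels = torch.randint(0, NUM_CLASSES, (16,))     # the human-supplied labels
    logits = model(images)
    loss = loss_fn(logits, labels)   # supervision: compare prediction to the label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The labels are the expensive part: producing them at the scale modern models need is exactly the bottleneck described above.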

Currently, some of the most promising AI systems are what are called self-supervised: models that can work from large quantities of unlabeled data, like books or video of people interacting, and build their own structured understanding of the rules of the system. For instance, by reading a thousand books a model will learn the relative positions of words and ideas about grammatical structure without anyone telling it what objects or articles or commas are; it picks them up by drawing inferences from lots of examples.
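By way of contrast, here is a minimal, hypothetical sketch of the self-supervised recipe, again in PyTorch and again with toy assumptions (the vocabulary, model sizes and random token sequences stand in for real text; this is not Meta's actual system): mask out a fraction of the input and train the model to predict what was hidden from the surrounding context, so the data itself supplies the training signal and no human labels are needed.

```python
# Minimal self-supervised (masked-prediction) sketch: no human labels anywhere.
# Vocabulary, model sizes and the random "text" are toy assumptions for illustration.
import torch
import torch.nn as nn

VOCAB_SIZE = 100      # toy vocabulary of word ids
MASK_ID = 0           # id reserved for the [MASK] placeholder
SEQ_LEN = 16

class TinyMaskedLM(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_vocab = nn.Linear(dim, VOCAB_SIZE)

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return self.to_vocab(hidden)            # per-position logits over the vocabulary

model = TinyMaskedLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # "Unlabeled data": random token sequences stand in for sentences from books.
    tokens = torch.randint(1, VOCAB_SIZE, (8, SEQ_LEN))
    masked = tokens.clone()
    mask = torch.rand(tokens.shape) < 0.15       # hide roughly 15% of positions
    if mask.sum() == 0:
        continue
    masked[mask] = MASK_ID

    logits = model(masked)
    # The supervision comes from the data itself: recover the hidden tokens.
    loss = loss_fn(logits[mask], tokens[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same hide-and-predict idea carries over to other modalities by masking spans of audio or patches of an image instead of words, which is part of what makes self-supervision attractive as a more general-purpose recipe.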

Keeping up with the first law of robotics: a new photonic effect for accelerated drug discovery. Physicists at the University of Bath and the University of Michigan have demonstrated a new photonic effect in semiconducting nanohelices.


California has more rooftops with solar panels than any other state and continues to be a leader in new installations. It is also first in terms of the percentage of the state’s electricity coming from solar, and third for solar power capacity per capita. However, former California governor Arnold Schwarzenegger has expressed concerns that California.

For the past decade, AI has been quietly seeping into daily life, from facial recognition to digital assistants like Siri or Alexa. These largely unregulated uses of AI are highly lucrative for those who control them but are already causing real-world harms to those who are subjected to them: false arrests; health care discrimination; and a rise in pervasive surveillance that, in the case of policing, can disproportionately affect Black people and disadvantaged socioeconomic groups.

Gebru is a leading figure in a constellation of scholars, activists, regulators, and technologists collaborating to reshape ideas about what AI is and what it should be. Some of her fellow travelers remain in Big Tech, mobilizing those insights to push companies toward AI that is more ethical. Others, making policy on both sides of the Atlantic, are preparing new rules to set clearer limits on the companies benefiting most from automated abuses of power. Gebru herself is seeking to push the AI world beyond the binary of asking whether systems are biased and to instead focus on power: who’s building AI, who benefits from it, and who gets to decide what its future looks like.

The day after our Zoom call, on the anniversary of her departure from Google, Gebru launched the Distributed AI Research (DAIR) Institute, an independent research group she hopes will grapple with how to make AI work for everyone. “We need to let people who are harmed by technology imagine the future that they want,” she says.

And they can detach while still in motion.

Three former SpaceX engineers have launched a company to develop autonomous battery-electric trains that they believe can improve the efficiency and cut the emissions of railroads, a press statement reveals.

The firm, Parallel Systems, recently raised $49.55 million in Series A funds to build autonomous freight trains. The funding will go, in part, towards advanced tests for its self-driving machines.

Decarbonizing cargo transportation

Railroads are a great testbed for self-driving technologies, as the constrained movement of trains means there is less possibility for something to go wrong. On top of that, the transportation sector in the U.S. is the country’s largest source of greenhouse emissions, though rail is responsible for only 2 percent of total transportation emissions. Estimates by the Association of American Railroads suggest that a shift away from road transportation and toward rail could reduce emissions by up to 75 percent.

“No AI technology ‘where training or transactional data is known to be of poor quality, carry bias, or where the quality of such data is unknown’ should ever be considered for use, and thus should be deemed Extreme Risk, not High Risk. Any AI technology based on poor quality or biased data is inherently compromised.”

“No AI technology that assists in ‘identifying, categorizing, prioritizing or otherwise making decisions pertaining to members of the public’ should be deemed Low Risk. Automating such actions through technology, even with the inclusion of a human-in-the-loop, is an intrinsically risky activity, and should be categorized as such by the Policy.”

AI technologies are impacting our everyday lives. The ethical risks of AI mean we should think beyond the bare bones of algorithmic fairness and bias in order to identify the full range of effects of AI technologies on safety, privacy and society at large.

Luminar, a laser lidar startup led by one of the youngest U.S. billionaires, has a new partnership with Mercedes-Benz that includes supplying sensors for its luxury vehicles and gathering on-road data from them to improve automated driving. The German carmaker also bought a small stake in the tech company.

Luminar’s Iris lidar will be integrated into future Mercedes models planned for the automaker’s next-generation platform to improve safety and help them operate autonomously during highway driving, the companies said. Details, including the specific models that will use the sensor and when they’ll be available for sale to customers, aren’t being disclosed. Mercedes also acquired 1.5 million Luminar shares as part of the partnership, founder and CEO Austin Russell, 26, tells Forbes.

The luxury automaker is buying 1.5 million shares of the laser lidar startup that’s led by one of the youngest U.S. billionaires.

A string of Chinese video platforms are accelerating moves toward producing high-quality, 8K ultrahigh definition content by integrating 5G, artificial intelligence and virtual reality technologies.

It’s an important step toward moving 8K video into people’s living rooms, experts said.

Chinese UHD video production and distribution platform Sikai Garden Network Technology Co Ltd, also known as 4K Garden, plans to send UHD content to different terminal devices, including televisions, outdoor 8K light-emitting diode screens and VR headsets, and to explore diversified and innovative applications for the UHD industry, said Wu Yi, chairman of 4K Garden.

I don’t know about you, but I meet cyborgs in the street regularly. If you observe carefully, you can spot people with artificial legs and arms; prosthetic limbs are the most commonly seen artificial body parts, so watch more closely next time. Then there are the parts you can’t see: artificial joints, dental implants, breast implants, pacemakers, insulin pumps and so on. They are invisible but very common, and millions of people use them. A growing trend nowadays is biohacking, where people implant magnets and chips into their bodies. We think our bodies are born complete, but we are wrong: we can upgrade and modify them. What if we could use brain implants to become smarter, to think and focus more sharply?

The first real cyborg I met was Prof. Kevin Warwick. We met in Pilsen at a conference on artificial intelligence. He is known for his studies on direct interfaces between computer systems and the human nervous system, and has also done research in robotics.