Community of ethical hackers needed to prevent AI’s looming ‘crisis of trust’, experts argue

The Artificial Intelligence industry should create a global community of hackers and “threat modelers” dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it’s too late.

This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who have authored a new “call to action” published today in the journal Science.

They say that companies building intelligent technologies should harness techniques such as “red team” hacking, audit trails and “bias bounties”—paying out rewards for revealing ethical flaws—to prove their integrity before releasing AI for use on the wider public.

How neural networks simulate symbolic reasoning

Researchers at the University of Texas have discovered a new way for neural networks to simulate symbolic reasoning. This discovery sparks an exciting path toward uniting deep learning and symbolic reasoning AI.

In the new approach, each neuron has a specialized function that relates to specific concepts. “It opens the black box of standard deep learning models while also being able to handle more complex problems than what symbolic AI has typically handled,” Paul Blazek, University of Texas Southwestern Medical Center researcher and one of the authors of the Nature paper, told VentureBeat.

This work complements previous research on neurosymbolic methods such as MIT’s Clevrer, which has shown some promise in predicting and explaining counterfactual possibilities more effectively than neural networks. Additionally, DeepMind researchers previously elaborated on another neural network approach that outperformed state-of-the-art neurosymbolic approaches.

Autonomous robot for concentrated solar power installation and maintenance

Heliogen announced the roll-out of its robots to install and clean its CSP plants.


Heliogen, a California-based developer of concentrated solar power (CSP) plants, held the first technical demonstration of its ICARUS, or Installation & Cleaning Autonomous Robot & Utility Solution.

ICARUS is a system of autonomous robots designed to clean the heliostats, the reflective mirrors of the CSP system. Heliostats reflect sunlight onto a collection tower, where the light and heat are converted to electricity and usable thermal energy. Recently, the company partnered with Bloom Energy to produce hydrogen fuel.

China could surpass US in core 21st Century technologies within a decade: Harvard

They have cheap labour.


With China aggressively expanding in fields such as artificial intelligence, 5G networking, and semiconductors, the nation could overtake the US in core 21st-century technologies within just a decade.

The finding comes from a recent Harvard report, “The Great Rivalry: China vs the US in the 21st Century”, published by the Belfer Centre for Science and International Affairs at the Harvard Kennedy School in Cambridge, Massachusetts. The report notes that “China’s rapid rise to challenge US dominance of technology’s commanding heights has captured America’s attention,” adding that in some areas China “has already become No 1. In others, on current trajectories, it will overtake the US within the next decade.”

Carrier Bush prepares for additional Stingray testing


Technicians plan to conduct deck-handling testing of the MQ-25 Stingray on the aircraft carrier USS George HW Bush (CVN 77) while the ship is underway in December. (Michael Fabey)

Technicians aboard the aircraft carrier USS George HW Bush (CVN 77) were preparing equipment for at-sea deck-handling testing of the Boeing MQ-25 Stingray unmanned aerial vehicle (UAV) on 7 December, according to a media briefing provided that day in the ship’s hangar bay.

DeepMind Says Its New AI Has Almost the Reading Comprehension of a High Schooler

Alphabet’s AI research company DeepMind has released the next generation of its language model, which it says has close to the reading comprehension of a high schooler — a startling claim.

It says the language model, called Gopher, was able to significantly improve its reading comprehension by ingesting massive repositories of texts online.

DeepMind boasts that its algorithm, an “ultra-large language model,” has 280 billion parameters, which are a measure of size and complexity. That means it falls somewhere between OpenAI’s GPT-3 (175 billion parameters) and Microsoft and NVIDIA’s Megatron, which features 530 billion parameters, The Verge points out.
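For a rough sense of scale, the parameter counts cited above can be lined up directly. This is purely illustrative, using only the figures reported in the article (in billions of parameters):

```python
# Parameter counts, in billions, as cited in the article.
models = {"GPT-3": 175, "Gopher": 280, "Megatron": 530}

# Sort by size to show where Gopher falls between the other two.
for name, params in sorted(models.items(), key=lambda kv: kv[1]):
    print(f"{name}: {params}B parameters")
```

Sorting by size makes the article’s point visually: Gopher sits between GPT-3 and Megatron in raw parameter count, though parameter count alone is a coarse proxy for capability.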
