
BAE Systems’ new drone-hunting missiles can take down unmanned aerial systems

The experiments were done to prove the effectiveness of 70mm rockets.

BAE Systems has tested its latest drone-hunting missiles by conducting ground-to-air test firings, according to a press release published by the company on Tuesday.

Rockets fired from a containerized weapon system. (Image: BAE Systems)

The test firings were conducted to demonstrate the effectiveness of 70mm rockets guided by APKWS guidance kits against Class-2 unmanned aerial systems (UAS), which weigh roughly 25–50 pounds and can travel at speeds exceeding 100 miles per hour.

I Interviewed An AI About The Ethics Of AI

ChatGPT is remarkable. It’s a new AI model from OpenAI that’s designed to chat in a conversational manner. It’s also a liar. Stuck for ideas on what to talk to a machine about, I decided to interview ChatGPT about the ethics of AI. Would it have the level of self-awareness to be honest about its own dangers? Would it even be willing to answer questions on how it behaves?

Yes, it would. And while ChatGPT started off by being commendably upfront about the ethics of what it does, it eventually descended into telling outright lies. It even issued a non-apology for doing so.


An interview with the cutting-edge chatbot, ChatGPT, ends in a little white lie.

Mastering Stratego, the classic game of imperfect information

Game-playing artificial intelligence (AI) systems have advanced to a new frontier. Stratego, the classic board game that’s more complex than chess and Go, and craftier than poker, has now been mastered. Published in Science, we present DeepNash, an AI agent that learned the game from scratch to a human expert level by playing against itself.

DeepNash uses a novel approach, based on game theory and model-free deep reinforcement learning. Its play style converges to a Nash equilibrium, which means its play is very hard for an opponent to exploit. So hard, in fact, that DeepNash has reached an all-time top-three ranking among human experts on the world’s biggest online Stratego platform, Gravon.
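To see why converging to a Nash equilibrium makes a strategy hard to exploit, consider a toy zero-sum game rather than Stratego itself. The sketch below is not from the DeepNash paper; the payoff matrix and function names are illustrative. It measures how much a best-responding opponent can gain against a given mixed strategy in rock-paper-scissors; the uniform Nash strategy yields zero exploitability, while a biased strategy can be punished.

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum):
# rows = our action, columns = opponent's action.
PAYOFF = np.array([
    [ 0, -1,  1],   # rock     vs rock, paper, scissors
    [ 1,  0, -1],   # paper
    [-1,  1,  0],   # scissors
])

def exploitability(our_strategy: np.ndarray) -> float:
    """Best-case gain for an opponent who best-responds to our mixed strategy."""
    # Opponent's expected payoff for each pure counter-action is -(strategy @ PAYOFF).
    opponent_values = -(our_strategy @ PAYOFF)
    return float(opponent_values.max())

# The uniform strategy is the Nash equilibrium: no counter-strategy gains anything.
print(exploitability(np.array([1/3, 1/3, 1/3])))   # 0.0
# A rock-heavy strategy is exploitable: the opponent just plays paper.
print(exploitability(np.array([0.6, 0.2, 0.2])))   # 0.4
```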

Board games have historically been a measure of progress in the field of AI, allowing us to study how humans and machines develop and execute strategies in a controlled environment. Unlike chess and Go, Stratego is a game of imperfect information: players cannot directly observe the identities of their opponent’s pieces.

This Artificial Intelligence Paper Presents an Advanced Method for Differential Privacy in Image Recognition with Better Accuracy

Machine learning has seen considerable uptake across several areas in recent years thanks to its performance. With the computing power of modern computers and graphics cards, deep learning has made it possible to achieve results that sometimes exceed those given by human experts. However, its use in sensitive areas such as medicine or finance raises confidentiality issues. Differential privacy (DP), a formal privacy guarantee, prevents adversaries with access to machine learning models from obtaining data about specific training points. The most common training approach for differential privacy in image recognition is differentially private stochastic gradient descent (DPSGD). However, the deployment of differential privacy is limited by the performance degradation caused by current DPSGD implementations.
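As a rough illustration of what a single DPSGD step involves (a minimal sketch, not the paper's implementation; the clipping norm, noise multiplier, and function names are assumptions), each example's gradient is clipped to bound its influence and Gaussian noise is added before the parameter update:

```python
import numpy as np

def dpsgd_step(per_example_grads, weights, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One differentially private SGD step: clip per-example gradients, add noise, update."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # bound each example's influence
    # Sum the clipped gradients and add Gaussian noise calibrated to the clipping norm.
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=weights.shape)
    grad_estimate = noisy_sum / len(per_example_grads)
    return weights - lr * grad_estimate
```

The added noise is what protects individual training points, and it is also what can push an update in a direction that worsens the objective, which is the problem the paragraph below describes.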

Existing methods for differentially private deep learning still underperform because, during stochastic gradient descent, they accept every model update regardless of whether the corresponding objective function value improves. For some updates, adding noise to the gradients can worsen the objective function value, especially when convergence is imminent. As a result, the final models degrade, the optimization target suffers, and the privacy budget is wasted. To address this problem, a research team from Shanghai University in China proposes simulated annealing-based differentially private stochastic gradient descent (SA-DPSGD), an approach that accepts a candidate update with a probability that depends on the quality of the update and the number of iterations.

Concretely, a model update is accepted if it yields a better objective function value; otherwise, it is rejected with a certain probability. To avoid settling into a local optimum, the authors use probabilistic rather than deterministic rejections and limit the number of consecutive rejections. The simulated annealing algorithm is therefore used to select model updates probabilistically during the stochastic gradient descent process.
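A minimal sketch of that acceptance rule (the names, temperature schedule, and rejection cap here are assumptions, not the authors' exact formulation): a noisy update is always kept if it improves the objective; otherwise it is kept only with a probability that shrinks as the loss worsens and as training progresses, with a cap on consecutive rejections so training never stalls.

```python
import math
import random

def accept_update(old_loss, new_loss, iteration, consecutive_rejections,
                  max_rejections=5, initial_temp=1.0, cooling=0.99):
    """Simulated-annealing acceptance test for a candidate DP-SGD update (illustrative)."""
    if new_loss <= old_loss:
        return True                       # better objective value: always accept
    if consecutive_rejections >= max_rejections:
        return True                       # cap consecutive rejections to avoid stalling
    temperature = initial_temp * (cooling ** iteration)   # anneal as iterations grow
    accept_prob = math.exp(-(new_loss - old_loss) / max(temperature, 1e-12))
    return random.random() < accept_prob  # occasionally accept worse updates, mostly early on
```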