
Deepfakes are the most concerning use of AI for crime and terrorism, according to a new report from University College London.

The research team first identified 20 different ways AI could be used by criminals over the next 15 years. They then asked 31 AI experts to rank them by risk, based on their potential for harm, the money they could make, their ease of use, and how hard they are to stop.

Deepfakes — AI-generated videos of real people doing and saying fictional things — earned the top spot for two major reasons. Firstly, they’re hard to identify and prevent: automated detection methods remain unreliable, and deepfakes are also getting better at fooling human eyes. A recent Facebook competition to detect them with algorithms led researchers to admit it’s “very much an unsolved problem.”

Swarm intelligence (SI) is concerned with the collective behaviour that emerges from decentralised self-organising systems, whilst swarm robotics (SR) is an approach to the self-coordination of large numbers of simple robots which emerged as the application of SI to multi-robot systems. Given the increasing severity and frequency of wildfires and the hazardous nature of fighting their propagation, the use of disposable inexpensive robots in place of humans is of special interest. This paper demonstrates the feasibility and potential of employing SR to fight fires autonomously, with a focus on the self-coordination mechanisms for the desired firefighting behaviour to emerge. Thus, an efficient physics-based model of fire propagation and a self-organisation algorithm for swarms of firefighting drones are developed and coupled, with the collaborative behaviour based on a particle swarm algorithm adapted to individuals operating within physical dynamic environments that change rapidly and severely. Numerical experiments demonstrate that the proposed self-organising system is effective, scalable and fault-tolerant, making it a promising approach to wildfire suppression, one of the most pressing challenges of our time.
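The paper's own algorithm is not reproduced here, but the particle swarm idea at its core is easy to sketch. In the minimal Python example below, each drone's velocity is pulled towards its own best-known position and the swarm's best-known position; the fire_intensity function, coefficients and map size are illustrative assumptions, not the authors' coupled fire model.

import numpy as np

# Hypothetical stand-in for the coupled fire-propagation model: returns the
# fire intensity sensed at a 2-D position (higher = closer to the fire front).
def fire_intensity(pos):
    fire_centre = np.array([50.0, 50.0])  # assumed location of the burning area
    return np.exp(-np.linalg.norm(pos - fire_centre) ** 2 / 500.0)

rng = np.random.default_rng(0)
n_drones = 20
pos = rng.uniform(0, 100, size=(n_drones, 2))   # drone positions on a 100 x 100 map
vel = np.zeros((n_drones, 2))                   # drone velocities
p_best = pos.copy()                             # each drone's best-known position
p_best_val = np.array([fire_intensity(p) for p in pos])

w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and attraction weights (assumed)

for step in range(100):
    g_best = p_best[np.argmax(p_best_val)]      # swarm's best-known position
    r1, r2 = rng.random((n_drones, 1)), rng.random((n_drones, 1))
    # Classic particle swarm update: inertia plus pulls towards personal and global bests.
    vel = w * vel + c1 * r1 * (p_best - pos) + c2 * r2 * (g_best - pos)
    pos = pos + vel
    vals = np.array([fire_intensity(p) for p in pos])
    improved = vals > p_best_val
    p_best[improved] = pos[improved]
    p_best_val[improved] = vals[improved]

print("Swarm converged near:", p_best[np.argmax(p_best_val)])

In the paper this update is adapted for physical drones operating in a changing environment; the sketch only shows the basic convergence behaviour on a static fire map.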

Drone Waiters (Boss Magazine)
According to Forbes, payroll costs consume up to 25 per cent of a restaurant’s profit. Restaurateurs in Sydney and other parts of Australia hope to combat that expense by following in the footsteps of venues in Asia that have used drone waiters instead of human wait staff.

Faster and Human-Free

Waiter drones are robotic devices that soar through the air with platters of food and glasses of beverages perched on top. Customers place their orders via electronic devices or other means, then the kitchen sends out their food on trays carried by machines rather than humans. Each drone can carry up to 4.4 pounds of cargo.

Sensors on the sides of the drones prevent them from crashing into objects or people as they navigate busy restaurants. While this strategy eliminates the human element that many experts believe is essential to the hospitality industry, the waiter drones’ success in Asia suggests they might prove a valuable contribution to restaurants in Australia.

In the movie “Ant-Man,” the title character can shrink in size and travel by soaring on the back of an insect. Now researchers at the University of Washington have developed a tiny wireless steerable camera that can also ride aboard an insect, giving everyone a chance to see an Ant-Man view of the world.

The camera, which streams video to a smartphone at 1 to 5 frames per second, sits on a mechanical arm that can pivot 60 degrees. This allows a viewer to capture a high-resolution, panoramic shot or track a moving object while expending a minimal amount of energy. To demonstrate the versatility of this system, which weighs about 250 milligrams—about one-tenth the weight of a playing card—the team mounted it on top of live beetles and insect-sized robots.

The results will be published July 15 in Science Robotics.



Is the T-1000 no longer science fiction?

Building a robot with autonomous mechanical functions like those seen in science-fiction movies and series such as “Ex Machina”, “Black Mirror”, and “The Terminator” has long been a human dream.

More specifically, the idea of a liquid-metal-based robot able to transform its structure from solid to liquid, slip through narrow channels, and self-repair after physical damage has always fascinated the scientific community engaged in cutting-edge technological research. Besides the science-fiction appeal, micromachines able to gain energy from chemical reactions are attracting considerable attention, having emerged as ideal candidates for microrobots used in microfabrication, detection/sensing, and personalized drug delivery.

The presence of outliers in a classification or regression dataset can result in a poor fit and lower predictive modeling performance.

Identifying and removing outliers is challenging with simple statistical methods for most machine learning datasets given the large number of input variables. Instead, automatic outlier detection methods can be used in the modeling pipeline and compared, just like other data preparation transforms that may be applied to the dataset.

In this tutorial, you will discover how to use automatic outlier detection and removal to improve machine learning predictive modeling performance.
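One common way to slot automatic outlier detection into a modeling pipeline is sketched below, using scikit-learn's IsolationForest on synthetic regression data; the dataset, contamination rate and linear model are placeholder choices, and the tutorial itself may use different detectors. Rows flagged as outliers are removed from the training split only, so the test set stays untouched for a fair comparison.

from sklearn.datasets import make_regression
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for a real dataset.
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)

# Fit the detector on the training data only, then drop the rows flagged as outliers (-1).
iso = IsolationForest(contamination=0.05, random_state=1)
mask = iso.fit_predict(X_train) != -1
X_train_clean, y_train_clean = X_train[mask], y_train[mask]

# Train on the cleaned data and evaluate on the untouched test set.
model = LinearRegression().fit(X_train_clean, y_train_clean)
print("MAE after outlier removal:", mean_absolute_error(y_test, model.predict(X_test)))

Fitting the detector after the train/test split matters: cleaning the whole dataset first would leak information from the test set into the training data.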

A team of computer scientists has developed a new AI that can write code and predict software solutions for programmers navigating through numerous application programming interfaces (APIs).

For years, research scientists have been studying how programs can generate instant feedback that coders can address immediately. A wide range of applications has already been created, all of which aim to detect faulty or questionable lines of code, but such tools have only been minimally integrated into most developers’ software. Now a team of computer scientists from Rice University has figured out a way for developers and programmers to receive feedback on their code while also getting suggested solutions for their programs—all through artificial intelligence (AI).

It’s no secret that healthcare costs have risen faster than inflation for decades. Some experts estimate that healthcare will account for over 20% of U.S. GDP by 2025. Meanwhile, doctors are working harder than ever before to treat patients as the U.S. physician shortage continues to grow. Many medical professionals have their schedules packed so tightly that much of the human element that motivated their pursuit of medicine in the first place is squeezed out.

In healthcare, artificial intelligence (AI) can seem intimidating. At a birthday party, a radiologist friend gently told me she felt her job would be threatened by AI in the coming decade. Yet for most of the medical profession, AI will be an accelerant and enabler, not a threat. It would also be good business for AI companies to help, rather than attempt to replace, medical professionals.

In a previous article, I described three ways in which I consistently see AI adding value: speed, cost and accuracy. In healthcare, it’s no different. Here are three examples of how AI will change healthcare.