
“Within five years, I have no doubt there will be robots in every Army formation.”


From the spears hurled by Romans to the missiles launched by fighter pilots, the weapons humans use to kill each other have always been subject to improvement. Militaries seek to make each one ever-more lethal and, in doing so, better protect the soldier who wields it. But in the next evolution of combat, the U.S. Army is heading down a path that may lead humans off the battlefield entirely.

Over the next few years, the Pentagon is poised to spend almost $1 billion for a range of robots designed to complement combat troops. Beyond scouting and explosives disposal, these new machines will sniff out hazardous chemicals or other agents, perform complex reconnaissance and even carry a soldier’s gear.


Read more

Aurora Flight Sciences’ Autonomous Aerial Cargo Utility System (AACUS) took another step forward as an AACUS-enabled UH-1H helicopter autonomously delivered 520 lb (236 kg) of water, gasoline, MREs, communications gear, and a cooler capable of carrying urgent supplies such as blood to US Marines in the field.

Last week’s demonstration at the Marine Corps Air Ground Combat Center Twentynine Palms in California was the first ever autonomous point-to-point cargo resupply mission to Marines and was carried out as part of an Integrated Training Exercise. The completion of what has been billed as the system’s first closed-loop mission involved the modified helicopter carrying out a full cargo resupply operation that included takeoff and landing with minimal human intervention.

Developed as part of a US$98-million project by the US Office of Naval Research (ONR), AACUS is an autonomous flight system that can be retrofitted to existing helicopters to make them pilot optional. The purpose of AACUS is to provide the US armed forces with logistical support in the field with a minimum of hazard to human crews.

Read more

We propose a method that can generate soft segments, i.e. layers that represent the semantically meaningful regions as well as the soft transitions between them, automatically by fusing high-level and low-level image features in a single graph structure. The semantic soft segments, visualized by assigning each segment a solid color, can be used as masks for targeted image editing tasks, or selected layers can be used for compositing after layer color estimation.

Abstract

Accurate representation of soft transitions between image regions is essential for high-quality image editing and compositing. Current techniques for generating such representations depend heavily on interaction by a skilled visual artist, as creating such accurate object selections is a tedious task. In this work, we introduce semantic soft segments, a set of layers that correspond to semantically meaningful regions in an image with accurate soft transitions between different objects. We approach this problem from a spectral segmentation angle and propose a graph structure that embeds texture and color features from the image as well as higher-level semantic information generated by a neural network. The soft segments are generated fully automatically via eigendecomposition of the carefully constructed Laplacian matrix. We demonstrate that otherwise complex image editing tasks can be done with little effort using semantic soft segments.
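The paper's actual graph fuses color, texture, and neural-network semantic features; as a rough sketch of the underlying spectral idea only, one can build an affinity graph from raw pixel similarities, form the graph Laplacian, and read a soft segmentation off its second eigenvector. Everything below (the toy 1-D "image" and the affinity bandwidth) is invented for illustration:

```python
import numpy as np

# Toy 1-D "image": two flat regions joined by a soft edge (a gradual ramp).
pixels = np.concatenate([np.zeros(10), np.linspace(0.0, 1.0, 5), np.ones(10)])
n = len(pixels)

# Affinity between neighboring pixels: near 1 when intensities match,
# near 0 across strong intensity changes. (The paper instead fuses
# color/texture with learned semantic features here.)
W = np.zeros((n, n))
for i in range(n - 1):
    w = np.exp(-((pixels[i] - pixels[i + 1]) ** 2) / 0.01)
    W[i, i + 1] = W[i + 1, i] = w

# Graph Laplacian L = D - W, then eigendecomposition (ascending eigenvalues).
D = np.diag(W.sum(axis=1))
L = D - W
eigvals, eigvecs = np.linalg.eigh(L)

# The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# is nearly constant inside each region and ramps smoothly across the
# weakly connected edge pixels: a soft segment, after rescaling to [0, 1].
fiedler = eigvecs[:, 1]
soft_segment = (fiedler - fiedler.min()) / (fiedler.max() - fiedler.min())
```

Running this, `soft_segment` sits near 0 on one flat region, near 1 on the other, and takes intermediate values across the ramp, which is exactly the "soft transition" behavior the abstract describes.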

Read more

Insect-sized flying robots could help with time-consuming tasks like surveying crop growth on large farms or sniffing out gas leaks. These robots soar by fluttering tiny wings because they are too small to use propellers, like those seen on their larger drone cousins. Small size is advantageous: These robots are cheap to make and can easily slip into tight places that are inaccessible to big drones.

But current flying robo-insects are still tethered to the ground. The electronics they need to power and control their wings are too heavy for these miniature robots to carry.

Now, engineers at the University of Washington have for the first time cut the cord and added a brain, allowing their RoboFly to take its first independent flaps. This might be one small flap for a robot, but it’s one giant leap for robot-kind. The team will present its findings May 23 at the International Conference on Robotics and Automation in Brisbane, Australia.

Read more

There’s always a lot of talk about how AI will steal all our jobs and how machines will bring about the collapse of employment as we know it. It’s certainly hard to blame people for worrying with all the negative press around the issue.

But the reality is that AI is completely dependent on humans, and it appears as if it will stay that way for the foreseeable future. In fact, as AI grows as an industry and machine learning becomes more widely used, this will actually create a whole host of new jobs for people.

Let’s take a look at some of the roles humans currently play in the AI industry and the kind of jobs that will continue to be important in the future.

Read more

The technical skills of programmer John Carmack helped create the 3D world of Doom, the first-person shooter that took over the world 25 years ago. But it was level designers like John Romero and American McGee who made the game fun to play. Today, those designers might find their jobs threatened by the ever-growing capabilities of artificial intelligence.

One of the many reasons Doom became so incredibly popular was that id Software made tools available that let anyone create their own levels for the game, resulting in thousands of free ways to add to its replay value. First-person 3D games and their level design have advanced by leaps and bounds since the original Doom’s release, but the sheer volume of user-created content made it the ideal game for training an AI to create its own levels.

Researchers at the Politecnico di Milano university in Italy created a generative adversarial network for the task, which essentially uses two artificially intelligent algorithms working against each other to optimise the overall results. One algorithm was fed thousands of Doom levels which it analysed for criteria like overall size, enemy placement, and the number of rooms. It then used what it learned to generate its own original Doom levels.
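The researchers' network operated on real Doom level data; as a minimal stand-in for the adversarial setup they used, the sketch below trains a toy generator and discriminator against each other on a single invented "level statistic" (say, room count, here just a number drawn around 4). Both models, the data, and all hyperparameters are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda a: 1.0 / (1.0 + np.exp(-a))

# "Real" level statistic: a 1-D value clustered around 4 (invented data).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: x_hat = mu + sigma * z with z ~ N(0, 1); starts far from the data.
mu, sigma = 0.0, 1.0
# Discriminator: p(real | x) = sigmoid(w * x + b).
w, b = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    # Discriminator step: push p -> 1 on real samples, p -> 0 on fakes.
    xr = real_batch(batch)
    xf = mu + sigma * rng.normal(size=batch)
    pr, pf = sig(w * xr + b), sig(w * xf + b)
    w -= lr * (np.mean((pr - 1) * xr) + np.mean(pf * xf))
    b -= lr * (np.mean(pr - 1) + np.mean(pf))

    # Generator step: push the discriminator's p on fakes toward 1
    # (the non-saturating GAN generator loss).
    z = rng.normal(size=batch)
    pf = sig(w * (mu + sigma * z) + b)
    mu -= lr * np.mean((pf - 1) * w)
    sigma -= lr * np.mean((pf - 1) * w * z)
```

After training, the generator's output mean `mu` drifts toward the real data's mean of 4: each network's update uses the other as its loss signal, which is the "working against each other" dynamic described above.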

Read more

Without saying anything, this device will let you talk to your computer — https://www.weforum.org/…/computer-system-transcribes-words…


MIT researchers have developed a computer interface that can transcribe words that the user concentrates on verbalizing but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
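The article doesn't detail the model, and the actual MIT system trains a neural network on these correlations; purely to illustrate the idea of mapping an electrode-signal feature vector to a word, here is a nearest-centroid sketch in which every signal value and vocabulary word is invented:

```python
import numpy as np

# Hypothetical training data: each row is a feature vector summarizing the
# neuromuscular electrode channels while the user internally says a word.
train = {
    "call":  np.array([[0.9, 0.1, 0.4], [1.0, 0.2, 0.5], [0.8, 0.0, 0.3]]),
    "reply": np.array([[0.1, 0.8, 0.9], [0.2, 0.9, 1.0], [0.0, 0.7, 0.8]]),
}

# "Training": store the mean signal pattern (centroid) for each word.
centroids = {word: x.mean(axis=0) for word, x in train.items()}

def transcribe(signal):
    """Return the word whose stored pattern is closest to the new signal."""
    return min(centroids, key=lambda word: np.linalg.norm(signal - centroids[word]))

print(transcribe(np.array([0.85, 0.15, 0.45])))  # prints "call"
```

A signal resembling the stored "call" patterns is transcribed as "call"; the real system's machine-learning stage plays this role at much larger scale, over many channels and a full vocabulary.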

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

Read more

Government officials, business leaders and academics attending China’s second World Intelligence Congress, abbreviated WIC 2018, envisioned people’s liberation from labor with the help of artificial intelligence.

With the theme “The Age of Intelligence: New Progress, New Trends, New Efforts,” the three-day event began in north China’s Tianjin municipality on Wednesday.

Lin Nianxiu, deputy director of China’s National Development and Reform Commission (NDRC), said at the opening of the congress that the aspirations to make machines more intelligent and liberate human beings from as much labor as possible have been major impetuses driving worldwide technological advances and industrial innovation.

Read more