The chairs were filled not with Gerard’s fellow Google employees but, instead, with more than 100 engineers from about a dozen big privately held companies that Google’s parent, Alphabet, had invested in.
As it battles to stand out in late-stage investing, Alphabet’s CapitalG is throwing a new machine learning marathon for its portfolio companies.
Machine-learning technology is growing ever more accessible. Let’s not have a 9/11-style ‘failure of imagination’ about it.
There is a general tendency among counterterrorism analysts to understate rather than hyperbolize terrorists’ technological adaptations. In 2011 and 2012, most believed that the “Arab Spring” revolutions would marginalize jihadist movements. But within four years, jihadists had attracted a record number of foreign fighters to the Syrian battlefield, in part by using the same social media mobilization techniques that protesters had employed to challenge dictators like Zine El Abidine Ben Ali, Hosni Mubarak, and Muammar Qaddafi.
Militant groups later combined easy access to operatives via social media with new advances in encryption to create the “virtual planner” model of terrorism. This model allows online operatives to provide the same offerings that were once the domain of physical networks, including recruiting, coordinating the target and timing of attacks, and even providing technical assistance on topics like bomb-making.
Artificial intelligence and automation stand poised to put millions out of work and make inequality even more pronounced. Is it possible to solve one problem with another?
“Within five years, I have no doubt there will be robots in every Army formation.”
From the spears hurled by Romans to the missiles launched by fighter pilots, the weapons humans use to kill each other have always been subject to improvement. Militaries seek to make each one ever-more lethal and, in doing so, better protect the soldier who wields it. But in the next evolution of combat, the U.S. Army is heading down a path that may lead humans off the battlefield entirely.
Over the next few years, the Pentagon is poised to spend almost $1 billion on a range of robots designed to complement combat troops. Beyond scouting and explosives disposal, these new machines will sniff out hazardous chemicals or other agents, perform complex reconnaissance, and even carry a soldier’s gear.
Why do artificial intelligence algorithms sound human? Why do AI robots look human? This article looks at why AI algorithms and robots are created to look and sound like humans.
Aurora Flight Sciences’ Autonomous Aerial Cargo Utility System (AACUS) took another step forward as an AACUS-enabled UH-1H helicopter autonomously delivered 520 lb (236 kg) of water, gasoline, MREs, communications gear, and a cooler capable of carrying urgent supplies such as blood to US Marines in the field.
Last week’s demonstration at the Marine Corps Air Ground Combat Center Twentynine Palms in California was the first ever autonomous point-to-point cargo resupply mission to Marines and was carried out as part of an Integrated Training Exercise. The completion of what has been billed as the system’s first closed-loop mission involved the modified helicopter carrying out a full cargo resupply operation that included takeoff and landing with minimal human intervention.
Developed as part of a US$98-million project by the US Office of Naval Research (ONR), AACUS is an autonomous flight system that can be retrofitted to existing helicopters to make them pilot optional. The purpose of AACUS is to provide the US armed forces with logistical support in the field with a minimum of hazard to human crews.
We propose a method that can generate soft segments, i.e. layers that represent the semantically meaningful regions as well as the soft transitions between them, automatically by fusing high-level and low-level image features in a single graph structure. The semantic soft segments, visualized by assigning each segment a solid color, can be used as masks for targeted image editing tasks, or selected layers can be used for compositing after layer color estimation.
Abstract
Accurate representation of soft transitions between image regions is essential for high-quality image editing and compositing. Current techniques for generating such representations depend heavily on interaction by a skilled visual artist, as creating such accurate object selections is a tedious task. In this work, we introduce semantic soft segments, a set of layers that correspond to semantically meaningful regions in an image with accurate soft transitions between different objects. We approach this problem from a spectral segmentation angle and propose a graph structure that embeds texture and color features from the image as well as higher-level semantic information generated by a neural network. The soft segments are generated fully automatically via eigendecomposition of the carefully constructed Laplacian matrix. We demonstrate that otherwise complex image editing tasks can be done with little effort using semantic soft segments.
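To make the spectral step concrete, here is a minimal sketch of the general idea rather than the paper’s implementation: fuse per-pixel color and semantic features (both assumed to be given here) into one affinity graph, form its Laplacian, and read soft layers off the eigenvectors with the smallest eigenvalues. The feature arrays, kernel width, and segment count are all placeholder assumptions.

```python
import numpy as np

# Toy spectral soft-segmentation sketch (not the authors' code).
# color_feats and semantic_feats are assumed (n_pixels, d) arrays,
# e.g. RGB values and per-pixel CNN embeddings; keep n_pixels small,
# since the affinity matrix below is dense.

def soft_segments(color_feats, semantic_feats, n_segments=5, sigma=0.5):
    feats = np.hstack([color_feats, semantic_feats])            # fused low- and high-level cues
    d2 = np.square(feats[:, None, :] - feats[None, :, :]).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))                        # pairwise affinity graph
    L = np.diag(W.sum(axis=1)) - W                              # unnormalized graph Laplacian

    # Eigenvectors of the smallest eigenvalues vary smoothly within coherent
    # regions, so after rescaling to [0, 1] they behave like soft layers.
    _, vecs = np.linalg.eigh(L)                                 # eigenvalues in ascending order
    layers = vecs[:, :n_segments]
    layers = (layers - layers.min(axis=0)) / (np.ptp(layers, axis=0) + 1e-8)
    return layers                                               # shape: (n_pixels, n_segments)
```

In a real pipeline each column would be reshaped back to image dimensions and used as an alpha mask for the targeted editing or compositing tasks the abstract describes.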
Insect-sized flying robots could help with time-consuming tasks like surveying crop growth on large farms or sniffing out gas leaks. These robots soar by fluttering tiny wings because they are too small to use propellers, like those seen on their larger drone cousins. Small size is advantageous: These robots are cheap to make and can easily slip into tight places that are inaccessible to big drones.
But current flying robo-insects are still tethered to the ground. The electronics they need to power and control their wings are too heavy for these miniature robots to carry.
Now, engineers at the University of Washington have for the first time cut the cord and added a brain, allowing their RoboFly to take its first independent flaps. This might be one small flap for a robot, but it’s one giant leap for robot-kind. The team will present its findings May 23 at the International Conference on Robotics and Automation in Brisbane, Australia.
There’s always a lot of talk about how AI will steal all our jobs and how machines will bring about the collapse of employment as we know it. It’s certainly hard to blame people for worrying with all the negative press around the issue.
But the reality is that AI is completely dependent on humans, and it appears as if it will stay that way for the foreseeable future. In fact, as AI grows as an industry and machine learning becomes more widely used, this will actually create a whole host of new jobs for people.
Let’s take a look at some of the roles humans currently play in the AI industry and the kind of jobs that will continue to be important in the future.
The technical skills of programmer John Carmack helped create the 3D world of Doom, the first-person shooter that took over the world 25 years ago. But it was level designers like John Romero and American McGee who made the game fun to play. Level designers who, today, might find their jobs threatened by the ever-growing capabilities of artificial intelligence.
One of the many reasons Doom became so incredibly popular was that id Software made tools available that let anyone create their own levels for the game, resulting in thousands of free ways to add to its replay value. First-person 3D games and their level design have advanced by leaps and bounds since the original Doom’s release, but the sheer volume of user-created content made it the ideal game for training an AI to create its own levels.
Researchers at the Politecnico di Milano in Italy created a generative adversarial network for the task, a setup in which two neural networks work against each other to optimise the overall results. One network was fed thousands of Doom levels, which it analysed for criteria like overall size, enemy placement, and the number of rooms. It then used what it learned to generate its own original Doom levels.
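For readers unfamiliar with the setup, the sketch below shows the generic generator-versus-discriminator training loop that “generative adversarial network” refers to. It is not the Politecnico di Milano code; the level-grid size, network architectures, and hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch for grid-based level layouts (illustrative only).
LEVEL_H, LEVEL_W, Z_DIM = 16, 16, 64

generator = nn.Sequential(                       # random noise -> fake level grid
    nn.Linear(Z_DIM, 256), nn.ReLU(),
    nn.Linear(256, LEVEL_H * LEVEL_W), nn.Tanh(),
)

discriminator = nn.Sequential(                   # level grid -> real/fake score (logit)
    nn.Linear(LEVEL_H * LEVEL_W, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_levels):
    """One adversarial update; real_levels is a (batch, H*W) tensor of
    flattened level grids drawn from a corpus of existing levels."""
    batch = real_levels.size(0)
    fake_levels = generator(torch.randn(batch, Z_DIM))

    # Discriminator: push real samples toward 1, generated samples toward 0.
    d_loss = (bce(discriminator(real_levels), torch.ones(batch, 1))
              + bce(discriminator(fake_levels.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its output as real.
    g_loss = bce(discriminator(fake_levels), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Over many such steps the generator learns to produce grids the discriminator can no longer reliably tell apart from real levels, which is the mechanism the article describes applied to Doom maps.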