Before I started working on real-world robots, I wrote about their fictional and historical ancestors. This isn’t so far removed from what I do now. In factories, labs, and of course science fiction, imaginary robots keep fueling our imagination about artificial humans and autonomous machines.

Real-world robots remain surprisingly dysfunctional, although they are steadily infiltrating urban areas across the globe. This fourth industrial revolution driven by robots is shaping urban spaces and urban life in response to opportunities and challenges in economic, social, political, and healthcare domains. Our cities are becoming too big for humans to manage.

Good city governance enables and maintains the smooth flow of things, data, and people: public services, traffic, deliveries. Long queues in hospitals and banks imply poor management. Traffic congestion demonstrates that roads and traffic systems are inadequate. Goods that we increasingly order online don’t arrive fast enough. And the WiFi often fails our 24/7 digital needs. In sum, urban life, characterized by environmental pollution, a fast pace of living, traffic congestion, constant connectivity, and rising consumption, needs robotic solutions—or so we are led to believe.

Read more

In the last year, the business and consumer markets alike have seen the release of advanced technologies that were once considered the stuff of science fiction. Smart gadgets that control every facet of your home, self-driving vehicles, facial and biometric identification systems and more have begun to emerge, giving us a glimpse of the high-tech reality we’re moving towards.

To find out which futuristic technologies are on the horizon, we asked a panel of YEC (Young Entrepreneur Council) members the following question:

Read more

Researchers proposed implementing the residential energy scheduling algorithm by training three action-dependent heuristic dynamic programming (ADHDP) networks, one for each weather type: sunny, partly cloudy, or cloudy. ADHDP networks are considered ‘smart,’ as their response can change based on different conditions.

“In the future, we expect to have various types of supplies to every household including the grid, windmills, and biogenerators. The issues here are the varying nature of these power sources, which do not generate electricity at a stable rate,” said Derong Liu, a professor with the School of Automation at the Guangdong University of Technology in China and an author on the paper. “For example, power generated from windmills and solar panels depends on the weather, and they vary a lot compared to the more stable power supplied by the grid. In order to improve these power sources, we need much smarter algorithms in managing/scheduling them.”
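As a rough illustration of the architecture (not the authors’ code), the scheduler can be pictured as routing each decision to whichever network was trained for the day’s forecast. In the sketch below, a placeholder policy stands in for a trained ADHDP actor-critic, and the state fields and class names are assumptions for illustration:

```python
# Illustrative sketch of weather-conditioned scheduling (not the paper's code).
# One trained ADHDP (actor-critic) network per weather type; the scheduler
# routes each decision step to the network matching the day's forecast.

from dataclasses import dataclass

@dataclass
class HouseholdState:
    battery_soc: float   # battery state of charge, 0..1
    demand_kw: float     # current household load
    renewable_kw: float  # current wind/solar output
    grid_price: float    # $/kWh at this hour

class ADHDPNetwork:
    """Stand-in for a trained ADHDP actor-critic policy."""
    def __init__(self, weather_type: str):
        self.weather_type = weather_type

    def act(self, s: HouseholdState) -> float:
        # Placeholder logic: a real ADHDP actor network would output this.
        # Positive = charge battery from surplus, negative = discharge to load.
        surplus = s.renewable_kw - s.demand_kw
        return max(min(surplus, 1.0), -1.0)

# One network per weather condition, as described in the paper.
networks = {w: ADHDPNetwork(w) for w in ("sunny", "partly cloudy", "cloudy")}

def schedule_step(forecast: str, state: HouseholdState) -> float:
    """Route the decision to the network trained for today's weather."""
    return networks[forecast].act(state)

action = schedule_step("partly cloudy",
                       HouseholdState(battery_soc=0.6, demand_kw=1.2,
                                      renewable_kw=2.0, grid_price=0.18))
print(f"battery power command: {action:+.2f} kW")
```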

The details were published in the January 10 issue of the IEEE/CAA Journal of Automatica Sinica, a joint bimonthly publication of the IEEE and the Chinese Association of Automation.

Read more

People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes naturally to us humans. However, automatic speech separation — separating an audio signal into its individual speech sources — remains a significant challenge for computers, despite being a well-studied problem.

In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise. In this work, we are able to computationally produce videos in which speech of specific people is enhanced while all other sounds are suppressed. Our method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context. We believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking.
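Methods in this family typically predict a time-frequency mask for the chosen speaker and apply it to the mixture’s spectrogram; inverting the masked spectrogram recovers that voice. Here is a minimal sketch of that pattern, with a placeholder standing in for the trained audio-visual network (the function names and parameters are illustrative, not the paper’s):

```python
# Sketch of mask-based speech separation (the general technique behind
# audio-visual isolation; a dummy mask stands in for the trained network
# so the pipeline runs end to end).

import numpy as np
from scipy.signal import stft, istft

FS = 16000  # sample rate in Hz

def predict_mask(mix_spec: np.ndarray, face_embedding: np.ndarray) -> np.ndarray:
    """Stand-in for the audio-visual model: given the mixture spectrogram
    and visual features of the chosen face, return a 0..1 time-frequency
    mask for that speaker. Placeholder: keeps everything."""
    return np.ones_like(np.abs(mix_spec))

def isolate_speaker(mixture: np.ndarray, face_embedding: np.ndarray) -> np.ndarray:
    # 1. Transform the single-track mixture into a time-frequency representation.
    _, _, spec = stft(mixture, fs=FS, nperseg=512)
    # 2. The model predicts which time-frequency cells belong to the speaker.
    mask = predict_mask(spec, face_embedding)
    # 3. Masking keeps the chosen speaker and suppresses all other sounds.
    _, enhanced = istft(spec * mask, fs=FS, nperseg=512)
    return enhanced

noisy = np.random.randn(FS * 2)  # 2 s of stand-in "cocktail party" audio
face = np.zeros(128)             # stand-in embedding of the selected face
clean = isolate_speaker(noisy, face)
print(clean.shape)
```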

Read more

A fierce internal debate may undermine the company’s bid for the JEDI program.

Last August, U.S. Defense Secretary James Mattis made a journey to the West Coast and met with Google founder Sergey Brin and CEO Sundar Pichai. Over a half day of meetings, Google leaders described the company’s multi-year transition to cloud computing and how it was helping the company develop into a powerhouse for research and development in artificial intelligence. Brin in particular was eager to showcase how much Google was learning every day about AI and cloud implementation, according to one current and one former senior Defense Department official who spoke on condition of anonymity.

It wasn’t an overt sales pitch, exactly, say the officials. But the effect of the trip, during which Mattis also met representatives from Amazon, was transformative. He went west with deep reservations about a department-wide move to the cloud and returned to Washington, D.C., convinced that the U.S. military had to move much of its data to a commercial cloud provider — not just to manage files, email, and paperwork but to push mission-critical information to front-line operators.

Read more

Over the weekend, experts on military artificial intelligence from more than 80 world governments converged on the U.N. offices in Geneva for the start of a week’s talks on autonomous weapons systems. Many of them fear that after gunpowder and nuclear weapons, we are now on the brink of a “third revolution in warfare,” heralded by killer robots — the fully autonomous weapons that could decide who to target and kill without human input. With autonomous technology already in development in several countries, the talks mark a crucial point for governments and activists who believe the U.N. should play a key role in regulating the technology.

The meeting comes at a critical juncture. In July, Kalashnikov, the main defense contractor of the Russian government, announced it was developing a weapon that uses neural networks to make “shoot-no shoot” decisions. In January 2017, the U.S. Department of Defense released a video showing an autonomous drone swarm of 103 individual robots successfully flying over California. Nobody was in control of the drones; their flight paths were choreographed in real time by an advanced algorithm. The drones “are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” a spokesman said. The drones in the video were not weaponized — but the technology to do so is rapidly evolving.
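That “distributed brain” description matches the classic boids model of decentralized flocking, in which every agent steers using only its neighbors and no central controller exists. The toy sketch below illustrates the general principle of swarm coordination; it is not the Pentagon’s actual software, and all constants are made up:

```python
# Boids-style sketch of decentralized swarm coordination: each agent updates
# from its neighbors only, so decision-making is distributed across the swarm.

import numpy as np

N, NEIGHBOR_RADIUS, DT = 103, 5.0, 0.1
rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, (N, 2))   # agent positions in a 2-D plane
vel = rng.normal(0, 1, (N, 2))     # agent velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < NEIGHBOR_RADIUS) & (d > 0)   # local neighborhood only
        if not nbrs.any():
            continue
        cohesion   = pos[nbrs].mean(axis=0) - pos[i]   # steer toward neighbors
        alignment  = vel[nbrs].mean(axis=0) - vel[i]   # match neighbor heading
        separation = (pos[i] - pos[nbrs]).sum(axis=0)  # avoid collisions
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.03 * separation
    return pos + new_vel * DT, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("swarm spread:", pos.std(axis=0))
```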

This April also marks five years since the launch of the International Campaign to Stop Killer Robots, which called for “urgent action to preemptively ban the lethal robot weapons that would be able to select and attack targets without any human intervention.” The 2013 launch letter — signed by a Nobel Peace Laureate and the directors of several NGOs — noted that such weapons could be deployed within the next 20 years and would “give machines the power to decide who lives or dies on the battlefield.”

The industry partners will use the money to train artificially intelligent laboratory robots.

Many people assume that when robots enter the economy, they’ll snatch low-skilled jobs. But don’t let a PhD fool you — AI-powered robots will soon impact a laboratory near you.

The days of pipetting liquids around are already numbered. Companies like Transcriptic, based in Menlo Park, California, now offer automated molecular biology lab work, from routine PCR to more complicated preclinical assays. Customers can buy time on their ‘robotic cloud lab’ from any laptop and access the results in a web app.
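A session with such a service might look roughly like this: a protocol is submitted as structured JSON and the results are polled from anywhere. The endpoint, payload fields, and run lifecycle below are hypothetical sketches, not Transcriptic’s actual API:

```python
# Hypothetical sketch of driving a cloud lab from a laptop: submit a protocol
# as structured JSON, then poll until the robots finish. URL, auth scheme,
# and field names are all placeholders.

import time
import requests

BASE = "https://cloudlab.example.com/api"   # placeholder URL
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

protocol = {
    "name": "routine-pcr",
    "instructions": [
        {"op": "pipette", "volume_ul": 20, "from": "sample/A1", "to": "pcr_plate/A1"},
        {"op": "thermocycle", "plate": "pcr_plate", "cycles": 30},
    ],
}

# Submit the run to the lab's scheduling queue.
run = requests.post(f"{BASE}/runs", json=protocol, headers=HEADERS).json()

# Poll until the lab work completes, then fetch results here or in the web app.
while True:
    status = requests.get(f"{BASE}/runs/{run['id']}", headers=HEADERS).json()
    if status["state"] in ("completed", "failed"):
        break
    time.sleep(60)
print(status["state"])
```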

Read more

US regulators Wednesday approved the first device that uses artificial intelligence to detect eye damage from diabetes, allowing regular doctors to diagnose the condition without interpreting any data or images.

The device, called IDx-DR, can diagnose a condition called diabetic retinopathy, the most common cause of vision loss among the more than 30 million Americans living with diabetes.

Its software uses an artificial intelligence algorithm to analyze images of the eye, taken with a retinal camera called the Topcon NW400, the FDA said.
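In broad strokes, this kind of autonomous screening reduces to running a trained classifier on the photo and applying a fixed decision threshold, with no clinician in the loop. The sketch below shows that pattern with a placeholder model and threshold; it is schematic, not the FDA-cleared algorithm:

```python
# Sketch of the screening pattern a device like IDx-DR automates: a retinal
# photo goes through a trained classifier, and a fixed operating point turns
# the score into a refer / rescreen decision. Model and threshold are fake.

import numpy as np

THRESHOLD = 0.5  # illustrative operating point, not the device's

def dr_score(retinal_image: np.ndarray) -> float:
    """Stand-in for the trained model: returns a score for
    more-than-mild diabetic retinopathy."""
    return float(retinal_image.mean() > 0.5)  # placeholder logic

def screen(retinal_image: np.ndarray) -> str:
    score = dr_score(retinal_image)
    if score >= THRESHOLD:
        return "more than mild diabetic retinopathy detected: refer to eye care"
    return "negative: rescreen at the next interval"

fundus = np.random.rand(512, 512, 3)  # stand-in for a retinal camera capture
print(screen(fundus))
```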

Read more

ESA’s Mars Express orbiter is getting a major software upgrade that will extend its service life for years to come. On Sunday, the space agency uploaded the update to the veteran deep space probe’s computers, where it will remain stored in memory until a scheduled restart on April 16. If successful, the update will take some of the burden off the aging gyroscopes that keep the unmanned spacecraft’s vital high-gain radio antenna pointed at Earth.

As anyone who regularly uses digital devices can tell you, software updates are a way of life. It turns out that Mars orbiting spacecraft are no exception, with aging electronics that need new instructions to deal with worn out components after years of heavy use.

Mars Express is one of the oldest still-functioning missions to the Red Planet. Launched on June 2, 2003 atop a Soyuz-FG rocket from the Baikonur Cosmodrome, the orbiter arrived at Mars on December 25 of that year. Since then, it has spent 14 years orbiting Mars, taking photographs and gathering a mountain of scientific data to send back to mission control in Darmstadt, Germany.

Read more