
Researchers at MIT, who last year designed a tiny computer chip tailored to help honeybee-sized drones navigate, have now shrunk their chip design even further, in both size and power consumption.

The team, co-led by Vivienne Sze, associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), and Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics, built a fully customized chip from the ground up, with a focus on reducing power consumption and size while also increasing processing speed.

The new computer chip, named “Navion,” which they are presenting this week at the Symposia on VLSI Technology and Circuits, is just 20 square millimeters—about the size of a LEGO minifigure’s footprint—and consumes just 24 milliwatts of power, or about one-thousandth the energy required to power a lightbulb.
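As a rough sanity check on that comparison (a back-of-the-envelope sketch; the bulb wattage is an assumed typical value, not a figure from the MIT paper):

```python
# Back-of-the-envelope check of the power comparison.
# Assumption: a small household bulb drawing roughly 25 W (not from the MIT paper).
navion_power_w = 24e-3   # Navion's reported draw: 24 milliwatts
bulb_power_w = 25.0      # assumed small household bulb

ratio = bulb_power_w / navion_power_w
print(f"Navion draws about 1/{ratio:.0f} of the bulb's power")
# -> about 1/1042, in line with the article's "about one-thousandth" figure
```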

Read more

Infrared cameras are the heat-sensing eyes that help drones find their targets, even in the dead of night or through heavy fog.

Hiding from such detectors could become much easier, thanks to a new cloaking material that renders objects—and people—practically invisible.

“What we have shown is an ultrathin stealth ‘sheet.’ Right now, what people have is much heavier metal armor or thermal blankets,” says Hongrui Jiang, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor of electrical and computer engineering at the University of Wisconsin-Madison.

Read more

A space launch every 3 hours may soon be possible using rockets carried on a fully autonomous unmanned airplane, a new startup company suggests.

Alabama-based startup Aevum aims to achieve that cadence using an air-launch system called Ravn.

“Ravn is designed to launch every 180 minutes,” Jay Skylus, Aevum’s CEO and chief launch architect, told Space.com. “Other launch vehicles fly only a handful of times a year with an average of 18 months of lead time.”

Read more

Autonomous deliveries and drones

UPS execs insist that the UPS driver is a core element of its success and the face of the company, but the company has tested drone deliveries for some applications, including dropping essential supplies in Rwanda and demonstrating how medicine could be delivered to islands. In rural areas, where drones have open air to execute deliveries and the distance between stops makes it hard for drivers to be efficient, drones launched from the roofs of UPS trucks offer a solid way to cut costs and improve service. Drones could also be deployed in UPS sorting facilities and warehouses to retrieve items on high shelves or in remote areas.

The technology used by UPS generates a cache of data that opens up even more opportunities to become more efficient, improve the customer experience, innovate delivery solutions, and more. From optimizing the UPS network to driving operational improvements, big data and artificial intelligence are at the core of UPS’s business performance.

Read more

The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn’t speculate about whether exposure to graphic content changes the way a human thinks. They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. We should be asking the same questions of artificial intelligence as we do of any other technology, because it is far too easy for unintended consequences to hurt the people the system wasn’t designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. Isaac Asimov wrote the “Three Laws of Robotics” because he wanted to imagine what might happen if they were contravened.
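A toy example makes the mechanism concrete. The sketch below is purely illustrative (it is not the MIT Norman model; the caption lists and helper functions are invented): a word-frequency “captioner” can only echo the vocabulary it was trained on, so a skewed training set skews every description it produces, regardless of the input.

```python
from collections import Counter

# Hypothetical illustration of training-data bias, not the MIT "Norman" code.
neutral_captions = ["a group of birds sitting on a tree branch",
                    "a person holding an umbrella in the rain"]
dark_captions    = ["a man is shot and falls from a building",
                    "a person is struck by a speeding car"]

def train(captions):
    # "Training" here is just counting words - a stand-in for fitting a model.
    return Counter(word for c in captions for word in c.split())

def caption(model, n_words=6):
    # Emit the most frequent training words; the output can only
    # reflect whatever data the model has seen.
    return " ".join(word for word, _ in model.most_common(n_words))

print("standard model:", caption(train(neutral_captions)))
print("biased model:  ", caption(train(dark_captions)))
```

Whatever either model emits is bounded by what it was fed, which is the same point the Norman experiment makes with a real image-captioning network.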

Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” Nor has the field undergone the kind of reckoning that causes a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create something that could fundamentally alter the world. Computer scientists are beginning to realize this, too. At Google this year, 5,000 employees protested and a host of others resigned from the company because of its involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.

Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn’t buy a house or a car? To whom do you appeal? What if you’re not white and a piece of software predicts you’ll commit a crime because of that? There are many, many open questions. Norman’s role is to help us figure out their answers.

Read more