
If AI is going to help us in a crisis, we need a new kind of ethics

Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call “ethics for urgency.”

For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.

Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.

Tesla Autopilot makes automatic lane change to avoid construction zone

Tesla’s Full Self-Driving suite continues to improve: a recent video shows a Model 3 using Navigate on Autopilot to shift safely away from a makeshift lane of construction cones.

Tesla owner-enthusiast Jeremy Greenlee was traveling through a highway construction zone in his Model 3. The zone contained a makeshift lane of construction cones to the vehicle’s left.

To avoid any collision with the cones, the vehicle used the driver-assist system to shift one lane to the right automatically. The maneuver removed any risk of contact with the dense row of cones to the car’s left, which could have caused hundreds of dollars in cosmetic damage.

AI researchers condemn predictive crime software, citing racial bias and flawed methods

A collective of more than 1,000 researchers, academics and experts in artificial intelligence is speaking out against soon-to-be-published research that claims to use neural networks to “predict criminality.” At the time of writing, more than 50 employees working on AI at companies like Facebook, Google and Microsoft had signed on to an open letter opposing the research and imploring its publisher to reconsider.

The controversial research is set to be highlighted in an upcoming book series by Springer, the publisher of Nature. Its authors make the alarming claim that their automated facial recognition software can predict whether a person will become a criminal, citing the utility of such work in law enforcement applications for predictive policing.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Harrisburg University professor and co-author Nathaniel J.S. Ashby said.

Reverse-Engineering of Human Brain Likely by 2030, Expert Predicts

Circa 2010

Updated at 18:30 EST to correct the timeline of the prediction to 2030 from 2020.

Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity Is Near. It would be the first step toward creating machines […].
