
For years, Brent Hecht, an associate professor at Northwestern University who studies AI ethics, felt like a voice crying in the wilderness. When he entered the field in 2008, “I recall just agonizing about how to get people to understand and be interested and get a sense of how powerful some of the risks [of AI research] could be,” he says.

To be sure, Hecht wasn’t—and isn’t—the only academic studying the societal impacts of AI. But the group is small. “In terms of responsible AI, it is a sideshow for most institutions,” Hecht says. In the past few years, though, that has begun to change. The urgency of AI’s ethical reckoning has only increased since Minneapolis police killed George Floyd, shining a light on AI’s role in discriminatory police surveillance.

This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.

But as millions of animals continue to be used in biomedical research each year, and new legislation calls on federal agencies to reduce and justify their animal use, some have begun to argue that it’s time to replace the three Rs (replacement, reduction, and refinement) themselves. “It was an important advance in animal research ethics, but it’s no longer enough,” Tom Beauchamp told attendees last week at a lab animal conference.


Science talks with two experts in animal ethics who want to go beyond the three Rs.



Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.

For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.

Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.

By Valentina Lagomarsino figures by Sean Wilson

Nearly four months ago, Chinese researcher He Jiankui announced that he had edited the genes of twin babies with CRISPR. CRISPR, also known as CRISPR/Cas9, can be thought of as “genetic scissors” that can be programmed to edit DNA in any cell. Last year, scientists used CRISPR to cure dogs of Duchenne muscular dystrophy. This was a huge step forward for gene therapies, suggesting that CRISPR could one day treat otherwise incurable diseases. However, a global community of scientists believes it is premature to use CRISPR in human babies because of inadequate scientific review and a lack of international consensus regarding the ethics of when and how this technology should be used.

Early regulation of gene-editing technology.

What does this have to do with AI self-driving cars?

AI Self-Driving Cars Will Need to Make Life-or-Death Judgements

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI of self-driving cars is the need for the AI to make “judgments” about driving situations, ones that involve life-and-death matters.

What police would do with the information has yet to be determined. The head of WMP told New Scientist they won’t be preemptively arresting anyone; instead, the idea would be to use the information to provide early intervention from social or health workers to help keep potential offenders on the straight and narrow or protect potential victims.

But data ethics experts have voiced concerns that the police are stepping into an ethical minefield they may not be fully prepared for. Last year, WMP asked researchers at the Alan Turing Institute’s Data Ethics Group to assess a redacted version of the proposal, and last week they released an ethics advisory in conjunction with the Independent Digital Ethics Panel for Policing.

While the authors applaud the force for attempting to develop an ethically sound and legally compliant approach to predictive policing, they warn that the ethical principles in the proposal are not developed enough to deal with the broad challenges this kind of technology could throw up, and that “frequently the details are insufficiently fleshed out and important issues are not fully recognized.”

Daily life during a pandemic means social distancing and finding new ways to remotely connect with friends, family and co-workers. And as we communicate online and by text, artificial intelligence could play a role in keeping our conversations on track, according to new Cornell University research.

Humans having difficult conversations said they trusted artificially intelligent systems—the “smart” reply suggestions in texts—more than the people they were talking to, according to a new study, “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust,” published online in the journal Computers in Human Behavior.

“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the system,” said Jess Hohenstein, a doctoral student in the field of information science and the paper’s first author. “This introduces a potential to take AI and use it as a mediator in our conversations.”

Dr. Ezekiel Emanuel, an American oncologist and bioethicist who is a senior fellow at the Center for American Progress, Vice Provost for Global Initiatives at the University of Pennsylvania, and chair of its Department of Medical Ethics and Health Policy, said on MSNBC on Friday, March 20, that Tesla and SpaceX CEO Elon Musk told him it would probably take 8–10 weeks to get ventilator production started at his Tesla and SpaceX factories.

I reached out to Musk for clarification on that topic and he replied that, “We have 250k N95 masks. Aiming to start distributing those to hospitals tomorrow night. Should have over 1000 ventilators by next week.” With medical supplies such as these being among the biggest bottlenecks in the COVID-19 response in the United States (as well as elsewhere)—a shortage that is already having a very real effect on medical professionals and patient care—the support will surely be received with much gratitude. That said, while there has been much attention put on the expected future need for ventilators, very few places reportedly have a shortage of them right now. In much greater need at the moment are simpler supplies like N95 masks, which is presumably why Tesla/SpaceX is providing 250,000 of them.

Dr. Emanuel also said in the “Morning Joe” segment that the US probably needs 8–12 weeks (2–3 months) of social distancing to deal with COVID-19 as a society. However, he also expects that the virus will come back and we’ll basically have a roller coaster of “social restrictions, easing up, social restrictions, easing up … to try to smooth out the demand on the health care system.”