
IEEE’s new standards for ethically aligned AI are a start: they focus largely on building ethics and morals into AI and on discouraging the development of autonomous AI weapons. However, without governments and laws on the books, this set of standards is a feel-good document at best. When it comes to morals, values, and not breaking laws, the standards really must be grounded in social and cultural practices, in government, and most importantly in law to ensure they have the buy-in and impact you need. My suggestion to IEEE: please work with governments, the tech industry, and the legal system on this one.


More than 100 experts in artificial intelligence and ethics are attempting to advance public discussion surrounding the ethical considerations of AI.

Read more

But Westworld is more than just entertainment. It raises problems that society will have to face head-on as technology gets more powerful. Here are a couple of the biggest.

1. Can we treat robots with respect?

Westworld raises a moral question — at what point do we have to treat machines in a responsible manner? We’re used to dropping our smartphones on the ground without remorse and throwing our broken gadgets in the trash. We may have to think differently as machines show more human traits.

Read more

Aubrey de Grey and Brian Kennedy debate the motion that “Lifespans are long enough” at Intelligence2. This was a great show, and the results speak for themselves, as do the convincing arguments presented by Brian and Aubrey. If you missed it the first time around earlier this year, you should watch it now.


“What if we didn’t have to grow old and die? The average American can expect to live for 78.8 years, an improvement over the days before clean water and vaccines, when life expectancy was closer to 50, but still not long enough for most of us. So researchers around the world have been working on arresting the process of aging through biotechnology and finding cures to diseases like Alzheimer’s and cancer. What are the ethical and social consequences of radically increasing lifespans? Should we accept a “natural” end, or should we find a cure to aging?”

On February 3rd, 2016, SRF’s Chief Science Officer Aubrey de Grey joined forces with Buck Institute for Research on Aging President/CEO Brian Kennedy to oppose the motion that “Lifespans Are Long Enough”, in a debate hosted at New York’s Kaufman Center by Intelligence2 Debates. The team proposing the motion comprised Paul Root Wolpe, Director of the Emory Center for Ethics, and Ian Ground of the UK’s Newcastle University.

The event included pre- and post-debate audience votes. While both sides gained ground relative to the opening numbers, the side arguing against the motion secured a solid victory, winning over almost twice as many of the initially undecided audience members as their opponents.

I argued in my 2015 paper “Why it matters that you realize you’re in a Computer Simulation” that if our universe is indeed a computer simulation, then that particular discovery should be commonplace among the intelligent lifeforms throughout the universe. The simple calculus is this: (a) if intelligence is in part equivalent to detecting the environment, and (b) the environment is a computer simulation, then (c) eventually nearly all intelligent lifeforms should discover that their environment is a computer simulation. I called this the Savvy Inevitability. In simple terms, if we’re really in a Matrix, we’re supposed to eventually figure that out.

Silicon Valley, tech culture, and most nerds the world over are familiar with the real-world version of the question: are we living in a Matrix? The paper most frequently cited is likely Nick Bostrom’s “Are You Living in a Computer Simulation?” Whether or not everyone agrees about particular simulation ideas, everyone does seem to have an opinion about them.

Recently, the Internet heated up over Elon Musk’s comments at a Vox event, where he shared his hot-tub musings on the simulation hypothesis. Even Bank of America published an analysis of the simulation hypothesis, and, according to Tad Friend in an October 10, 2016 article published in The New Yorker, “two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.”

Read more

Industry leaders in the world of artificial intelligence just announced the Partnership on AI. This exciting new partnership was “established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

The partnership is currently co-chaired by Mustafa Suleyman with DeepMind and Eric Horvitz with Microsoft. Other leaders of the partnership include: FLI’s Science Advisory Board Member Francesca Rossi, who is also a research scientist at IBM; Ralf Herbrich with Amazon; Greg Corrado with Google; and Yann LeCun with Facebook.

Though the initial group members were announced yesterday, the collaboration anticipates increased participation, announcing in their press release that “academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization.”

Read more