
Battling bias. If I’ve been a little MIA this week, it was because I spent Monday and Tuesday in Boston for Fortune’s inaugural Brainstorm A.I. gathering. It was a fun and wonky couple of days diving into artificial intelligence and machine learning, technologies that—for good or ill—seem increasingly likely to shape not just the future of business, but the world at large.

There are a lot of good and hopeful things to be said about A.I. and M.L., but there’s also a very real risk that the technologies will perpetuate biases that already exist, and even introduce new ones. That was the subject of one of the most engrossing discussions of the event, by a panel that was—as pointed out by moderator Rana el Kaliouby, guest co-chair and Deputy CEO of Smart Eye—composed entirely of women.

One of the scariest parts of bias in A.I. is how wide and varied the potential effects can be. Alice Xiang, head of Sony Group’s A.I. ethics office, gave the example of a self-driving car that’s been trained too narrowly in what it recognizes as a human reason to jam on the brakes. “You need to think about being able to detect pedestrians—and ensure that you can detect all sorts of pedestrians and not just people that are represented dominantly in your training or test set,” said Xiang.
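One concrete way to surface the problem Xiang describes is to report a detector’s performance per subgroup rather than as a single aggregate number. The sketch below is illustrative only—the subgroups and figures are hypothetical, not from the talk:

```python
# Minimal sketch of a per-subgroup audit for a pedestrian detector.
# Instead of one aggregate detection rate, compute recall separately
# for each subgroup of ground-truth pedestrians.

def recall_by_group(results):
    """results: list of (subgroup, detected) pairs, one per ground-truth pedestrian."""
    totals, hits = {}, {}
    for group, detected in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation data: the aggregate rate (~0.83) would hide
# that children are detected far less reliably than adults.
results = ([("adult", True)] * 95 + [("adult", False)] * 5
           + [("child", True)] * 70 + [("child", False)] * 30)
print(recall_by_group(results))  # adults ≈ 0.95, children ≈ 0.70
```

A gap like the one above between subgroups is exactly the signal that a training or test set under-represents some kinds of pedestrians.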

Imagine a world in which smart packaging for supermarket-ready meals updates you in real time about carbon footprints, gives live warnings on product recalls, and issues instant safety alerts when allergens are unexpectedly detected in the factory.

But how much extra energy would be used powering such a system? And what if an accidental alert meant you were told to throw away your food for no reason?

These are some of the questions asked by a team of researchers, including a Lancaster University Lecturer in Design Policy and Futures Thinking, who—by creating objects from a “smart” imaginary new world—are looking at the ethical implications of using artificial intelligence in the food sector.

I think intelligent tool-making life is rare, but there is plenty of room for civilizations far, far in advance of us. Robert Bradbury, who thought up Matrioshka Brains (M-Brains), said he did not think truly hyper-advanced entities would bother communicating with us: being able to process the entire history of human thought in a few millionths of a second puts them further away from us than we are from nematodes. But then that might not be giving them credit for their intelligence and resources, as they might wish to see how well their simulations have done compared to reality.


Foresight Intelligent Cooperation Group.

2021 program & apply to join: https://foresight.org/intelligent-cooperation/

Anders Sandberg, Oxford University.

Game Theory of Cooperating with Extraterrestrial Intelligence and Future Civilizations.

“De-Extinction” Biotechnology & Conservation Biology — Ben Novak, Lead Scientist Revive & Restore


Ben Novak is Lead Scientist at Revive & Restore (https://reviverestore.org/), a California-based non-profit that works to bring biotechnology to conservation biology with the mission to enhance biodiversity through the genetic rescue of endangered and extinct animals (https://reviverestore.org/what-we-do/ted-talk/).

Ben collaboratively pioneers new tools for genetic rescue and de-extinction, helps shape the genetic rescue efforts of Revive & Restore, and leads its flagship project, The Great Passenger Pigeon Comeback, working with collaborators and partners to restore the ecology of the Passenger Pigeon to the eastern North American forests. Ben uses his training in ecology and ancient-DNA lab work to contribute, hands-on, to the sequencing of the extinct Passenger Pigeon genome and to study important aspects of its natural history (https://www.youtube.com/watch?v=pK2UlLsHkus&t=1s).

Ben’s mission in leading the Great Passenger Pigeon Comeback is to set the standard for de-extinction protocols and considerations in the lab and field. His 2018 review article, “De-extinction,” in the journal Genes, helped to define this new term. More recently, his treatment, “Building Ethical De-Extinction Programs—Considerations of Animal Welfare in Genetic Rescue” was published in December 2019 in The Routledge Handbook of Animal Ethics: 1st Edition.

Ben’s work at Revive & Restore also includes extensive education and outreach, the co-convening of seminal workshops, and helping to develop the Avian and Black-footed Ferret Genetic Rescue programs included in the Revive & Restore Catalyst Science Fund.

Thankfully, there is a growing effort toward AI For Good.

This latest mantra entails ways to try to make sure that the advances in AI are being applied for the overall betterment of mankind. These are assuredly laudable endeavors, and it is crucial that the technology underlying AI be aimed and deployed in an appropriate and positive fashion (for my coverage on the burgeoning realm of AI Ethics, see the link here).

Unfortunately, whether we like it or not, there is the ugly side of the coin too, namely the despicable AI For Bad.

Anders Sandberg, University of Oxford.

One of the deepest realizations of the scientific understanding of the world that emerged in the 18th and 19th century is that the world is changing, that it has been radically different in the past, that it can be radically different in the future, and that such changes could spell the end of humanity as we know it. An added twist arrived in the 20th century: we could ourselves be the cause of our demise. In the late 20th century an interdisciplinary field studying global catastrophic and existential risks emerged, driven by philosophical concern about the moral weight of such risks and the realization that many such risks show important commonalities that may allow us as a species to mitigate them. For example, much of the total harm from nuclear wars, supervolcanic eruptions, meteor impacts and some biological risks comes from global agricultural collapse. This talk is going to be an overview of the world of low-probability, high-impact risks and their overlap with questions of complexity in the systems generating or responding to them. Understanding their complex dynamics may be a way of mitigating them and ensuring a happier future.

Follow us on social media:
https://twitter.com/sfiscience
https://instagram.com/sfiscience
https://facebook.com/santafeinstitute
https://facebook.com/groups/santafeinstitute
https://linkedin.com/company/santafeinstitute

https://complexity.simplecast.com
https://aliencrashsite.org

This post is a collaboration with Dr. Augustine Fou, a seasoned digital marketer, who helps marketers audit their campaigns for ad fraud and provides alternative performance optimization solutions; and Jodi Masters-Gonzales, Research Director at Beacon Trust Network and a doctoral student in Pepperdine University’s Global Leadership and Change program, where her research intersects at data privacy & ethics, public policy, and the digital economy.

The ad industry has gone through a massive transformation since the advent of digital. This is a multi-billion-dollar industry that started out as a way for businesses to bring more market visibility to products and services more effectively, while evolving features that would allow advertisers to garner valuable insights about their customers and prospects. Fast-forward 20 years, and the promise of better ad performance and delivery of the right customers has also created and enabled a rampant environment of massive data sharing, more invasive personal targeting, and higher incidences of consumer manipulation than ever before. It has evolved over time, underneath the noses of business and industry, with benefits realized by a relative few. How did we get here? More importantly, can we curb the path of a burgeoning industry to truly protect people’s data rights?

There was a time when advertising inventory was finite. Long before digital, buying impressions was primarily done through offline publications, television and radio. Premium slots commanded higher CPM (cost per thousand) rates to obtain the most coveted consumer attention. The big advertisers with the deepest pockets largely benefitted from this space by commanding the largest reach.

Many people reject scientific expertise and prefer ideology to facts. Lee McIntyre argues that anyone can and should fight back against science deniers.
Watch the Q&A: https://youtu.be/2jTiXCLzMv4
Lee’s book “How to Talk to a Science Denier” is out now: https://geni.us/leemcintyre

“Climate change is a hoax—and so is coronavirus.” “Vaccines are bad for you.” Many people may believe such statements, but how can scientists and informed citizens convince these ‘science deniers’ that their beliefs are mistaken?

Join Lee McIntyre as he draws on his own experience, including a visit to a Flat Earth convention as well as academic research, to explain the common themes of science denialism.

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and an Instructor in Ethics at Harvard Extension School. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan (Ann Arbor). He has taught philosophy at Colgate University (where he won the Fraternity and Sorority Faculty Award for Excellence in Teaching Philosophy), Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston.

This talk was recorded on 24 August 2021.