How will future AI systems make the most ethical choices for all of us?
Artificial intelligence is already making decisions in the fields of business, health care, and manufacturing. But AI algorithms generally still get help from people applying checks and making the final call.
What would happen if AI systems had to make decisions independently, including ones that could mean life or death for humans?
Unlike humans, robots lack a moral conscience and follow the “ethics” programmed into them. At the same time, human morality is highly variable. The “right” thing to do in any situation will depend on who you ask.
What does it mean when someone calls you smart or intelligent? According to developmental psychologist Howard Gardner, it could mean one of eight things. In this video interview, Dr. Gardner addresses his eight classifications for intelligence: linguistic, logical-mathematical, musical, spatial, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic.
HOWARD GARDNER: Howard Gardner is a developmental psychologist and the John H. and Elisabeth A. Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education. He holds positions as Adjunct Professor of Psychology at Harvard University and Senior Director of Harvard Project Zero. Among numerous honors, Gardner received a MacArthur Prize Fellowship in 1981. In 1990, he was the first American to receive the University of Louisville’s Grawemeyer Award in Education, and in 2000 he received a Fellowship from the John S. Guggenheim Memorial Foundation. In 2005 and again in 2008 he was selected by Foreign Policy and Prospect magazines as one of the 100 most influential public intellectuals in the world. He has received honorary degrees from twenty-two colleges and universities, including institutions in Ireland, Italy, Israel, and Chile.

The author of over twenty books translated into twenty-seven languages, and several hundred articles, Gardner is best known in educational circles for his theory of multiple intelligences, a critique of the notion that there exists but a single human intelligence that can be assessed by standard psychometric instruments. During the past twenty-five years, he and colleagues at Project Zero have been working on the design of performance-based assessments, education for understanding, and the use of multiple intelligences to achieve more personalized curriculum, instruction, and assessment. In the mid-1990s, Gardner and his colleagues launched The GoodWork Project. “GoodWork” is work that is excellent in quality, personally engaging, and exhibits a sense of responsibility with respect to implications and applications. Researchers have examined how individuals who wish to carry out good work succeed in doing so during a time when conditions are changing very quickly, market forces are very powerful, and our sense of time and space is being radically altered by technologies such as the web. Gardner and colleagues have also studied curricula.

Among his books are The Disciplined Mind: Beyond Facts and Standardized Tests, The K-12 Education that Every Child Deserves (Penguin Putnam, 2000), Intelligence Reframed (Basic Books, 2000), Good Work: When Excellence and Ethics Meet (Basic Books, 2001), Changing Minds: The Art and Science of Changing Our Own and Other People’s Minds (Harvard Business School Press, 2004), and Making Good: How Young People Cope with Moral Dilemmas at Work (Harvard University Press, 2004; with Wendy Fischman, Becca Solomon, and Deborah Greenspan). These books are available through the Project Zero eBookstore. Currently Gardner continues to direct the GoodWork Project, which is concentrating on issues of ethics with secondary and college students. In addition, he co-directs the GoodPlay and Trust projects; a major current interest is the way in which ethics are being affected by the new digital media. In 2006 Gardner published Multiple Intelligences: New Horizons, The Development and Education of the Mind, and Howard Gardner Under Fire. In Howard Gardner Under Fire, Gardner’s work is examined critically; the book includes a lengthy autobiography and a complete bibliography. In the spring of 2007, Five Minds for the Future was published by Harvard Business School Press. Responsibility at Work, which Gardner edited, was published in the summer of 2007.
TRANSCRIPT: Howard Gardner: Currently I think there are eight intelligences that I’m very confident about and a few more that I’ve been thinking about. I’ll share that with our audience. The first two intelligences are the ones which IQ tests and other kinds of standardized tests valorize, and as long as we know there are only two out of eight, it’s perfectly fine to look at them. Linguistic intelligence is how well you’re able to use language. It’s a kind of skill that poets have, and other kinds of writers; journalists tend to have linguistic intelligence, as do orators. The second intelligence is logical-mathematical intelligence. As the name implies, logicians, mathematicians…Read the full transcript at https://bigthink.com/videos/howard-gardner-on-the-eight-intelligences
In this video I explain why free will is incompatible with the currently known laws of nature and why the idea makes no sense anyway. However, you don’t need free will to act responsibly and to live a happy life, and I will tell you why.
For example, scientists recently treated a patient’s severe depression with a neural implant that zaps her brain 300 times per day and, she says, has allowed her to spontaneously laugh and feel joy for the first time in years. Of course, the treatment requires an electrode implanted deep into the brain, which for now restricts it to the most extreme medical cases — but as brain interface tech inexorably becomes more advanced and widely available, there’s no reason such a device couldn’t become a consumer gadget as well.
At the research’s current trajectory, experts told Futurism, the tech could conceivably hit the market in just a few years. But what we don’t know is what it will mean for us, psychologically as individuals and sociologically as a society, when we can experience genuine pleasure from the push of a button. And all those questions become even more complex, of course, when applied to the messy world of sex.
“A big question that remains unanswered is whether sextech will ultimately become a complement to our sex lives or a substitute,” Kinsey Institute research fellow Justin Lehmiller, an expert on sex and psychology, told Futurism.
Battling bias. If I’ve been a little MIA this week, it was because I spent Monday and Tuesday in Boston for Fortune’s inaugural Brainstorm A.I. gathering. It was a fun and wonky couple of days diving into artificial intelligence and machine learning, technologies that—for good or ill—seem increasingly likely to shape not just the future of business, but the world at large.
There are a lot of good and hopeful things to be said about A.I. and M.L., but there’s also a very real risk that the technologies will perpetuate biases that already exist, and even introduce new ones. That was the subject of one of the most engrossing discussions of the event, by a panel that was—as pointed out by moderator, guest co-chair, and Smart Eye deputy CEO Rana el Kaliouby—composed entirely of women.
One of the scariest parts of bias in A.I. is how wide and varied the potential effects can be. Sony Group’s head of A.I. ethics office Alice Xiang gave the example of a self-driving car that’s been trained too narrowly in what it recognizes as a human reason to jam on the brakes. “You need to think about being able to detect pedestrians—and ensure that you can detect all sorts of pedestrians and not just people that are represented dominantly in your training or test set,” said Xiang.
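One concrete way to surface the kind of gap Xiang describes is to break detection metrics down by subgroup rather than reporting a single aggregate number. Below is a minimal sketch of that idea; the group labels and counts are invented purely for illustration, not drawn from any real dataset or detector.

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, detected) pairs for pedestrians
# that appeared in a held-out test set. The groups and counts are made up
# solely to illustrate the per-group breakdown.
results = (
    [("adult", True)] * 460 + [("adult", False)] * 40
    + [("child", True)] * 35 + [("child", False)] * 15
    + [("wheelchair_user", True)] * 18 + [("wheelchair_user", False)] * 12
)

def recall_by_group(records):
    """Return overall detection recall and a dict of per-group recall."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in records:
        totals[group] += 1
        hits[group] += detected  # bool counts as 0/1
    overall = sum(hits.values()) / sum(totals.values())
    per_group = {g: hits[g] / totals[g] for g in totals}
    return overall, per_group

overall, per_group = recall_by_group(results)
print(f"overall recall: {overall:.2%}")
for group, r in sorted(per_group.items(), key=lambda kv: kv[1]):
    # Flag groups that fall well below the aggregate number.
    flag = "  <-- check representation in training/test data" if r < overall - 0.05 else ""
    print(f"  {group:16s} {r:.2%}{flag}")
```

The same breakdown works for precision or miss rate; the point is simply that an aggregate metric can look healthy while groups that are underrepresented in the training or test data fare much worse.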
Imagine a world in which smart packaging for supermarket-ready meals updates you in real time on carbon footprints, gives live warnings about product recalls, and issues instant safety alerts when allergens are detected unexpectedly in the factory.
But how much extra energy would be used powering such a system? And what if an accidental alert meant you were told to throw away your food for no reason?
These are some of the questions asked by a team of researchers, including a Lancaster University Lecturer in Design Policy and Futures Thinking, who—by creating objects from a “smart” imaginary new world—are looking at the ethical implications of using artificial intelligence in the food sector.
I think intelligent, tool-making life is rare, but there is plenty of room for those far, far in advance of us. Robert Bradbury, who thought up Matrioshka Brains (M-Brains), said he did not think truly hyper-advanced entities would bother communicating with us. Being able to process the entire history of human thought in a few millionths of a second puts them further away from us than we are from nematodes. But then, that might not be giving them enough credit for their intelligence and resources, as they might wish to see how well their simulations have done compared to reality.
“De-Extinction” Biotechnology & Conservation Biology — Ben Novak, Lead Scientist Revive & Restore
Ben Novak is Lead Scientist at Revive & Restore (https://reviverestore.org/), a California-based non-profit that works to bring biotechnology to conservation biology with the mission to enhance biodiversity through the genetic rescue of endangered and extinct animals (https://reviverestore.org/what-we-do/ted-talk/).
Ben collaboratively pioneers new tools for genetic rescue and de-extinction, helps shape the genetic rescue efforts of Revive & Restore, and leads its flagship project, The Great Passenger Pigeon Comeback, working with collaborators and partners to restore the ecology of the Passenger Pigeon to the eastern North American forests. Ben uses his training in ecology and ancient-DNA lab work to contribute, hands-on, to the sequencing of the extinct Passenger Pigeon genome and to study important aspects of its natural history (https://www.youtube.com/watch?v=pK2UlLsHkus&t=1s).
Thankfully, there is a growing effort toward AI For Good.
This latest mantra entails ways to try to make sure that the advances in AI are being applied for the overall betterment of mankind. These are assuredly laudable endeavors, and it is crucial that the technology underlying AI is aimed and deployed in an appropriate and positive fashion (for my coverage of the burgeoning realm of AI Ethics, see the link here).
Unfortunately, whether we like it or not, there is the ugly side of the coin too, namely the despicable AI For Bad.
One of the deepest realizations of the scientific understanding of the world that emerged in the 18th and 19th centuries is that the world is changing, that it has been radically different in the past, that it can be radically different in the future, and that such changes could spell the end of humanity as we know it. An added twist arrived in the 20th century: we could ourselves be the cause of our demise. In the late 20th century, an interdisciplinary field studying global catastrophic and existential risks emerged, driven by philosophical concern about the moral weight of such risks and the realization that many such risks show important commonalities that may allow us as a species to mitigate them. For example, much of the total harm from nuclear wars, supervolcanic eruptions, meteor impacts, and some biological risks comes from global agricultural collapse. This talk is an overview of the world of low-probability, high-impact risks and their overlap with questions of complexity in the systems generating or responding to them. Understanding their complex dynamics may be a way of mitigating them and ensuring a happier future.