
Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning.

Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in initial funding, OpenAI is freed from the need for financial gains, allowing it to place itself on sky-high moral ground.

By not having to answer to industry or academia, OpenAI hopes not just to develop digital intelligence, but also to guide research along an ethical route that, according to its inaugural blog post, “benefits humanity as a whole.”

Read more

US Army’s report visualises augmented soldiers & killer robots.


The US Army’s recent report “Visualizing the Tactical Ground Battlefield in the Year 2050” describes a number of future war scenarios that raise vexing ethical dilemmas. Among the many tactical developments envisioned by the authors, a group of experts brought together by the US Army Research Laboratory, three stand out as both plausible and fraught with moral challenges: augmented humans, directed-energy weapons, and autonomous killer robots. The first two technologies affect humans directly, and therefore present both military and medical ethical challenges. The third development, robots, would replace humans, and thus poses hard questions about implementing the law of war without any attending sense of justice.

Augmented humans. Drugs, brain-machine interfaces, neural prostheses, and genetic engineering are all technologies that may be used in the next few decades to enhance the fighting capability of soldiers, keep them alert, help them survive longer on less food, alleviate pain, and sharpen and strengthen their cognitive and physical capabilities. All raise serious ethical and bioethical difficulties.

Drugs and prosthetics are medical interventions. Their purpose is to save lives, alleviate suffering, or improve quality of life. When used for enhancement, however, they are no longer therapeutic. Soldiers designated for enhancement would not be sick. Rather, commanders would seek to improve a soldier’s war-fighting capabilities while reducing risk to life and limb. This raises several related questions.

Read more

Despite more than a thousand artificial-intelligence researchers signing an open letter this summer in an effort to ban autonomous weapons, Business Insider reports that China and Russia are in the process of creating self-sufficient killer robots, which in turn is putting pressure on the Pentagon to keep up.

“We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield,” U.S. Deputy Secretary of Defense Robert Work said during a national security forum on Monday.

Work added, “[Gerasimov] said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.’”

Read more

In the various incarnations of Douglas Adams’ Hitchhiker’s Guide To The Galaxy, a sentient robot named Marvin the Paranoid Android serves on the starship Heart of Gold. Because he is never assigned tasks that challenge his massive intellect, Marvin is horribly depressed, always quite bored, and a burden to the humans and aliens around him. But he does write nice lullabies.

While Marvin is a fictional robot, scholar and author David Gunkel predicts that sentient robots will soon be a fact of life, and that mankind needs to start thinking about how we’ll treat such machines, now and in the future.

For Gunkel, the question is about moral standing and how we decide whether something does or does not have it. As an example, Gunkel notes that our children have moral standing, while a rock or a smartphone may not. From there, he said, the question becomes: where and how do we draw the line to decide who is inside and who is outside the moral community?

“Traditionally, the qualities for moral standing are things like rationality, sentience (and) the ability to use languages. Every entity that has these properties generally falls into the community of moral subjects,” Gunkel said. “The problem, over time, is that these properties have changed. They have not been consistent.”

To illustrate, Gunkel cited Greco-Roman times, when land-owning males were allowed to exclude their wives and children from moral consideration and treat them essentially as property. As we have grown more enlightened, Gunkel points to the animal rights movement which, he said, has lowered the bar for inclusion in moral standing to the questions “Do they suffer?” and “Can they feel?” The qualifying properties, he said, are no longer set as high in the hierarchy as they once were.

While the properties approach has worked well for about 2,000 years, Gunkel noted that it has generated further questions that need to be answered. On the ontological level, those questions include, “How do we know which properties qualify, and how do we know when we’ve set the bar too low or too high? Which properties count the most?”, and, more importantly, “Who gets to decide?”

“Moral philosophy has been a struggle over (these) questions for 2,000 years and, up to this point, we don’t seem to have gotten it right,” Gunkel said. “We seem to have gotten it wrong more often than we have gotten it right… making exclusions that, later on, are seen as being somehow problematic and dangerous.”

Beyond the ontological issues, Gunkel notes there are epistemological questions to be addressed as well. Even if we settled on a set of properties and were satisfied with them, those properties are generally internal states, such as consciousness or sentience; they are not something we can observe directly, because they happen inside the cranium or inside the entity, he said. What we have to do is look at external evidence and ask, “How do I know that another entity is a thinking, feeling thing like I assume myself to be?”

To answer that question, Gunkel noted the best we can do is base our judgments on behavior. The question then becomes: if you create a machine that is able to simulate pain, as we’ve been able to do, do you assume the robot can feel pain? Citing Daniel Dennett’s essay Why You Can’t Make a Computer That Feels Pain, Gunkel said the reason we can’t build a computer that feels pain isn’t that we can’t engineer a mechanism; it’s that we don’t know what pain is.

“We don’t know how to make pain computable. It’s not because we can’t do it computationally, but because we don’t even know what we’re trying to compute,” he said. “We have assumptions and think we know what it is and experience it, but the actual thing we call ‘pain’ is a conjecture. It’s always a projection we make based on external behaviors. How do we get legitimate understanding of what pain is? We’re still reading signs.”

According to Gunkel, the approaching challenge in our everyday lives is, “How do we decide if they’re worthy of moral consideration?” The answer is crucial, because as we engineer and build these devices, we must still decide what to do with “it” as an entity. This concept was explored in an episode of the PBS Idea Channel based on Gunkel’s book, The Machine Question.

To address that issue, Gunkel said society should consider the ethical outcomes of the artificial intelligence we create at the design stage. Citing autonomous weapons as an example, he said the question is not whether we should use such a weapon, but whether we should design these things at all.

“After these things are created, what do we do with them, how do we situate them in our world? How do we relate to them once they are in our homes and in our workplace? When the machine is there in your sphere of existence, what do we do in response to it?” Gunkel said. “We don’t have answers to that yet, but I think we need to start asking those questions in an effort to begin thinking about what is the social status and standing of these non-human entities that will be part of our world living with us in various ways.”

As he looks to the future, Gunkel predicts law and policy will have a major effect on how artificial intelligence is regarded in society. Citing decisions stating that corporations are “people,” he noted that the same types of precedents could carry over to designed systems that are autonomous.

“I think the legal aspect of this is really important, because I think we’re making decisions now, well in advance of these kind of machines being in our world, setting a precedent for the receptivity to the legal and moral standing of these other kind of entities,” Gunkel said.

“I don’t think this will just be the purview of a few philosophers who study robotics. Engineers are going to be talking about it. AI scientists have got to be talking about it. Computer scientists have got to be talking about it. It’s got to be a fully interdisciplinary conversation and it’s got to roll out on that kind of scale.”


“Our trust in complex systems stems mostly from understanding their predictability, whether it is nuclear reactors, lathe machines, or 18-wheelers; or of course, AI. If complex systems are not open to be used, extended, and learned about, they end up becoming yet another mysterious thing for us, ones that we end up praying to and mythifying. The more open we make AI, the better.”

Read more

It seems like every day we’re warned about a new AI-related threat that could ultimately bring about the end of humanity. According to author and Oxford professor Nick Bostrom, those existential risks aren’t so black and white, and an individual’s ability to influence them might surprise you.

Image Credit: TED

Bostrom defines an existential risk as the extinction of Earth-originating intelligent life or the permanent and drastic destruction of our potential for future development, but he also notes that there is no single methodology applicable to all the different existential risks (as more technically elaborated upon in this Future of Humanity Institute study). Rather, he considers it an interdisciplinary endeavor.

“If you’re wondering about asteroids, we have telescopes we can study them with; we can look at past crater impacts and derive hard statistical data on that,” he said. “We find that the risk from asteroids is extremely small, and likewise for a few of the other risks that arise from nature. But other really big existential risks are not in any direct way susceptible to this kind of rigorous quantification.”
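To make the style of quantification Bostrom alludes to a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the crater count and record length are placeholder assumptions rather than survey data, and it assumes large impacts follow a simple Poisson process, which is a simplification of how such estimates are actually produced.

import math

# Illustrative sketch only: the numbers below are placeholder assumptions, not real survey data.
# It shows the general idea of inferring an impact rate from a crater record,
# assuming large impacts follow a Poisson process.

observed_large_impacts = 3          # hypothetical count of large craters in the record
record_length_years = 10_000_000    # hypothetical span of the geological record examined

rate_per_year = observed_large_impacts / record_length_years

# Under the Poisson assumption, probability of at least one such impact in the next century:
p_next_century = 1 - math.exp(-rate_per_year * 100)

print(f"Estimated annual impact rate: {rate_per_year:.2e}")
print(f"P(at least one large impact in the next 100 years): {p_next_century:.2e}")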

In Bostrom’s eyes, the most significant risks we face arise from human activity, particularly the potentially dangerous technological discoveries that await us in the future. Though he believes there’s no way to quantify the probability of humanity being destroyed by a super-intelligent machine, he argues that a more important variable is human judgment. To improve assessment of existential risk, Bostrom said we should think carefully about how these judgments are produced and whether the biases that affect them can be avoided.

“If your task is to hammer a nail into a board, reality will tell you if you’re doing it right or not. It doesn’t really matter if you’re a Communist or a Nazi or whatever crazy ideologies you have, you’ll learn quite quickly if you’re hammering the nail in wrong,” Bostrom said. “If you’re wrong about what the major threats are to humanity over the next century, there is no reality check to tell you if you’re right or wrong. Any weak bias you might have might distort your belief.”

Noting that humanity doesn’t really have any policy designed to steer a particular course into the future, Bostrom said many existential risks arise from global coordination failures. While he believes society might one day evolve into a unified global government, the question of when this uniting occurs will hinge on individual contributions.

“Working toward global peace might seem the best project, but it’s very difficult to make a big difference there if you’re a single individual or a small organization. Perhaps your resources would be better put to use if they were focused on some problem that is much more neglected, such as the control problem for artificial intelligence,” Bostrom said. “(For example) do the technical research to ensure that, if we gain the ability to create superintelligence, the outcome will be safe and beneficial. That’s where an extra million dollars in funding or one extra very talented person could make a noticeable difference… far more than doing general research on existential risks.”

Looking to the future, Bostrom feels there is an opportunity to show that we can do serious research to change global awareness of existential risks and bring them into a wider conversation. While that research doesn’t assume the human condition is fixed, there is a growing ecosystem of people who are genuinely trying to figure out how to save the future, he said. As an example of how much influence one can have in reducing existential risk, Bostrom noted that a lot more people in history have believed they were Napoleon, yet there was actually only one Napoleon.

“You don’t have to try to do it yourself… it’s usually more efficient for each of us to do whatever we specialize in. For most people, the most efficient way to contribute to eliminating existential risk would be to identify the most effective organizations working on this and then support those,” Bostrom said. “Given the values on the line, in terms of how many happy lives could exist in humanity’s future, even a very small probability of impact would probably be worthwhile pursuing.”

Image Credit: LinkedIn

As the line between tabloid media and mainstream media becomes more diffuse, news topics such as Ebola, pit bulls, Deflategate, and Donald Trump can generate a cocktail of public panic, scrutiny, and scorn before the news cycle moves on to the next sensational headline. According to robotics expert and self-proclaimed “Robot Psychiatrist” Dr. Joanne Pransky, the same phenomenon has happened in robotics, and it can shape public perception and, by extension, the future development of robots and AI.

“The challenge, since robotics is just starting to come into the mainstream, is that most of the country is ignorant. So, if you believe what you read, then I think people have a very negative and inaccurate picture (of robotics),” Pransky said. “I spend a lot of time bashing negative headlines, such as ‘ROBOT KILLS HUMAN,’ when actually the human killed himself by not following proper safety standards. A lot of things are publicized about robotics, but there’s nothing about the robot in the article. It leads people on the wrong path.”

Hedging the Negative Media

Pransky has spent much of her career trying to present an accurate depiction of robotics and to provoke thoughtful discussion that separates fact from science fiction, as elaborated upon in this well-written TechRepublic article. To that end, she has focused on educating the public about the real issues facing robots and robotics. Showing an actual robot in action, she believes, is the best way to convey both the potential and the limits of robotics.

Pransky noted that YouTube videos, such as the Boston Dynamics BigDog robot videos, are great for presenting robotics in a positive light. Yet a similar video, showing a human kicking the BigDog robot to test its stability, can also present a negative image to the general public: that robots need to be kicked around because they’re dangerous, unintelligent, or won’t work otherwise.

By contrast, futuristic feature films such as “Her” and “Ex Machina”, while still presenting darker plots, are a robot psychiatrist’s dream, she added. “What is it like to have a robot live with us? And to (have that robot) be a nanny and a lover, and how will it change the whole family dynamic? These things, to me, are not science fiction, they’re inevitable,” Pransky said. “Whether or not it occurs exactly the way you see it in science fiction in our lifetime is a different question. To me, it’s not a question of when… it’s happening.”

A Call for More Logic — and Empathy

The area that Pransky believes is most misconstrued is the media’s depiction of autonomous weapons in the military. The biggest problem, she said, is that most people now believe there are completely autonomous weapons in use. What the public overlooks is that, even if the military did have autonomous robots, it doesn’t necessarily mean those machines would be 100 percent unsupervised by humans in the decision-making. The larger issue, she believes, is the set of related moral questions, which need to be discussed.

“I really believe, when it comes to humans, the most important thing is not the future of intelligence and AI, it is social intelligence and emotional intelligence. If we’re going to be working with entities that technology has merged together, how are we going to get it right with something that’s not 100% biological versus nonbiological?” she said. “I think there should be more emphasis on issues like moral laws and stages of moral development. I think that is very important in any of these discussions.”

On a broader scale, the recent concerns about uncontrolled AI expressed by Stephen Hawking, Elon Musk, and Bill Gates have cast a lot of negative light on robotics, Pransky said. She recognizes Musk, Hawking, and Gates as some of the top minds in the world on the topics of future AI and robotics, but notes that “they’re not sociologists or psychologists”. Given that, Pransky said the public should take their views about the future of robotics with a grain of salt.

Looking to the future, Pransky sees the need to address the public’s concerns about robotics before the industry has a “Pearl Harbor moment”. She believes that robots are still “out-of-sight, out-of-mind” for much of the general public, and thinks lawmakers need to consider how the robotics industry will develop before the urgent, last-minute need arises.

“Recently, computers stopped United Airlines on the same day they stopped the stock market, and we paid more attention. It’s human nature that, unless there is something catastrophic, we don’t respond as well or as quickly, but we don’t have to be so ‘doomsday’ about it,” Pransky said. “Robotic law is a very huge deal. We absolutely need to bring laws and regulation to federal attention.”

Stuart Russell also signed the letter, but he says his view is less apocalyptic. He says that, until now, the field of artificial intelligence has been singularly focused on giving robots the ability to make “high-quality” decisions.

“At the moment, we don’t know how to give the robot what you might call human values,” he says.

But Russell believes that as this problem becomes clearer, it’s only natural that people will start to focus their energy on solving it.

Read more

Dr. Michael Fossel comments on the recent BioViva announcement of the first human gene therapy against aging.


The other day, a friend of mine, Liz Parrish, the CEO and founder of BioViva, made quite a splash when she injected herself with a viral vector containing genes for both telomerase and FST (follistatin). Those in favor of what Liz did applaud her for her courage and her ability to move quickly and effectively in a landscape where red tape and regulatory concerns have – in the minds of some – impeded innovation and medical care. Those opposed to what Liz did have criticized her for moving too rapidly without sufficient concern for safety, ethics, or (from some critics) scientific rationale.

Many people have asked me to comment, both as an individual and as the founder of Telocyte. This is for two reasons. For one thing, I was the first person to ever advocate the use of telomerase as a clinical intervention, in discussions, in published journal articles, and in published books. My original JAMA articles (1997 and 1998), my first book on the topic (1996), and my textbook (2004) all clearly explained both the rationale for and the implications of using telomerase as a therapeutic intervention to treat age-related disease. For another thing, Liz knew that our biotech firm, Telocyte, intends to do almost the same thing, but with a few crucial differences: we will only be using telomerase (hTERT), and we intend to pursue human trials that have FDA clearance, have full IRB agreement, and meet GMP (Good Manufacturing Practice) standards.

We cannot help but applaud Liz’s courage in using herself as a subject, a procedure with a long (and occasionally checkered) history in medical science. Using herself as the subject undercuts much of the ethical criticism that would be more pointed if she used other patients. Like many others, we also fully understand the urgent need for more effective therapeutic interventions: patients are not only suffering, but dying as we try to move ahead. In the case of Alzheimer’s disease, for example (our primary therapeutic target at Telocyte), there are NO currently effective therapies, a history of universal failure in human trials for experimental therapies, and an enormous population of patients who are currently losing their souls and their lives to this disease. A slow, measured approach to finding a cure is scarcely welcome in such a context.

Read more