I see articles and reports like the following about militaries actually considering fully autonomous missiles, drones armed with missiles, and the like. I have to ask myself what happened to logical thinking.
A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”
The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets on their own, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”
As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is capable of intervening is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
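To make the distinction concrete, here is a minimal, hypothetical sketch of a human-in-the-loop gate. The class and method names are my own invention, not anything from Scharre’s report: the autonomous system may only propose an engagement, a human must approve each one, and a kill switch halts everything at once.

```python
import queue

class HumanInTheLoopController:
    """Hypothetical supervisory gate (names invented for illustration):
    the weapon system may only PROPOSE an engagement; a human must
    approve each one, and a kill switch halts everything at once."""

    def __init__(self):
        self.kill_switch_engaged = False
        self.pending = queue.Queue()  # proposals awaiting human review

    def propose_engagement(self, target):
        # Autonomy ends here: the system queues a proposal, nothing more.
        if not self.kill_switch_engaged:
            self.pending.put(target)

    def human_review(self, approve: bool):
        # The human operator decides each pending proposal; returning a
        # target is the only path to an actual engagement.
        if self.kill_switch_engaged or self.pending.empty():
            return None
        target = self.pending.get()
        return target if approve else None

    def kill_switch(self):
        # One action stops the whole system and discards queued proposals.
        self.kill_switch_engaged = True
        while not self.pending.empty():
            self.pending.get()
```

The structural point is that a fully autonomous weapon is this same loop with human_review removed, so a design failure, hack, or spoofed sensor feed becomes action before anyone can intervene.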
I am not an astronomer or astrophysicist. I have never worked for NASA or JPL. But during my graduate year at Cornell University, I was short on cross-discipline credits, and so I signed up for Carl Sagan’s popular introductory course, Astronomy 101. I was also an amateur photographer, occasionally freelancing for local media—and so the photos shown here are my own.
By the end of the ’70s, Sagan’s star was high and continuing to rise. He was a staple on The Tonight Show with Johnny Carson, soon to be producer and host of the PBS TV series Cosmos, and he had just written Dragons of Eden, which won him a Pulitzer Prize. He also wrote Contact, which later became a blockbuster movie starring Jodie Foster.
Sagan died in 1996, after three bone marrow transplants to compensate for an inability to produce blood cells. Two years earlier, Sagan wrote a book and narrated a film based on a photo taken from space.
Pale Blue Dot is a photograph of Earth taken in February 1990 by Voyager 1, from a distance of 3.7 billion miles (40 times the distance between the Earth and the Sun). At Sagan’s request (and with some risk to the ongoing scientific mission), the space probe was turned around to take this last photo of Earth. In the photo, Earth is less than a pixel in size. Just a tiny dot against the vastness of space, it appears to be suspended in bands of sunlight scattered by the camera lens.
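The caption’s numbers check out: taking the average Earth-Sun distance as roughly 93 million miles, 40 of them comes to about 3.7 billion miles.

```python
AU_MILES = 92.96e6                # average Earth-Sun distance, in miles
distance = 40 * AU_MILES          # Voyager 1's distance when the photo was taken
print(f"{distance / 1e9:.2f} billion miles")  # prints: 3.72 billion miles
```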
Four years later, Sagan wrote a book and narrated the short film, Pale Blue Dot, based on the landmark 1990 photograph. He makes a compelling case for reconciliation among humans and a commitment to care for our shared environment. In just 3½ minutes, he unites humanity, appealing to everyone with a conscience. [Full text]
—Which brings us to a question: How are we doing? Are we getting along now? Are we treating the planet as a shared life-support system, rather than a dumping ground?
Sagan points out that hate and misunderstanding play into so many human interactions. He points to a deteriorating environment and notes that we cannot escape war and pollution by resettling to another place. Most importantly, he forces us to face the fragility of our habitat and the need to protect it. He drives home this point not just by explaining it, but by framing it as an urgent choice between life and death.
It has been 22 years since Sagan wrote and produced Pale Blue Dot. What has changed? Change is all around us, and yet not much has changed. To sort it all out, let’s break it down into technology, our survivable timeline and sociology.
Technology & Cosmology
Since Carl Sagan’s death, we have witnessed the first direct evidence of exoplanets. Several hundred have been observed and we will likely find many hundreds more each year. Some of these are in the habitable zone of their star.
Sagan died about 25 years after the last Apollo Moon mission. It is now 45 years since those missions, and humans are still locked into low Earth orbit. We have sent a few probes to the distant planets and beyond, but the political will and resources to conduct planetary exploration—or even return to the Moon—are weak.
A few private companies are launching humans, satellites, or cargo into space (SpaceX, Virgin Galactic, Blue Origin). Dozens of other private ventures have not yet achieved manned flight or an orbital rendezvous, but it seems likely that some projects will succeed. Liftoff is becoming commonplace—but almost all of these launches are focused on TV, communications, monitoring our environment, or monitoring our enemies. The space program no longer produces the regular breakthroughs and commercial spin-offs that it did throughout the ’70s and ’80s.
Survivable Timeline
Like most scientists, Carl Sagan was deeply concerned about pollution, nuclear proliferation, loss of bio-diversity, war and global warming. In fact, the debate over global warming was just beginning to heat up in Sagan’s last years. Today, there is no debate over global warming. All credible scientists understand that the earth is choking, and that our activities are contributing to our own demise.
In most regions, air pollution is slightly less of a concern than it was in the 1970s, but ground pollution, water pollution, and radiation contamination are all more evident.
Most alarmingly, we humans are more invested than ever in posturing and in killing our neighbors. We fight over land, religion, water, oil, and human rights. We especially fight in the name of our Gods, in the name of national exceptionalism, and in the name of protecting our right to consume disposable luxury gadgets, transient thrills, and family vacations—as if we were prisoners consuming our last meal.
We have an insatiable appetite for raw materials, open spaces, cars and luxury. Yet no one seems to be doing the math. As the vast populations of China and India finally come to the dinner table (2 billion humans), it is clear that they have the wealth to match our gluttony. From where will the land, water, and materials come? And what happens to the environment then? In Beijing, the sky is never blue. Every TV screen is covered in a thick film of dust. On many days, commuters wear filter masks. There is no grass in the parks and no birds in the sky. Something is very wrong. With apologies for a mixed metaphor, the canary is already dead while the jester continues to dance.
Sociology: Man’s Inhumanity to Man
Sagan observed that our leaders are passionate about conquering each other, spilling blood over frequent misunderstandings, giving in to imagined self-importance. None of this has changed.
Regarding our ability to get off of this planet, Sagan said, “Visit? Perhaps… Settle? Not yet.” We still do not possess the technology or resources to settle even a single astronaut away from our fragile home planet. We won’t have both the technology and the will to do so for at least 75 years—and then, only for a tiny community of scientists or explorers. That falls centuries shy of resettling a population.
Hate, zealotry, intolerance and religious fervor are more toxic than ever before
Today, the Earth has a bigger population, and hate and misunderstanding have spread like cancer. Weapons of mass destruction have escaped the restraint of governments, oversight, and safety mechanisms. They are now in the hands of intolerant and radical organizations that believe in martyrdom and lack any desire to coexist within a global community.
Nations, organizations and some individuals possess the technology to kill a million people or more. Without even targeting civilians, a dozen nations can lay waste to the global environment in weeks.
Is it time to revisit Pale Blue Dot? Is it still relevant? The need to teach and heed Carl Sagan’s words has never been more urgent than now.
Postscript:
Carl Sagan probably didn’t like me. When I was his student, I was a jerk.
Sagan was already a TV personality and author when I took Astronomy 101 in 1977. Occasionally, he discussed material from the pages of his just-released Dragons of Eden, or slipped a photo of himself with Johnny Carson into a slide presentation. He was clearly a star attraction during parents’ weekend before classes started.
Indeed, he often used the phrase “billions and billions” even before it became his trademark. Although he seemed mildly amused that people noticed his enunciation and emphasis, he explained that he thought it was a less distracting alternative to the phrase “That’s billions with a ‘B’ ” when conveying appreciation for the vast scope of creation.
At the time Sagan was my professor, he appeared on the cover of Newsweek magazine. Like a lunkhead, I wrote to Newsweek, claiming that his adulation as a scientist was misplaced and that he was nothing more than a PR huckster for NASA and JPL, in the vein of Isaac Asimov. I acknowledged his gift for popularizing science, but argued that he didn’t have the brains to contribute in any tangible way.
I was wrong, of course. Even in the role of education champion, I failed to appreciate the very powerful and important role that he played in influencing an entire generation of scientists, including Neil deGrasse Tyson. Although Newsweek did not publish my letter to the editor, someone on staff sent it to Professor Sagan! When the teaching assistant, a close friend of Sagan, showed me my letter, I was mortified.
Incidentally, I always sat in the front row of the big Uris lecture hall. As a student photographer, I took many photos, which show up on various university web sites from time to time. In the top photo, Professor Sagan is crouching down and clasping hands as he addresses the student seated next to me.
“Online abuse can be cruel – but for some tech companies it is an existential threat. Can giants such as Facebook use behavioural psychology and persuasive design to tame the trolls?”
The bottom line is that robots are machines; and like any other machine, a robot system can be reprogrammed (with the right expertise). And a robot connected to the net poses a risk as long as hackers pose a risk in the current cyber environment. Again, I encourage government, tech companies, and businesses to work collectively in addressing the immediate challenges around cybersecurity.
And there will need to be some way to track robots and deactivate them remotely, especially once the public (including criminals) is allowed to buy them.
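Purely as an illustration of the idea (the registry, the key scheme, and the message format here are my assumptions, not any real system), a consumer robot could be required to check in periodically and honor only authenticated shutdown orders:

```python
import hashlib
import hmac
import time

# Assumption: each robot receives a per-unit key at registration, shared
# with a public registry; none of this reflects a real scheme.
REGISTRY_KEY = b"per-robot-secret-issued-at-registration"

class RegisteredRobot:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.active = True

    def heartbeat(self):
        # Periodic check-in so the registry can track who is operating.
        return {"id": self.robot_id, "ts": time.time(), "active": self.active}

    def handle_order(self, order: bytes, signature: str):
        # Deactivate only on an authenticated order, so a random attacker
        # can neither shut robots down nor forge a "stay active" reply.
        expected = hmac.new(REGISTRY_KEY, order, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature) and order == b"DEACTIVATE":
            self.active = False
        return self.active

# Example: the registry signs and sends a deactivation order.
robot = RegisteredRobot("unit-042")
sig = hmac.new(REGISTRY_KEY, b"DEACTIVATE", hashlib.sha256).hexdigest()
print(robot.handle_order(b"DEACTIVATE", sig))  # prints: False (now inactive)
```

Requiring a signed order cuts both ways: it lets authorities deactivate a hijacked robot, and it stops a random attacker from shutting robots down.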
“We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended goal”.
There’s no manual for being a good human, but greeting strangers as you walk by in the morning, saying thank you and opening doors for people are probably among the top things we know we should do, even if we sometimes forget.
“Researchers at the Georgia Institute of Technology say that while there may not be one specific manual, robots might benefit by reading stories and books about successful ways to act in society.”
Again, I see too many gaps that will need to be addressed before AI can eliminate 70% of today’s jobs. Below are the top five gaps that I have seen so far with AI taking over many government, business, and corporate positions.
1) Emotion/Empathy Gap — AI has not been designed with the sophistication to provide personable care such as you see with caregivers, medical specialists, etc.
2) Demographic Gap — until we have a broader mix of the population engaged in AI’s design and development, AI will not meet the needs for critical-mass adoption; only a subset of the population will find that it serves most of their needs.
3) Ethics & Moral Code Gap — AI still cannot understand ethics and empathy at the full cognitive level that is required.
4) Trust & Compliance Gap — companies need to feel that their IP and privacy are protected; until this is corrected, AI will not be able to replace an entire back-office and front-office set of operations.
5) Security & Safety Gap — more safeguards are needed around AI to deal with hackers, to ensure that information managed by AI is safe, and to ensure public safety from any AI that becomes disruptive or is hijacked to cause injury or worse.
Until these gaps are addressed, it will be very hard to eliminate many of today’s government and office/business positions. The greater job loss will be in lower-skill areas like standard landscaping, some housekeeping, some less personable store-clerk roles, some help desk/call center operations, and some light admin roles.
The U.S. economy added 2.7 million jobs in 2015, capping the best two-year stretch of employment growth since the late ’90s and pushing the unemployment rate down to five percent.
The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?
Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” — to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12–17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events, and understand successful ways to behave in human societies.
“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
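The press release doesn’t spell out the mechanics, but the core idea, rewarding an agent for choosing the next action that acceptable stories choose, can be sketched in a few lines. Everything below (the toy story corpus and the scoring rule) is my own illustration, not the actual Quixote system:

```python
# Toy illustration of story-based "value alignment" (my sketch, not the
# real Quixote system): the agent is rewarded for taking the next action
# that acceptable stories take, so polite sequences outscore expedient ones.

STORIES = [  # event sequences distilled from a tiny, invented story corpus
    ["enter_pharmacy", "wait_in_line", "pay", "take_medicine", "leave"],
    ["enter_pharmacy", "wait_in_line", "pay", "leave"],
]

def story_reward(history, action):
    """+1 if some story continues this history with this action, else -1."""
    for story in STORIES:
        for i in range(len(story) - len(history)):
            if story[i:i + len(history)] == history and story[i + len(history)] == action:
                return 1.0
    return -1.0

# A robot told to "get the medicine" could grab it and bolt, but the
# shaping signal favors the socially acceptable next step:
print(story_reward(["enter_pharmacy"], "wait_in_line"))   # 1.0
print(story_reward(["enter_pharmacy"], "take_medicine"))  # -1.0
```

This mirrors the release’s description of learning “acceptable sequences of events”: behavior that never appears in the corpus is penalized before it is ever tried on people.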
The late Supreme Court Justice Potter Stewart once said, “Ethics is knowing the difference between what you have a right to do and what is right to do.”
As artificial intelligence (AI) systems become more and more advanced, can the same statement apply to computers?
According to many technology moguls and policymakers, the answer is this: We’re not quite there yet.
Danaher’s Instruments of Change — If you feel like your industry, which has always been on a slow and stable growth curve, is now under greater pressure to change, you’re not alone. Recent indicators show that with the latest changes in tech and consumers (namely millennials, the largest consumer group today), industries have been shaken up to perform at new levels like never before, or the companies in those industries will cease to be relevant.
Doing well by doing good is now expected for businesses, and moral leadership is at a premium for CEOs. For today’s companies to maintain their license to operate, they need to take into account a range of elements in their decision making: managing their supply chains, applying new ways of measuring their business performance that include indicators for social as well as commercial returns, and controlling the full life cycle of their products’ usage as well as disposal. This new reality is demonstrated by the launch last September of the Sustainable Development Goals (SDGs), which call on businesses to address sustainability challenges such as poverty, gender equality, and climate change in new and creative ways. The new expectations for business also are at the heart of the Change the World list, launched by Fortune Magazine in August 2015, which is designed to identify and celebrate companies that have made significant progress in addressing major social problems as a part of their core business strategy.
Technology and millennials seem to be driving much of this change. Socially conscious customers and idealistic employees are applauding companies’ ability to do good as part of their profit-making strategy. With social media capable of reaching millions instantly, companies want to be on the right side of capitalism’s power. This is good news for society. Corporate venturing activities are emerging, and companies are increasingly leveraging people, ideas, technology, and business assets to achieve social and environmental priorities together with financial profit. These new venturing strategies are focusing more and more on areas where new partnerships and investments can lead to positive outcomes for all: the shareholders, the workers, the environment, and the local community.
Furthermore, this is especially true in the technology sector. More than 25% of the Change the World companies listed by Fortune are tech companies, and four are in the top ten: Vodafone, Google, Cisco Systems, and Facebook. Facebook’s billionaire co-founder and CEO, Mark Zuckerberg, and his wife have helped propel the technology sector into the spotlight as a shining beacon of how to do good and do well. Zuckerberg and Priscilla Chan pledged on December 1, 2015, to give 99 percent of their Facebook shares to charity. Those shares are valued at between $40 billion and $45 billion, which makes this a very large gift. The donations will initially be focused on personalized learning, curing disease, connecting people, and building strong communities.
Davos: The True Fear Around Robots — Autonomous weapons, which are currently being developed by the US, UK, China, Israel, South Korea and Russia, will be capable of identifying targets, adjusting their behavior in response to that target, and ultimately firing — all without human intervention.
The issue of ‘killer robots’ one day posing a threat to humans has been discussed at the annual World Economic Forum meeting in Davos, Switzerland.
The discussion took place on 21 January during a panel organised by the Campaign to Stop Killer Robots (CSKR) and Time magazine, which asked the question: “What if robots go to war?”
Participants in the discussion included former UN disarmament chief Angela Kane, BAE Systems chair Sir Roger Carr, artificial intelligence (AI) expert Stuart Russell and robot ethics expert Alan Winfield.