AI is a buzzword that gets tossed around often in the business world and in the media, but it is already having tangible effects on a slew of industries — not least those that rely on a significant amount of manual labor.
As AI comes increasingly closer to maturity, and businesses continue to ramp up investments in it, some worry that not enough attention is being paid to the broader social and moral implications of the technology.
CNBC spoke with some experts to see what they think are the five scariest potential future scenarios for AI.
Stefan Lorenz Sorgner, Ph.D., is a German metahumanist philosopher, Nietzsche scholar, philosopher of music, and an authority on the ethics of emerging technologies.
In recent years, he taught at the Universities of Jena (Germany), Erfurt (Germany), Klagenfurt (Austria), Ewha Womans University in Seoul (South Korea), and Erlangen-Nürnberg (Germany). His main fields of research are Nietzsche, the philosophy of music, bioethics and metahumanism, posthumanism, and transhumanism.
While the rest of the world debates the ethics of designer babies, a team at the University of Massachusetts Medical School (UMass) has shown that we might not need CRISPR to change the genes of future generations. Their paper, released this week in the journal Developmental Cell, shows that things like diet and stress might affect some crucial genetic components of sperm, and that these tiny changes have real effects on how babies develop.
The same way rockets bound for outer space contain “payloads” like satellites, or astronauts who battle giant balls of urine, sperm are also like little rockets containing their own cargo: “small RNAs.” This study found that not only do RNA sequences play a crucial role in how genes get expressed early on in human development, but they can also be radically changed by the lifestyles of fathers. Things like diet and, in particular, stress can change the makeup of this crucial RNA cargo and lead to observable changes in offspring, says researcher Colin Conine, Ph.D., at UMass Medical School’s Rando Lab.
“Labs all over the world have been able to link changes in dad’s lifestyle to changes in RNA in the sperm, and then that leads to phenotypes in the offspring,” Conine tells Inverse. “Our study was one of the first to really look at how changes in small RNAs affect early development. We wanted to ask, what are the first steps that lead to these phenotypes down the road?”
Designer babies are on the horizon after an influential group of scientists concluded that it could be ‘morally permissible’ to genetically engineer human embryos.
In a new report which opens the door to a change in the law, the Nuffield Council on Bioethics said that DNA editing could become an option for parents wanting to ‘influence the genetic characteristics of their child.’
Although it would be largely used to cure devastating genetic illnesses, or predispositions to cancers and dementia, the experts said they were not ruling out cosmetic uses such as making tweaks to increase height or changing eye or hair colour, if it would make a child more successful.
For millennia, our planet has sustained a robust ecosystem, healing each deforestation, algae bloom, pollution event, or imbalance caused by natural events. Before the arrival of an industrialized, destructive and dominant global species, it could pretty much deal with anything short of a major meteor impact. In the big picture, even these cataclysmic events haven’t destroyed the environment—they just changed the course of evolution and rearranged the alpha animal.
But with industrialization, the race for personal wealth, nations fighting nations, and modern comforts, we have recognized that our planet is not invincible. This is why Lifeboat Foundation exists. We are all about recognizing the limits to growth and protecting our fragile environment.
Check out this April news article on the US president’s forthcoming appointment of Jim Bridenstine, a vocal climate denier, as head of NASA. NASA is one of the biggest agencies on earth. Despite a lack of training or experience—without literacy in science, technology or astrophysics—he was handed an enormous responsibility, a staff of 17,000 and a budget of $19 billion.
In 2013, Bridenstine criticized former president Obama for wasting taxpayer money on climate research, and claimed that global temperatures stopped rising 15 years ago.
The Vox headline states “Next NASA administrator is a Republican congressman with no background in science”. It points out that Jim Bridenstine’s confirmation has been controversial — even among members of his own party.
Sometimes, flip-flopping is a good thing
In less than one month, Jim Bridenstine has changed—he has changed a lot!
After less than a month as head of NASA, he is convinced that climate change is real, that human activity is the significant cause and that it presents an existential threat. He has changed from climate denier to a passionate advocate for doing whatever is needed to reverse our impact and protect the environment.
What changed?
Bridenstine acknowledges that he was a denier, but says the evidence and the science proved overwhelming and convincing—even with just a few weeks’ exposure to world-class scientists and engineers.
For anyone who still claims that there is no global warming or that the evidence is ‘iffy’, it is worth noting that Bridenstine was a hand-picked goon. His appointment was recommended by right wing conservatives and rubber stamped by the current administration. He was a Denier—but had a sufficiently open mind to listen to experts and review the evidence.
Do you suppose that the US president is listening? Do you suppose that he will grasp the most important issues of this century? What about other world leaders, legislative bodies and rock stars? Will they use their powers or influence to do the right thing? For the sake of our existence, let us hope they follow the lead of Jim Bridenstine, former climate denier!
Don’t really care about the competition, but this horse race suggests that AI hitting the 100 IQ level at or before 2029 should probably happen.
The race to become the global leader in artificial intelligence (AI) has officially begun. In the past fifteen months, Canada, Japan, Singapore, China, the UAE, Finland, Denmark, France, the UK, the EU Commission, South Korea, and India have all released strategies to promote the use and development of AI. No two strategies are alike, with each focusing on different aspects of AI policy: scientific research, talent development, skills and education, public and private sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure.
This article summarizes the key policies and goals of each national strategy. It also highlights relevant policies and initiatives that the countries have announced since the release of their initial strategies.
I plan to continuously update this article as new strategies and initiatives are announced. If a country or policy is missing (or if something in the summary is incorrect), please leave a comment and I will update the article as soon as possible.
Interstellar travel one of the most moral projects? “one of the most moral projects might be to prepare for interstellar travel. After all, if the Earth becomes uninhabitable—whether in 200 years or in 200,000 years—the only known civilization in the history of the solar system will suddenly go extinct. But if the human species has already spread to other planets, we will escape this permanent eradication, thus saving millions—possibly trillions—of lives that can come into existence after the demise of our first planet.”
The Red Planet is a freezing, faraway, uninhabitable desert. But protecting the human species from the end of life on Earth could save trillions of lives.
In terms of moral, social, and philosophical uprightness, isn’t it striking to have the technology to provide a free education to all the world’s people (i.e. the Internet and cheap computers) and not do it? Isn’t it classist and backward to have the ability to teach the world yet still deny millions of people that opportunity due to location and finances? Isn’t that immoral? Isn’t it patently unjust? Should it not be a universal human goal to enable everyone to learn whatever they want, as much as they want, whenever they want, entirely for free if our technology permits it? These questions become particularly deep if we consider teaching, learning, and education to be sacred enterprises.
When we as a global community confront the truly difficult question of considering what is really worth devoting our limited time and resources to in an era marked by global catastrophe, I always find my mind returning to what the Internet hasn’t really been used for yet — and what was rumored from its inception that it should ultimately provide — an utterly and entirely free education for all the world’s people.
“On the web for free you’ll be able to find the best lectures in the world […] It will be better than any single university […] No matter how you came about your knowledge, you should get credit for it. Whether it’s an MIT degree or if you got everything you know from lectures on the web, there needs to be a way to highlight that.”
The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn’t speculate about whether exposure to graphic content changes the way a human thinks. They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. We should be asking the same questions of artificial intelligence as we do of any other technology, because it is far too easy for unintended consequences to hurt the people the system wasn’t designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. Isaac Asimov wrote the “Three Laws of Robotics” because he wanted to imagine what might happen if they were contravened.
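The mechanism behind that experiment is simple to demonstrate. The following toy sketch (not the MIT team’s actual method; the corpus and word-counting classifier here are invented for illustration) shows how a model trained only on one-sided captions inherits the skew of its data: ordinary words that happened to co-occur with violent descriptions drag neutral inputs toward the “negative” label.

```python
from collections import Counter

def train(examples):
    """Build a toy unigram model: count word occurrences per label."""
    counts = {"negative": Counter(), "neutral": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the label whose training vocabulary best overlaps the input."""
    def score(label):
        total = sum(counts[label].values()) or 1
        return sum(counts[label][w] / total for w in text.split())
    return max(counts, key=score)

# Hypothetical biased corpus: everyday words like "man" and "umbrella"
# appear only in violent captions, mimicking Norman's one-sided training set.
biased_data = [
    ("man shot dead in street", "negative"),
    ("man killed by speeding car", "negative"),
    ("umbrella man pulled into machine", "negative"),
    ("flowers bloom in spring", "neutral"),
    ("birds sing at dawn", "neutral"),
]

model = train(biased_data)
# A perfectly innocuous phrase is scored negative, because the model has
# only ever seen its words in violent contexts.
print(classify(model, "man holding umbrella"))
```

The classifier itself is not malicious; the skew comes entirely from what the training data happened to contain, which is the point the Norman experiment makes at scale.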
Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” But it still hasn’t undergone the kind of reckoning that causes a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create something that could fundamentally alter the world. Computer scientists are beginning to realize this, too. At Google this year, 5,000 employees protested and a host of employees resigned from the company because of its involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.
Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn’t buy a house or a car? To whom do you appeal? What if you’re not white and a piece of software predicts you’ll commit a crime because of that? There are many, many open questions. Norman’s role is to help us figure out their answers.