Here’s a very in-depth, approximately 6,000-word story on transhumanism (broken into five articles, with many pictures) from one of the world’s leading tech sites, The Verge.
Zoltan Istvan is very worried about superintelligent machines.
He says the war over artificial intelligence will be worse than the Cold War nuclear arms race, much worse. It will be far more deadly, and whoever wins will control the world, eternally. For all we know, he says, this artificial intelligence is being developed right now, perhaps at a government facility in the middle of the Arizona desert.
But this is not what Zoltan Istvan is most afraid of, and neither is it the weirdest thing he will tell me in our week together driving 400 miles across the Southwest in his 40-year-old “Immortality Bus,” crudely fashioned into the shape of a coffin. As the presidential candidate of the Transhumanist Party, Zoltan Istvan is dedicated to spreading transhumanism’s gospel, like some modern-day Ken Kesey who doesn’t even need acid for his trip.
Researchers at Carnegie Mellon University are building an AI platform that will “whisper” instructions in your ear to provide cognitive assistance. Named Gabriel, after the biblical messenger of God, the whispering robo-assistant can already guide you through the process of building a basic Lego object. But the ultimate goal is to provide wearable cognitive assistance to millions of people who live with Alzheimer’s disease, brain injuries, or other conditions that impair cognition. For instance, if a patient forgets the name of a relative, Gabriel could whisper the name in their ear. It could also be programmed to walk patients through everyday tasks, decreasing their dependence on caregivers.
For the software to exist as a working wearable assistant, it will need a head-mounted device to latch onto. For now, the team is using Google Glass for demos such as a ping-pong assistant, in which the program tells the user to hit the ball to the right or left depending on the position of the ball relative to the opponent. In the video below, following that guidance makes it harder for the opponent to return the ball.
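To give a flavor of the kind of decision rule such a demo might use, here is a minimal sketch in Python. It is not CMU’s Gabriel code; the normalized ball and opponent positions are assumed inputs from a hypothetical tracker, and the threshold is arbitrary.

```python
# Minimal sketch of a ping-pong "whisper" rule, assuming hypothetical trackers
# already supply normalized x-positions for the ball and the opponent.
# Illustration of the idea only, not Carnegie Mellon's Gabriel implementation.

from dataclasses import dataclass


@dataclass
class Frame:
    ball_x: float      # normalized 0.0 (left edge of table) .. 1.0 (right edge)
    opponent_x: float  # normalized position of the opposing player


def whisper_advice(frame: Frame, margin: float = 0.1) -> str:
    """Suggest hitting away from the opponent, with a small dead zone."""
    if frame.opponent_x - frame.ball_x > margin:
        return "hit left"        # opponent is well to the right, aim left
    if frame.ball_x - frame.opponent_x > margin:
        return "hit right"       # opponent is well to the left, aim right
    return "hit either side"     # opponent roughly lined up with the ball


if __name__ == "__main__":
    print(whisper_advice(Frame(ball_x=0.3, opponent_x=0.7)))  # -> hit left
    print(whisper_advice(Frame(ball_x=0.8, opponent_x=0.2)))  # -> hit right
```

In a real wearable setup, a loop like this would run per video frame and the returned string would be spoken through the headset’s earpiece.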
My Thanksgiving was filled with texting, Snapchat, Skype, FaceTime, Beam robots, and ringing phones.
Some people HATE the way technology impacts their family at gatherings — people on digital devices rather than having conversations — “making us more alone, even when we’re together.” But is that really true?
As the line between tabloid media and mainstream media becomes more blurred, topics such as Ebola, pit bulls, Deflategate, and Donald Trump frequently generate a cocktail of public panic, scrutiny, and scorn before the news cycle moves on to the next sensational headline. According to robotics expert and self-proclaimed “Robot Psychiatrist” Dr. Joanne Pransky, the same phenomenon has happened in robotics, shaping public perception and, by extension, the future development of robots and AI.
“The challenge, since robotics is just starting to come into the mainstream, is that most of the country is ignorant. So, if you believe what you read, then I think people have a very negative and inaccurate picture (of robotics),” Pransky said. “I spend a lot of time bashing negative headlines, such as ‘ROBOT KILLS HUMAN,’ when actually the human killed himself by not following proper safety standards. A lot of things are publicized about robotics, but there’s nothing about the robot in the article. It leads people on the wrong path.”
Hedging the Negative Media
Pransky has spent almost her entire career trying to present an accurate depiction of robotics and to provoke thoughtful discussion that separates fact from science fiction, as elaborated upon in this well-written TechRepublic article. Showing an actual robot in action, she believes, is the best way to educate the public about the potential, and the limits, of robotics.
Pransky noted that YouTube videos, such as the Boston Dynamics BigDog robot videos, are great for presenting robotics in a positive light. Yet a similar video, showing a human kicking the BigDog robot to test its stability, can also present a negative image to the general public: that robots need to be kicked around because they’re dangerous, unintelligent, or won’t work otherwise.
By contrast, futuristic feature films such as “Her” and “Ex Machina”, while still presenting darker plots, are a robot psychiatrist’s dream, she added. “What is it like to have a robot live with us? And to (have that robot) be a nanny and a lover and how will it change the whole family dynamic? These things, to me, are not science fiction, they’re inevitable,” Pransky said. “Whether or not it occurs exactly the way you see it in science fiction in our lifetime is a different question. To me, it’s not a question of when… it’s happening.”
A Call for More Logic — and Empathy
The area that Pransky believes is most misconstrued is the media’s depiction of autonomous weapons in the military. The biggest problem, she said, is that most people now believe completely autonomous weapons are already in use. What the public overlooks is that, even if the military did have autonomous robots, those machines would not necessarily be 100 percent unsupervised by humans in their decision-making. The larger issue, she believes, is the set of related moral questions, which need to be discussed.
“I really believe, when it comes to humans, the most important thing is not the future of intelligence and AI, it is social intelligence and emotional intelligence. If we’re going to be working with entities that technology has merged together, how are we going to get it right with something that’s not 100% biological versus nonbiological?” she said. “I think there should be more emphasis on issues like moral laws and stages of moral development. I think that is very important in any of these discussions.”
On a broader scale, the recent concerns about uncontrolled AI expressed by Stephen Hawking, Elon Musk, and Bill Gates cast a negative light on robotics, Pransky said. She recognizes Musk, Hawking, and Gates as some of the top minds in the world on the topics of future AI and robotics, but notes that “they’re not sociologists or psychologists.” Given that, Pransky said the public should take their views about the future of robotics with a grain of salt.
Looking to the future, Pransky sees the need to address the public’s concerns about robotics before the industry has a “Pearl Harbor moment”. She believes that robots are still “out-of-sight, out-of-mind” for much of the general public, and thinks lawmakers need to consider how the robotics industry will develop before the urgent, last-minute need arises.
“Recently, computers stopped United Airlines on the same day they stopped the stock market, and we paid more attention. It’s human nature that, unless there is something catastrophic, we don’t respond as well or as quickly, but we don’t have to be so ‘doomsday’ about it,” Pransky said. “Robotic law is a very huge deal. We absolutely need to bring laws and regulation to federal attention.”
The results are mixed, of course, but it’s fascinating to watch the neural network make mistakes (and sometimes correct itself) in real time. The open source program being used is called NeuralTalk and was first unveiled last year, with the researchers providing updates on the network’s capabilities since. Other companies and institutions are working on similar technology. Last month, for example, Facebook unveiled a prototype neural network that’s intended to help blind people by describing pictures.
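NeuralTalk, like most captioning systems of its generation, pairs a convolutional network that encodes the image with a recurrent network that generates the words. The sketch below illustrates that encoder-decoder idea with a tiny, untrained PyTorch model and a made-up vocabulary; it is an illustration of the architecture, not the actual NeuralTalk codebase.

```python
# Toy encoder-decoder captioner in the spirit of NeuralTalk: a CNN turns the
# image into a feature vector, and an LSTM emits words conditioned on it.
# Untrained and illustrative only -- output is gibberish until trained.

import torch
import torch.nn as nn

VOCAB = ["<start>", "<end>", "a", "person", "playing", "ping", "pong", "dog"]
WORD2ID = {w: i for i, w in enumerate(VOCAB)}


class TinyCaptioner(nn.Module):
    def __init__(self, embed_dim=32, hidden_dim=64):
        super().__init__()
        # CNN encoder: image -> single feature vector used as the initial state
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, hidden_dim),
        )
        # RNN decoder: emits one word per step, fed back in greedily
        self.embed = nn.Embedding(len(VOCAB), embed_dim)
        self.rnn = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, len(VOCAB))

    def caption(self, image, max_len=10):
        h = self.cnn(image)               # (1, hidden_dim) image summary
        c = torch.zeros_like(h)
        word = torch.tensor([WORD2ID["<start>"]])
        words = []
        for _ in range(max_len):
            h, c = self.rnn(self.embed(word), (h, c))
            word = self.out(h).argmax(dim=1)          # greedy decoding
            token = VOCAB[word.item()]
            if token == "<end>":
                break
            words.append(token)
        return " ".join(words)


if __name__ == "__main__":
    model = TinyCaptioner().eval()
    fake_frame = torch.rand(1, 3, 64, 64)   # stands in for a webcam frame
    print(model.caption(fake_frame))
```

Running a trained model of this shape over successive webcam frames is what produces the real-time, sometimes self-correcting descriptions described above.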
London, UK, November 19, 2015 (PRWEB UK).
What matters in beauty is perception. Perception is how you and other people see you, and this perception is almost always biased. Still, healthy people look more attractive regardless of their age and nationality.
This insight has enabled a team of biogerontologists and data scientists, who believe that in the near future machines will be able to extract vital medical information about people’s health simply by processing their photos, to develop a set of algorithms that evaluate the criteria linked to the perception of human beauty and health where it matters most: the human face. But evaluating beauty and health is not enough. The team’s challenge is to find effective ways to slow down ageing and help people look healthy and beautiful.
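The release does not spell out the algorithms themselves. A common way to build this kind of evaluator is to train a convolutional network to regress a perceived score, such as apparent age or rated healthiness, from a face crop. The sketch below is purely illustrative: the architecture, the hypothetical 0-10 rater score, and the random stand-in data are assumptions, not the team’s actual system.

```python
# Minimal sketch of regressing a perceived-health / apparent-age score from a
# face image with a small CNN. Illustrative only: architecture, score scale,
# and the random "dataset" are assumptions, not the team's algorithms.

import torch
import torch.nn as nn


class FaceScoreNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)   # single scalar: perceived score

    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)


if __name__ == "__main__":
    model = FaceScoreNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in data: random "face crops" and random rater scores in [0, 10].
    images = torch.rand(16, 3, 128, 128)
    scores = torch.rand(16) * 10

    for step in range(5):                      # a few toy training steps
        optimizer.zero_grad()
        loss = loss_fn(model(images), scores)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss {loss.item():.3f}")
```

With real labelled photographs in place of the random tensors, the same regression setup could be pointed at any facial attribute human raters can score consistently.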