Archive for the ‘ethics’ category: Page 60
Apr 2, 2016
There Are Some Super Shady Things in Oculus Rift’s Terms of Service
Posted by Sean Brazell in categories: ethics, virtual reality
This is NOT the way to encourage people to use this device, nor to develop anything for it. Shame on them!
“By submitting User Content through the Services, you grant Oculus a worldwide, irrevocable, perpetual (i.e. lasting forever), non-exclusive, transferable, royalty-free and fully sublicensable (i.e. we can grant this right to others) right to use, copy, display, store, adapt, publicly perform and distribute such User Content in connection with the Services. You irrevocably consent to any and all acts or omissions by us or persons authorized by us that may infringe any moral right (or analogous right) in your User Content.”
The Oculus Rift is starting to ship, and we’re pretty happy with it. While it’s cool, like any interesting gadget, it’s worth looking through the Terms of Service, because there are some worrisome things included.
Continue reading “There Are Some Super Shady Things in Oculus Rift’s Terms of Service” »
Mar 5, 2016
As Technology Barrels Ahead—Will Ethics Get Left in the Dust?
Posted by Karen Hurst in categories: bioengineering, biological, drones, encryption, ethics, finance, robotics/AI, security
An interesting question to ask.
The battle between the FBI and Apple over the unlocking of a terrorist’s iPhone will likely require Congress to create new legislation. That’s because there really aren’t any existing laws which encompass technologies such as these. The battle is between security and privacy, with Silicon Valley fighting for privacy. The debates in Congress will be ugly, uninformed, and emotional. Lawmakers won’t know which side to pick and will flip-flop between what lobbyists ask and the public’s fear du jour. And because there is no consensus on what is right or wrong, any decision they make today will likely be changed tomorrow.
This is a prelude to things to come, not only with encryption technologies, but with everything from artificial intelligence to drones, robotics, and synthetic biology. Technology is moving faster than our ability to understand it, and there is no consensus on what is ethical. It isn’t just the lawmakers who are not well informed; the originators of the technologies themselves don’t understand the full ramifications of what they are creating. They may take strong positions today based on their emotions and financial interests, but as they learn more, they too will change their views.
Continue reading “As Technology Barrels Ahead—Will Ethics Get Left in the Dust?” »
Mar 2, 2016
Never Say Die – SELF/LESS from Science-Fiction to –Fact
Posted by Shailesh Prasad in categories: biotech/medical, cyborgs, ethics, health, life extension, neuroscience, robotics/AI, transhumanism
In SELF/LESS, a dying old man (Academy Award winner Ben Kingsley) transfers his consciousness to the body of a healthy young man (Ryan Reynolds). If you’re into immortality, that’s pretty good product packaging, no?
But this thought-provoking psychological thriller also raises fundamental ethical questions about extending life beyond its natural boundaries. Exploring the moral and ethical issues that surround mortality has long been a defining characteristic of many notable stories within the sci-fi genre. In fact, Mary Shelley’s age-old novel Frankenstein, while having little to no direct plot overlap [with SELF/LESS], is considered by many to be among the first examples of the science fiction genre.
Continue reading “Never Say Die – SELF/LESS from Science-Fiction to -Fact” »
Mar 1, 2016
Autonomous Killing Machines Are More Dangerous Than We Think
Posted by Karen Hurst in categories: cybercrime/malcode, drones, ethics, law, military, policy, robotics/AI
I see articles and reports like the following about the military actually considering fully autonomous missiles, drones with missiles, etc., and I have to ask myself what happened to logical thinking.
A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”
The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the Office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets of their own choosing, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”
Continue reading “Autonomous Killing Machines Are More Dangerous Than We Think” »
Feb 24, 2016
What has changed since “Pale Blue Dot”?
Posted by Philip Raymond in categories: astronomy, cosmology, environmental, ethics, habitats, lifeboat, science, space, space travel, sustainability
I am not an astronomer or astrophysicist. I have never worked for NASA or JPL. But during my graduate year at Cornell University, I was short on cross-discipline credits, and so I signed up for Carl Sagan’s popular introductory course, Astronomy 101. I was also an amateur photographer, occasionally freelancing for local media—and so the photos shown here are my own.
By the end of the ’70s, Sagan’s star was high and continuing to rise. He was a staple on The Tonight Show with Johnny Carson, producer and host of the PBS TV series Cosmos, and he had just written Dragons of Eden, which won him a Pulitzer Prize. He also wrote Contact, which became a blockbuster movie starring Jodie Foster.
Sagan died in 1996, after three bone marrow transplants to compensate for an inability to produce blood cells. Two years earlier, Sagan wrote a book and narrated a film based on a photo taken from space.
Continue reading “What has changed since ‘Pale Blue Dot’?” »
Feb 23, 2016
Play nice! How the internet is trying to design out toxic behavior — By Gaby Hinsliff | The Guardian
Posted by Odette Bohr Dienel in categories: big data, computing, education, ethics, information science, internet
“Online abuse can be cruel – but for some tech companies it is an existential threat. Can giants such as Facebook use behavioural psychology and persuasive design to tame the trolls?”
Feb 17, 2016
Researchers are Using Fairy Tales to Prevent a ‘Psychotic’ Robot Uprising
Posted by Karen Hurst in categories: business, cybercrime/malcode, ethics, robotics/AI, security
The bottom line is that robots are machines; and like any other machine, a robot system can (with the right expertise) be reprogrammed. And a robot connected to the net poses a risk as long as hackers pose a risk in the current cyber environment. Again, I encourage government, tech companies, and businesses to work collectively in addressing the immediate challenge around cyber security.
And there will need to be some way to track robots and deactivate them remotely, especially once the public (including criminals) is allowed to buy them. A rough sketch of what that could look like follows.
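Purely as an illustration of the idea—no real robot platform or registry service is being described, and the URL, robot ID, and halt callback below are all invented—a remote deactivation mechanism could be as simple as a heartbeat loop that polls a registry and fails safe when told to stop or when the registry becomes unreachable:

```python
# Hypothetical sketch of a remote "kill switch" heartbeat. The registry
# URL and robot ID are placeholders, not any real service or product.
import time
import urllib.request

REGISTRY_URL = "https://robot-registry.example.org/status"  # invented endpoint
ROBOT_ID = "unit-0042"                                      # invented serial number
POLL_SECONDS = 30
MAX_MISSED_POLLS = 4  # halt if the registry is unreachable for ~2 minutes

def check_status() -> str:
    """Ask the registry whether this robot is still authorized to run."""
    with urllib.request.urlopen(f"{REGISTRY_URL}?id={ROBOT_ID}", timeout=5) as resp:
        return resp.read().decode().strip()  # expected: "active" or "deactivated"

def heartbeat_loop(halt_motors) -> None:
    """Poll the registry; stop the robot on a deactivation order or on prolonged silence."""
    missed = 0
    while True:
        try:
            if check_status() == "deactivated":
                halt_motors()  # registry ordered a shutdown
                return
            missed = 0
        except OSError:  # covers URLError, timeouts, connection failures
            missed += 1
            if missed >= MAX_MISSED_POLLS:
                halt_motors()  # fail safe: can't reach registry, stop anyway
                return
        time.sleep(POLL_SECONDS)
```

The fail-safe default is the important design choice here: a stolen or hijacked robot shouldn’t keep running just because someone cut its network connection.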
“We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended goal”.
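The post doesn’t detail how the researchers implement this, but the general idea—rewarding an agent for following the socially acceptable action sequence a story’s protagonist used, and penalizing norm-violating shortcuts—can be sketched in a toy example. The story sequence and action names here are invented for illustration:

```python
# Toy illustration of reward shaping from a story: the agent earns a bonus
# for matching the order of actions a story's protagonist followed.
STORY_SEQUENCE = ["enter_shop", "wait_in_line", "pay", "leave"]  # from a parsed story

def shaped_reward(actions_taken, base_reward):
    """Add a bonus for story-consistent actions, a penalty for norm violations."""
    bonus = 0
    position = 0
    for action in actions_taken:
        if position < len(STORY_SEQUENCE) and action == STORY_SEQUENCE[position]:
            bonus += 1       # action matches the protagonist's next step
            position += 1
        elif action == "grab_and_run":
            bonus -= 5       # norm-violating shortcut is heavily penalized
    return base_reward + bonus

# An agent that waits in line and pays scores higher than one that steals,
# even though both walk out with the goods.
print(shaped_reward(["enter_shop", "wait_in_line", "pay", "leave"], 10))  # 14
print(shaped_reward(["enter_shop", "grab_and_run"], 10))                  # 6
```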
Continue reading “Researchers are Using Fairy Tales to Prevent a ‘Psychotic’ Robot Uprising” »
Feb 16, 2016
Bedtime stories for robots could teach them to be human — By Sharon Gaudin | Computerworld
Posted by Odette Bohr Dienel in categories: education, ethics, media & arts, robotics/AI
“Researchers at the Georgia Institute of Technology say that while there may not be one specific manual, robots might benefit by reading stories and books about successful ways to act in society.”
Feb 12, 2016
Yes, robots will steal our jobs — but don’t worry, we’ll get new ones
Posted by Karen Hurst in categories: biotech/medical, business, economics, employment, ethics, neuroscience, robotics/AI, security
Again, I see too many gaps that will need to be addressed before AI can eliminate 70% of today’s jobs. Below are the top 5 gaps that I have seen so far with AI taking over many government, business, and corporate positions.
1) Emotion/Empathy Gap — AI has not been designed with the sophistication to provide the personable care you see with caregivers, medical specialists, etc.
2) Demographic Gap — until we have a broader mix of the population engaged in AI’s design & development, AI will not meet the needs for critical-mass adoption; only a subset of the population will find it serves most of their needs.
3) Ethics & Moral Code Gap — AI still cannot understand ethics & empathy at the full cognitive level that is required.
4) Trust and Compliance Gap — companies need to feel that their IP & privacy are protected; until this is addressed, AI will not be able to replace an entire back-office and front-office set of operations.
5) Security & Safety Gap — more safeguards are needed around AI to deal with hackers, to ensure that information managed by AI is safe, and to ensure public safety from any AI that becomes disruptive or is hijacked to cause injury or worse.
Until these gaps are addressed, it will be very hard to eliminate many of today’s government and office/business positions. The greater job loss will be in lower-skill areas like standard landscaping, some housekeeping, some less personable store-clerk roles, some help desk/call center operations, and some light admin roles.
Continue reading “Yes, robots will steal our jobs — but don’t worry, we’ll get new ones” »