
I am for Ethical Robots. What about you?


Every time we talk to Alexa, Siri, Google, or Cortana, we are helping to build the brains of the robot. Human-machine relationships are multiplying, and robot ethics are needed for the coming age of automation; the frameworks we have today are simply not adequate for the nuanced capabilities and behaviors we are beginning to see in today’s devices.

According to Blue Origin, the launch is now set for Friday, Sept. 25 at 10 a.m. Central Time. You can watch the launch live here.

Boeing to Face Independent Ethics Probe Over Lunar Lander Bid

According to a press release, the New Shepard will fly 12 commercial payloads to space and back, including a demonstration of a lunar landing sensor that will test technologies for future missions to the Moon in support of NASA’s Artemis program.

OpenAI will still let researchers use the model.


Microsoft has expanded its ongoing partnership with San Francisco-based artificial intelligence research company OpenAI with a new exclusive license on the AI firm’s groundbreaking GPT-3 language model, an auto-generating text program that’s emerged as the most sophisticated of its kind in the industry.

The two companies are already entwined through OpenAI’s ongoing Azure cloud computing contract, with Azure being the platform on which OpenAI accesses the vast computing resources it needs to train many of its models, and a major $1 billion investment Microsoft made last year to become OpenAI’s exclusive cloud provider. Now, Microsoft is issuing yet another signal of high confidence in OpenAI’s research by acquiring the rights to GPT-3.

OpenAI released GPT-3, the third iteration of its ever-growing language model, in July, and the program and its prior iterations have helped create some of the most fascinating AI language experiments to date. It’s also inspired vigorous debate around the ethics of powerful AI programs that may be used for more nefarious purposes, with OpenAI initially refusing to publish research about the model for fear it would be misused.

Medical Ethics and “Futility”


We breathe about 12 to 20 times a minute, without having to think. Inhale: and air flows through the mouth and nose, into the trachea. The bronchi stem out like a wishbone, and keep branching, dividing and dividing, and finally feeding out into the tiny air sacs of alveoli. Capillaries – blood vessels thinner than hairs – twine around each alveolus. Both the air sac and the blood vessel are tiny, delicate, one cell thick: portals where blood (the atmosphere of the body) meets air (atmosphere of the world). Oxygen passes from air to blood; carbon dioxide, from blood to air. Then, the exhale pushes that carbon dioxide back out the mouth and nose. Capillaries channel newly oxygenated blood back to the heart. That oxygen fuels the body. That’s why we breathe.

Today, these basics of human respiration and metabolism feel obvious – and ventilators, the machines that breathe for sick people, do, too. We have so many medical devices, so of course we’d need, and have, machines that help us to breathe. But there’s a strange, and deeply human, story behind how we learned to breathe for each other. It starts long ago, when we didn’t understand breathing at all. When the body’s failure to breathe was incomprehensible, incurable, and fatal. When we had no way of knowing how badly we needed ventilators to keep people alive through those moments of vulnerability, lest those moments be their last.

Medical TV shows have accustomed us to the sight of doctors moving quickly to keep the sickest patients alive – but that link between hurry and success hasn’t always existed. Up to 100-odd years ago, for most of human history, when doctors had a dying patient, they rushed to do what they knew, but the patient died anyway. It doesn’t matter if you hurry or move slowly if your ‘cures’ don’t work. Ventilation, the linchpin of critical care medicine, changed that. Doctors could save some of the dying. That new technology helped bring medicine from hopes and crossed fingers to saving lives.

Discussing STEM, the future, and transhumanism with an Islamic scholar and scientist.


Ira Pastor, ideaXme life sciences ambassador, interviews Imam Sheikh Dr. Usama Hasan, PhD, MSc, MA, Fellow of the Royal Astronomical Society and Research Consultant at the Tony Blair Institute for Global Change.


A pioneer in Emotion AI, Rana el Kaliouby, Ph.D., is on a mission to humanize technology before it dehumanizes us.

At LiveWorx 2020, Rana joined us to share insights from years of research and collaboration with MIT’s Advanced Vehicle Technology group.

Part demo and part presentation, the session breaks down the facial patterns that cameras can pick up from a tired or rested driver, along with observations from the first-ever large-scale study of driver behavior over time.

Learn how these inferences can be used to change the driving experience ➡️ https://archive.liveworx.com/sessions/artificial-emotional-i…it-matters


With moral purity inserted as a component of the internal processes of all academic publications, it will henceforth become impossible to pursue the vital schema of conjecture and refutation.


Shocked that one of their own could express a heterodox opinion on the value of de rigueur equity, diversity and inclusion policies, chemistry professors around the world immediately demanded the paper be retracted. Mob justice was swift. In an open letter to “our community” days after publication, the publisher of Angewandte Chemie announced it had suspended the two senior editors who handled the article, and permanently removed from its list of experts the two peer reviewers involved. The article was also expunged from its website. The publisher then pledged to assemble a “diverse group of external advisers” to thoroughly root out “the potential for discrimination and foster diversity at all levels” of the journal.

Not to be outdone, Brock’s provost also disowned Hudlicky in a press statement, calling his views “utterly at odds with the values” of the university; the school then drew attention to its own efforts to purge unconscious bias from its ranks and to further the goals of “accessibility, reconciliation and decolonization.” (None of which have anything to do with synthetic organic chemistry, by the way.) Brock’s knee-jerk criticism of Hudlicky is now also under review, following a formal complaint by another professor that the provost’s statement violates the school’s commitment to freedom of expression.

Hudlicky — who told Retraction Watch “the witch hunt is on” — clearly had the misfortune to make a few cranky comments at a time when putting heads on pikes is all the rage. But what of the implications his situation entails for the entirety of the peer-review process? Given the scorched earth treatment handed out to the editors and peer reviewers involved at Angewandte Chemie, the new marching orders for academic journals seem perfectly clear — peer reviewers are now expected to vet articles not just for coherence and relevance to the scientific field in question, but also for alignment with whatever political views may currently hold sway with the community-at-large. If a publication-worthy paper comes across your desk that questions or undermines orthodox public opinion in any way — even in a footnote — and you approve it, your job may be forfeit. Conform or disappear.

Since OpenAI first described its new AI language-generating system called GPT-3 in May, hundreds of media outlets (including MIT Technology Review) have written about the system and its capabilities. Twitter has been abuzz about its power and potential. The New York Times published an op-ed about it. Later this year, OpenAI will begin charging companies for access to GPT-3, hoping that its system can soon power a wide variety of AI products and services.


Earlier this year, the independent research organisation of which I am the Director, the London-based Ada Lovelace Institute, hosted a panel at the world’s largest AI conference, CogX, called The Ethics Panel to End All Ethics Panels. The title was both a tongue-in-cheek effort at self-promotion and a nod to a very real need to put to bed the seemingly endless offering of panels, think-pieces, and government reports preoccupied with ruminating on the abstract ethical questions posed by AI and new data-driven technologies. We had grown impatient with conceptual debates and high-level principles.

And we were not alone. 2020 has seen the emergence of a new wave of ethical AI – one focused on the tough questions of power, equity, and justice that underpin emerging technologies, and directed at bringing about actionable change. It supersedes the two waves that came before it: the first wave, defined by principles and dominated by philosophers, and the second wave, led by computer scientists and geared towards technical fixes. Third-wave ethical AI has seen a Dutch court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically-decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice.

Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published. This was the first wave of ethical AI, in which we had just begun to understand the potential risks and threats of rapidly advancing machine learning and AI capabilities and were casting around for ways to contain them. In 2016, AlphaGo had just beaten Lee Sedol, prompting serious consideration of the likelihood that general AI was within reach. And algorithmically-curated chaos on the world’s duopolistic platforms, Google and Facebook, had surrounded the two major political earthquakes of the year – Brexit, and Trump’s election.