Tech group says it is committed to privacy despite demands from Beijing for data access.
Category: ethics
The companies leading AI research in the US and China, including Google, Amazon, Microsoft, Baidu, SenseTime and Tencent, have taken very different approaches to AI and to whether to develop technology that can ultimately be used for military and surveillance purposes.
Companies criticised for overruling and even dissolving ethics boards.
Many researchers see the move to relax the rules as a welcome change, yet some worry the revisions don’t take public concerns sufficiently into account.
Environmentalism and climate change are increasingly being pushed on us everywhere, and I wanted to write the transhumanist and life-extension counterargument on why I prefer new technology over nature and sustainability. Here’s my new article:
On a warming planet bearing scars of significant environmental destruction, you’d think one of the 21st Century’s most notable emerging social groups—transhumanists—would be concerned. Many are not. Transhumanists first and foremost want to live indefinitely, and they are outraged at the fact their bodies age and are destined to die. They blame their biological nature, and dream of a day when DNA is replaced with silicon and data.
Their enmity toward biology goes beyond their own bodies. They see Mother Earth as a hostile space where every living creature—be it a tree, insect, mammal, or virus—is out for itself. Everything is part of the food chain, and subject to natural law: consumption by violent killing in the preponderance of cases. Life is vicious. It makes me think of pet dogs and cats, and how it’s reported they sometimes start eating their owners after the owners have died.
Many transhumanists want to change all this. They want to rid their world of biology. They favor concrete, steel, and code. Where biological evolution was once necessary to create primates and then modern humans, conscious and directed evolution has replaced it. Planet Earth doesn’t need iniquitous natural selection. It needs premeditated moral algorithms, conceived by logic, that do the most good for the largest number of people. This is something an AI will probably be better at than humans in less than two decades’ time.
(Reuters) — Alphabet Inc’s Google said on Thursday it was dissolving a council it had formed a week earlier to consider ethical issues around artificial intelligence and other emerging technologies.
The council had run into controversy over two of its members, according to online news portal Vox, which first reported the dissolution of the council.
The council, launched on March 26, was meant to provide recommendations for Google and other companies and researchers working in areas such as facial recognition software, a form of automation that has prompted concerns about racial bias and other limitations.
Google recently appointed an external ethics council to deal with tricky issues in artificial intelligence. The group is meant to help the company appease critics while still pursuing lucrative cloud computing deals.
In less than a week, the council is already falling apart, a development that may jeopardize Google’s chance of winning more military cloud-computing contracts.
On Saturday, Alessandro Acquisti, a behavioral economist and privacy researcher, said he won’t be serving on the council. “While I’m devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don’t believe this is the right forum for me to engage in this important work,” Acquisti said on Twitter. He didn’t respond to a request for comment.
Geoffrey Rockwell and Bettina Berendt’s (2017) article calls for ethical consideration around big data and digital archives, asking us to reconsider whether datafication and open access are inherently good. In outlining how digital archives and algorithms structure potential relationships with those whose testimony has been digitized, Rockwell and Berendt highlight how data practices change the relationship between researcher and researched. They make a provocative and important argument: datafication and open access should, in certain cases, be resisted. They champion the careful curation of data rather than its large-scale collection, pointing to the ways in which these data are used to construct knowledge about the research subject and fundamentally limit their agency by controlling the narratives told about them. Drawing on Aboriginal Knowledge (AK) frameworks, amongst others, Rockwell and Berendt argue that some knowledge is simply not meant to be openly shared: information is not an inherent good, and access to it must be earned. This approach was prompted, in part, by their own work scraping #gamergate Twitter feeds and the ways in which these data could be used to speak for others without their consent.
From our vantage point, Rockwell and Berendt’s renewed call for an ethics of datafication is a timely one, as we are mired in media reports of social media surveillance and electoral tampering on one side. Thanks, Facebook. On the other side, academics fight for the right to collect and access big data in order to reveal how gender and racial discrimination are embedded in the algorithms that structure everything from online real estate listings, to loan interest rates, to job postings (American Civil Liberties Union 2018). As surveillance studies scholars, we deeply appreciate how Rockwell and Berendt take a novel approach: they turn to a discussion of Freedom of Information (FOI), Freedom of Expression (FOE), Free and Open Source software, and Access to Information. In doing so, they unpack assumptions commonly held by librarians, digital humanists and academics in general, to show that accumulation and datafication are not inherent goods.
Well, Wesley J Smith just did another hit piece against Transhumanism. https://www.nationalreview.com/corner/transhumanism-the-lazy…provement/
It’s full of his usual horrible attempts to justify his intelligent-design roots while insisting he has no religious reasons for his position. But then again, what can you expect from the National Review?
Sometimes you have to laugh. In “Transhumanism and the Death of Human Exceptionalism,” published in Areo, Peter Clarke quotes criticism I leveled against transhumanism in a piece I wrote entitled “The Transhumanist Bill of Wrongs.” From my piece:
Transhumanism would shatter human exceptionalism. The moral philosophy of the West holds that each human being is possessed of natural rights that adhere solely and merely because we are human. But transhumanists yearn to remake humanity in their own image—including as cyborgs, group personalities residing in the Internet Cloud, or AI-controlled machines.
When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place.
We founded Clarifai 4 Good where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took our social responsibility seriously.
I never could have predicted that two years later, I would have to quit this job on moral grounds. And I certainly never thought it would happen over building weapons that escalate and shift the paradigm of war.