Archive for the ‘policy’ category
Apr 24, 2015
Posted by LHC Kritik in categories: astronomy, big data, computing, cosmology, energy, engineering, environmental, ethics, existential risks, futurism, general relativity, governance, government, gravity, information science, innovation, internet, journalism, law, life extension, media & arts, military, nuclear, nuclear energy, open source, particle physics, philosophy, physics, policy, posthumanism, quantum physics, science, security, singularity, space, space travel, supercomputing, sustainability, time travel, transhumanism, transparency, treaties
Harnessing “Black Holes”: The Large Hadron Collider – Ultimate Weapon of Mass Destruction
Why the LHC must be shut down
What would you have done to stop catastrophic events if you had known in advance what you know now?
We have the moral obligation to take action in every way we can.
The future is in our hands. The stakes are the highest they have ever been. The Large Hadron Collider, developed by the European Centre for Nuclear Research (CERN), is a dangerous instrument. The start-up on April 5 has initiated a still more reckless use of the LHC’s capabilities.
Apr 24, 2015
Posted by LHC Kritik in categories: astronomy, big data, complex systems, computing, cosmology, energy, engineering, ethics, existential risks, futurism, general relativity, governance, government, gravity, hardware, information science, innovation, internet, journalism, law, life extension, media & arts, military, nuclear, nuclear energy, particle physics, philosophy, physics, policy, quantum physics, science, security, singularity, space, space travel, supercomputing, sustainability, time travel, transhumanism, transparency, treaties
Jan 3, 2015
Posted by Rob Chamberlain in categories: architecture, augmented reality, automation, big data, business, complex systems, computing, cybercrime/malcode, disruptive technology, economics, encryption, engineering, ethics, finance, futurism, geopolitics, governance, government, human trajectories, information science, innovation, internet, law, law enforcement, military, neuroscience, philosophy, policy, privacy, robotics/AI, science, security, software, strategy, supercomputing, transhumanism, transparency
Quoted: “Tony Williams, the founder of the British-based legal consulting firm, said that law firms will see nearly all their process work handled by artificial intelligence robots. The robotic undertaking will revolutionize the industry, ‘completely upending the traditional associate leverage model.’” And: “The report predicts that the artificial intelligence technology will replace all the work involving processing information, along with a wide variety of overturned policies.”
Read the article here > https://hacked.com/legal-consulting-firm-believes-artificial-intelligence-replace-lawyers-2030/
Nov 23, 2014
Posted by Rob Chamberlain in categories: automation, big data, biotech/medical, bitcoin, business, complex systems, computing, disruptive technology, economics, encryption, energy, engineering, ethics, finance, futurism, geopolitics, government, hacking, hardware, human trajectories, information science, innovation, internet, journalism, law, materials, military, neuroscience, open access, open source, philosophy, physics, policy, privacy, science, scientific freedom, security, software, supercomputing, transparency
Quoted: “Ethereum will also be a decentralised exchange system, but with one big distinction. While Bitcoin allows transactions, Ethereum aims to offer a system by which arbitrary messages can be passed to the blockchain. More to the point, these messages can contain code, written in a Turing-complete scripting language native to Ethereum. In simple terms, Ethereum claims to allow users to write entire programs and have the blockchain execute them on the creator’s behalf. Crucially, Turing-completeness means that in theory any program that could be made to run on a computer should run in Ethereum.” And, quoted: “As a more concrete use-case, Ethereum could be utilised to create smart contracts, pieces of code that once deployed become autonomous agents in their own right, executing pre-programmed instructions. An example could be escrow services, which automatically release funds to a seller once a buyer verifies that they have received the agreed products.”
Read Part One of this Series here » Ethereum — Bitcoin 2.0? And, What Is Ethereum.
Read Part Two of this Series here » Ethereum — Opportunities and Challenges.
Read Part Three of this Series here » Ethereum — A Summary.
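The escrow use-case quoted above can be sketched as a small state machine. This is a minimal illustration in Python, not Ethereum’s actual contract API (real contracts are written in an EVM language such as Solidity, and all names here are hypothetical): once “deployed”, the contract holds the buyer’s funds and its pre-programmed instructions release them only when the buyer confirms receipt.

```python
class EscrowContract:
    """Illustrative escrow 'smart contract': an autonomous agent that
    executes pre-programmed instructions (hypothetical sketch only)."""

    def __init__(self, buyer, seller, price):
        self.buyer = buyer
        self.seller = seller
        self.price = price
        self.balance = 0
        self.state = "AWAITING_PAYMENT"

    def deposit(self, sender, amount):
        # Only the buyer can fund the escrow, and only with the agreed price.
        if (sender == self.buyer and amount == self.price
                and self.state == "AWAITING_PAYMENT"):
            self.balance += amount
            self.state = "AWAITING_DELIVERY"

    def confirm_receipt(self, sender):
        # The buyer's confirmation is the sole trigger that releases
        # the escrowed funds to the seller.
        if sender == self.buyer and self.state == "AWAITING_DELIVERY":
            payout, self.balance = self.balance, 0
            self.state = "COMPLETE"
            return {"to": self.seller, "amount": payout}
        return None


contract = EscrowContract(buyer="alice", seller="bob", price=100)
contract.deposit("alice", 100)
transfer = contract.confirm_receipt("alice")
print(transfer)  # {'to': 'bob', 'amount': 100}
```

The point of the pattern is that neither party has to trust the other or a middleman: the release condition is fixed in code before any money moves.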
Nov 20, 2014
Posted by Rob Chamberlain in categories: automation, big data, bitcoin, business, complex systems, computing, disruptive technology, economics, encryption, engineering, ethics, geopolitics, government, hacking, hardware, information science, innovation, law, materials, open access, open source, philosophy, policy, polls, privacy, science, security, software, supercomputing, transparency, treaties
Quoted: “Bitcoin technology offers a fundamentally different approach to vote collection with its decentralized and automated secure protocol. It solves the problems of both paper ballot and electronic voting machines, enabling a cost effective, efficient, open system that is easily audited by both individual voters and the entire community. Bitcoin technology can enable a system where every voter can verify that their vote was counted, see votes for different candidates/issues cast in real time, and be sure that there is no fraud or manipulation by election workers.”
Read the article here » http://www.entrepreneur.com/article/239809?hootPostID=ba473f.….aacc8412c7
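The verification properties claimed in the quote — every voter can check their own vote was counted, and anyone can audit the whole record — rest on an append-only hash chain. The following is a deliberately minimal Python sketch of that idea (illustrative only; a real voting system would also need ballot secrecy and voter identity checks, which this omits):

```python
import hashlib


class VoteChain:
    """Append-only hash-chained vote ledger: each block commits to the
    previous one, so any later tampering breaks the chain."""

    def __init__(self):
        self.blocks = []  # list of (prev_hash, vote, block_hash)

    def cast(self, vote):
        prev = self.blocks[-1][2] if self.blocks else "0" * 64
        h = hashlib.sha256((prev + vote).encode()).hexdigest()
        self.blocks.append((prev, vote, h))
        return h  # receipt the voter keeps

    def verify(self, receipt):
        # A voter checks that their vote is present in the ledger.
        return any(h == receipt for _, _, h in self.blocks)

    def audit(self):
        # Anyone can recompute every hash to confirm nothing was
        # altered or reordered by election workers.
        prev = "0" * 64
        for p, vote, h in self.blocks:
            if p != prev:
                return False
            if hashlib.sha256((p + vote).encode()).hexdigest() != h:
                return False
            prev = h
        return True


chain = VoteChain()
receipt = chain.cast("candidate-A")
chain.cast("candidate-B")
print(chain.verify(receipt), chain.audit())  # True True
```

Changing any recorded vote after the fact invalidates its stored hash, so a full audit fails — which is the mechanism behind the article’s claim that the tally is “easily audited by both individual voters and the entire community.”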
Nov 19, 2014
Posted by Rob Chamberlain in categories: automation, big data, biotech/medical, bitcoin, business, complex systems, computing, disruptive technology, economics, education, encryption, engineering, environmental, ethics, finance, futurism, geopolitics, hacking, information science, law, materials, open access, policy, science, security, software, supercomputing, transparency
Quoted: “The Factom team suggested that its proposal could be leveraged to execute some of the crypto 2.0 functionalities that are beginning to take shape on the market today. These include creating trustless audit chains, property title chains, record keeping for sensitive personal, medical and corporate materials, and public accountability mechanisms.
During the AMA, the Factom president was asked how the technology could be leveraged to shape the average person’s daily life.”
“Factom creates permanent records that can’t be changed later. In a Factom world, there’s no more robo-signing scandals. In a Factom world, there are no more missing voting records. In a Factom world, you know where every dollar of government money was spent. Basically, the whole world is made up of record keeping and, as a consumer, you’re at the mercy of the fragmented systems that run these records.”
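The “permanent records that can’t be changed later” idea typically works by committing many record fingerprints to a single hash — a Merkle root — and anchoring that one value in a blockchain. This Python sketch shows the principle only; it is not Factom’s actual protocol, and the record names are invented for illustration:

```python
import hashlib


def sha(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def merkle_root(records):
    """Pairwise-hash record fingerprints up to a single root; anchoring
    that one root commits to every record beneath it."""
    level = [sha(r.encode()) for r in records]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last hash on odd levels
        level = [sha((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]


records = ["title deed #1042", "vote record 2014-11-04", "budget line 77"]
anchored = merkle_root(records)  # this single hash is what gets anchored

# Any later edit to any record changes the root, so tampering with
# titles, votes, or spending records is detectable against the anchor.
records[0] = "title deed #1042 (quietly amended)"
print(anchored == merkle_root(records))  # False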
Nov 17, 2014
Posted by Rob Chamberlain in categories: big data, bitcoin, business, complex systems, computing, disruptive technology, economics, electronics, encryption, engineering, ethics, finance, futurism, geopolitics, hacking, human trajectories, information science, innovation, internet, law, materials, media & arts, military, open access, open source, policy, privacy, science, scientific freedom, security, software, supercomputing
Preamble: Bitcoin 1.0 is currency — the deployment of cryptocurrencies in applications related to cash, such as currency transfer, remittance, and digital payment systems. Bitcoin 2.0 is contracts — the whole slate of economic, market, and financial applications using the blockchain that go beyond simple cash transactions: stocks, bonds, futures, loans, mortgages, titles, smart property, and smart contracts.
Bitcoin 3.0 is blockchain applications beyond currency, finance, and markets, particularly in the areas of government, health, science, literacy, culture, and art.
Read the article here » http://ieet.org/index.php/IEET/more/swan20141110
Oct 1, 2014
Posted by Steve Fuller in categories: aging, biological, bionic, biotech/medical, ethics, futurism, genetics, homo sapiens, human trajectories, life extension, philosophy, policy, transhumanism
What follows is my position piece for London’s FutureFest 2013, the website for which no longer exists.
Medicine is a very ancient practice. In fact, it is so ancient that it may have become obsolete. Medicine aims to restore the mind and body to their natural state relative to an individual’s stage in the life cycle. The idea has been to live as well as possible but also to die well when the time comes. The sense of what is ‘natural’ was tied to statistically normal ways of living in particular cultures. Past conceptions of health dictated future medical practice. In this respect, medical practitioners may have been wise, but they certainly were not progressive.
However, this began to change in the mid-19th century when the great medical experimenter, Claude Bernard, began to champion the idea that medicine should be about the indefinite delaying, if not outright overcoming, of death. Bernard saw organisms as perpetual motion machines in an endless struggle to bring order to an environment that always threatens to consume them. That ‘order’ consists in sustaining the conditions needed to maintain an organism’s indefinite existence. Toward this end, Bernard enthusiastically used animals as living laboratories for testing his various hypotheses.
Historians identify Bernard’s sensibility with the advent of ‘modern medicine’, an increasingly high-tech and aspirational enterprise, dedicated to extending the full panoply of human capacities indefinitely. On this view, scientific training trumps practitioner experience, radically invasive and reconstructive procedures become the norm, and death on a physician’s watch is taken to be the ultimate failure. Humanity 2.0 takes this way of thinking to the next level, which involves the abolition of medicine itself. But what exactly would that mean – and what would replace it?
Sep 29, 2014
Posted by Steve Fuller in categories: ethics, genetics, government, law, philosophy, policy, science
In 1906 the great American pragmatist philosopher William James delivered a public lecture entitled, ‘The Moral Equivalent of War’. James imagined a point in the foreseeable future when states would rationally decide against military options to resolve their differences. While he welcomed this prospect, he also believed that the abolition of warfare would remove an important pretext for people to think beyond their own individual survival and toward some greater end, perhaps one that others might end up enjoying more fully. What then might replace war’s altruistic side?
It is telling that the most famous political speech to adopt James’ title was US President Jimmy Carter’s 1977 call for national energy independence in response to the Arab oil embargo. Carter characterised the battle ahead as really about America’s own ignorance and complacency rather than some Middle Eastern foe. While Carter’s critics pounced on his trademark moralism, they should have looked instead to his training as a nuclear scientist. Historically speaking, nothing can beat a science-led agenda to inspire a long-term, focused shift in a population’s default behaviours. Louis Pasteur perhaps first exploited this point by declaring war on the germs that he had shown lay behind not only human and animal disease but also France’s failing wine and silk industries. Moreover, Richard Nixon’s ‘war on cancer’, first declared in 1971, continues to be prosecuted on the terrain of genomic medicine, even though arguably a much greater impact on the human condition could have been achieved by equipping the ongoing ‘war on poverty’ with comparable resources and resoluteness.
Science’s ability to step in as war’s moral equivalent has less to do with whatever personal authority scientists command than with the universal scope of scientific knowledge claims. Even if today’s science is bound to be superseded, its import potentially bears on everyone’s life. Once that point is understood, it is easy to see how each person could be personally invested in advancing the cause of scientific research. In the heyday of the welfare state, that point was generally understood. Thus, in The Gift Relationship, perhaps the most influential work in British social policy of the past fifty years, Richard Titmuss argued, by analogy with voluntary blood donation, that citizens have a duty to participate as research subjects, but not because of the unlikely event that they might directly benefit from their particular experiment. Rather, citizens should participate because they would have already benefitted from experiments involving their fellow citizens and will continue to benefit similarly in the future.
However, this neat fit between science and altruism has been undermined over the past quarter-century on two main fronts. One stems from the legacy of Nazi Germany, where the duty to participate in research was turned into a vehicle to punish undesirables by studying their behaviour under various ‘extreme conditions’. Indicative of the horrific nature of this research is that even today few are willing to discuss any scientifically interesting results that might have come from it. Indeed, the pendulum has swung the other way. Elaborate research ethics codes enforced by professional scientific bodies and university ‘institutional review boards’ protect both scientist and subject in ways that arguably discourage either from having much to do with the other. Even defenders of today’s ethical guidelines generally concede that had such codes been in place over the past two centuries, science would have progressed at a much slower pace.