Category: alien life – Page 153
By Jason Dorrier — Singularity Hub
When scientists looked at Mars through early telescopes, they saw a fuzzy, rust-colored globe scored by mysterious dark gashes some believed were alien canals. Later, armed with sharper images, we scoffed at such naiveté. Mars is obviously dry as a bone and uninhabited. Now, with a great deal more information from rovers and satellites, we believe Mars was once wet. As for life? The jury’s still out.
It shows how much we still have to learn (and are learning) about our solar system. Not too long ago, we only suspected one ocean of liquid water beyond Earth (on Europa). Now, thanks to robotic explorers, like NASA’s Dawn and Cassini missions, we’re finding evidence of oceans throughout the solar system. Read more
Jason Koebler — Motherboard
It’s not easy convincing the world you’ve found aliens. But that’s what one British professor says he’s done, over and over again. His latest proof, he tells me, is his strongest yet. Should we take him seriously?
In the fall of 2013, Milton Wainwright, a researcher at the University of Sheffield in the United Kingdom, made international headlines when he claimed that microorganisms he found in the stratosphere were not of this world. The organisms, believed to come from a class of algae called diatoms, were collected roughly 16 miles above the Earth’s surface using a balloon and, according to Wainwright, have been raining down on the Earth, carried by meteorites, for perhaps many millennia. Read More
New Book: An Irreverent Singularity Funcyclopedia, by Mondo 2000’s R.U. Sirius.
Posted in 3D printing, alien life, automation, big data, bionic, bioprinting, biotech/medical, complex systems, computing, cosmology, cryptocurrencies, cybercrime/malcode, cyborgs, defense, disruptive technology, DNA, driverless cars, drones, economics, electronics, encryption, energy, engineering, entertainment, environmental, ethics, existential risks, exoskeleton, finance, first contact, food, fun, futurism, general relativity, genetics, hacking, hardware, human trajectories, information science, innovation, internet, life extension, media & arts, military, mobile phones, nanotechnology, neuroscience, nuclear weapons, posthumanism, privacy, quantum physics, robotics/AI, science, security, singularity, software, solar power, space, space travel, supercomputing, time travel, transhumanism
Quoted: “Legendary cyberculture icon (and iconoclast) R.U. Sirius and Jay Cornell have written a delicious funcyclopedia of the Singularity, transhumanism, and radical futurism, just published on January 1.” And: “The book, “Transcendence – The Disinformation Encyclopedia of Transhumanism and the Singularity,” is a collection of alphabetically-ordered short chapters about artificial intelligence, cognitive science, genomics, information technology, nanotechnology, neuroscience, space exploration, synthetic biology, robotics, and virtual worlds. Entries range from Cloning and Cyborg Feminism to Designer Babies and Memory-Editing Drugs.” And: “If you are young and don’t remember the 1980s you should know that, before Wired magazine, the cyberculture magazine Mondo 2000 edited by R.U. Sirius covered dangerous hacking, new media and cyberpunk topics such as virtual reality and smart drugs, with an anarchic and subversive slant. As it often happens the more sedate Wired, a watered-down later version of Mondo 2000, was much more successful and went mainstream.”
Read the article here: https://hacked.com/irreverent-singularity-funcyclopedia-mondo-2000s-r-u-sirius/
Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism
Posted in alien life, biological, cyborgs, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism
Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.
I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.
But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.
Here it is worth recalling that the Cold War succeeded on its own terms: None of the worst case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.
Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.
In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’, the phrase that the British philosopher Dylan Evans has coined for a demonstrated capacity that people have to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.
Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.
Bostrom’s superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us than because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.
Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?
The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition, or who acquired their values before this transition, would draw a similar conclusion.
By Clément Vidal — Vrije Universiteit Brussel, Belgium.
I am happy to inform you that I just published a book which deals at length with our cosmological future. I made a short book trailer introducing it, and the book has been mentioned in the Huffington Post and H+ Magazine.

About the book:
In this fascinating journey to the edge of science, Vidal takes on big philosophical questions: Does our universe have a beginning and an end, or is it cyclic? Are we alone in the universe? What is the role of intelligent life, if any, in cosmic evolution? Grounded in science and committed to philosophical rigor, this book presents an evolutionary worldview where the rise of intelligent life is not an accident, but may well be the key to unlocking the universe’s deepest mysteries. Vidal shows how the fine-tuning controversy can be advanced with computer simulations. He also explores whether natural or artificial selection could hold on a cosmic scale. In perhaps his boldest hypothesis, he argues that signs of advanced extraterrestrial civilizations are already present in our astrophysical data. His conclusions invite us to see the meaning of life, evolution, and intelligence from a novel cosmological framework that should stir debate for years to come.
http://clement.vidal.philosophons.com
You can get 20% off with the discount code ‘Vidal2014’ (valid until 31st July)!
Kurzweil AI
“It seems highly unlikely that we are alone.”
There are some 100 million other places in the Milky Way galaxy that could support life above the microbial level, reports a group of astronomers in the journal Challenges (open access), based on a new computation method to examine data from planets orbiting other stars in the universe.
“This study does not indicate that complex life exists on that many planets; we’re saying that there are planetary conditions that could support it,” according to the paper’s authors. “Complex life doesn’t mean intelligent life — though it doesn’t rule it out — or even animal life, but simply that organisms larger and more complex than microbes could exist in a number of different forms,” the researchers explain.
The Huffington Post by Dominique Mosbergen
Aliens almost definitely exist.
At least, that’s what two astronomers told Congress this week, as they appealed for continued funding to research life beyond Earth.
According to ABC News, Dan Werthimer, director of the SETI [search for extraterrestrial intelligence] Research Center at the University of California, Berkeley, told the House Committee on Science, Space and Technology Wednesday that the possibility of extraterrestrial microbial life is “close to 100 percent.”
Book Review: The Human Race to the Future by Daniel Berleant (2013) (A Lifeboat Foundation publication)
Posted in alien life, asteroid/comet impacts, biotech/medical, business, climatology, disruptive technology, driverless cars, drones, economics, education, energy, engineering, ethics, evolution, existential risks, food, futurism, genetics, government, habitats, hardware, health, homo sapiens, human trajectories, information science, innovation, life extension, lifeboat, nanotechnology, neuroscience, nuclear weapons, philosophy, policy, posthumanism, robotics/AI, science, scientific freedom, security, singularity, space, space travel, sustainability, transhumanism
From CLUBOF.INFO
The Human Race to the Future (2014 edition), a publication of the scientific Lifeboat Foundation think tank first made available in 2013, covers a number of dilemmas fundamental to the human future and of great interest to all readers. Daniel Berleant’s approach to popularizing science is more entertaining than that of many other science writers, and this book contains many surprises and much useful knowledge.
Some of the science covered in The Human Race to the Future, such as future ice ages and predictions of where natural evolution will take us next, is not immediately relevant to our lives and politics, but it still makes for fascinating reading. The rest of the science in the book is closely tied to society’s immediate future, and deserves serious consideration by commentators, activists and policymakers, because it is only going to get more important as the world moves forward.
The book makes many warnings and calls for caution, but also offers an optimistic forecast about how society might look in the future. For example, it is “economically possible” to have a society where all the basics are free and all work is essentially optional (a means for people to turn their hobbies into a way of earning more possessions) (p. 6–7).
A transhumanist possibility of interest in The Human Race to the Future is the change in how people communicate, including closing the gap between thought and action to create instruments (maybe even mechanical bodies) that respond to thought alone. The world is projected to move away from keyboards and touchscreens towards mind-reading interfaces (p. 13–18). These would be invaluable for people with physical disabilities, and for soldiers in the arms race to improve response times in lethal situations.
To critique the above point made in the book, it is likely that drone operators and power-armor wearers in future armies would be very keen to link their brains directly to their hardware, and the emerging mind-reading technology would make it possible. However, there is reason to doubt the possibility of effective teamwork while relying on such interfaces. Verbal or visual interfaces are actually more attuned to humans as social animals, letting us hear or see our colleagues’ thoughts and review their actions as they happen, which allows for better teamwork. A soldier, for example, may be happy with his own improved reaction times when controlling equipment directly with his brain, but his fellow soldiers and officers may only be irritated by the lack of an intermediate phase in which to see his intent and rescind his actions before he completes them. Some helicopter and vehicle accidents are averted only by one crewman seeing another’s error, and correcting him in time. If vehicles were controlled by mind-reading, these errors would increasingly prove fatal.
Reading and research are another area that could develop in a radical new direction, unlike anything before in the history of communication. The Human Race to the Future speculates that beyond articles as they exist now (e.g. Wikipedia articles) there could be custom-generated articles specific to the user’s research goal or browsing. One’s own query could shape the layout and content of each article as it is generated. This way, readers would not need to wade through reams of irrelevant information to answer a very specific query (p. 19–24).
Echoing a view I have expressed in my own writing, the book sees industrial civilization as being burdened above all by too much centralization, e.g. oil refineries. This endangers civilization, threatening collapse if something should later go wrong (p. 32, 33). For example, an electromagnetic pulse (EMP) resulting from a solar storm could cause serious damage because of the centralization of electrical infrastructure. Digital sabotage could also threaten such infrastructure (p. 34, 35).
The solution to this problem is decentralization, as “where centralization creates vulnerability, decentralization alleviates it” (p. 37). Solar cells are one example of decentralized power production (p. 37–40), but there is also much promise in home fuel production using such things as ethanol and biogas (p. 40–42). Beyond fuel, there is also much benefit that could come from decentralized, highly localized food production, even “labor-free” and “using robots” (p. 42–45). These possibilities deserve maximum attention for the sake of world welfare, considering the increasing UN concerns about getting adequate food and energy supplies to the growing global population. There should not need to be a food vs. fuel debate, as the only acceptable path is to engineer solutions to both problems. An additional option for increasing food production is artificial meat, which should aim to replace the reliance on livestock. Reliance on livestock has an “intrinsic wastefulness” that artificial meat does not have, so it makes sense for artificial meat to become the cheapest option in the long run (p. 62–65). Perhaps stranger and more profound is the option of genetically enhancing humans to make better use of food and other resources (p. 271–274).
On a related topic, sequencing our own genome could have “major impacts, from medicine to self-knowledge” (p. 46–51). However, the book does not mention synthetic biology or the potential impacts of J. Craig Venter’s work, as explained in such works as Life at the Speed of Light. This would certainly be worth adding, should future editions of the book aim to include some additional detail.
Closely related to synthetic biology is the book’s discussion of genetic engineering of plants to produce healthier or more abundant food. Alternatively, plants could be genetically programmed to extract metal compounds from the soil (p. 213–215). However, we must be aware that this could similarly lead to threats, such as “superweeds that overrun the world” similar to the flora in John Wyndham’s The Day of the Triffids (p. 197–219). Synthetic biology products could also accidentally expose civilization to microorganisms with unknown consequences, perhaps even as dangerous as the alien contagions depicted in fiction. On the other hand, they could lead to potentially unlimited resources, with strange vats of bacteria capable of manufacturing oil from simple chemical feedstocks. Indeed, “genetic engineering could be used to create organic prairies that are useful to humans” (p. 265), literally redesigning and upgrading our own environment to give us more resources.
The book advocates that politics should focus on long-term thinking, e.g. to deal with global warming, and should involve “synergistic cooperation” rather than “narrow national self-interest” (p. 66–75). This is a very important point, and may coincide with the broader prediction that nation-states in their present form are flawed and too slow-moving. Nation-states may be increasingly incapable of meeting the challenges of an interconnected world in which national narratives produce less and less legitimate security thinking and transnational identities become more important.
Close to issues of security, The Human Race to the Future considers nuclear proliferation, and argues that the reasons for nuclear proliferation need to be investigated in more depth for the sake of reducing incentives. To avoid further research, on the grounds that it has already been sufficiently completed, is “downright dangerous” (p. 89–94). Such a call is certainly necessary at a time when there is still hostility against developing countries with nuclear programs, hostility that is simply inflammatory and makes the world more dangerous. To a large extent, nuclear proliferation is inevitable in a world where countries are permitted to bomb one another because of little more than suspicions and fears.
Another area covered in this book that is worth highlighting is the AI singularity, described here as the point at which a computer is sophisticated enough to design a more powerful computer than itself. While it could mean unlimited engineering and innovation without the need for human imagination, there are also great risks: for example, a “corporbot” or “robosoldier” determined to promote the interests of an organization or defeat enemies, respectively. These, as science fiction has repeatedly warned, could become runaway entities that no longer listen to human orders (p. 83–88, 122–127).
A more distant possibility explored in Berleant’s book is the colonization of other planets in the solar system (p. 97–121, 169–174). There is the well-taken point that technological pioneers should already be trying to settle remote and inhospitable locations on Earth, to perfect the technology and society of self-sustaining settlements (Antarctica?) (p. 106). Disaster scenarios considered in the book that may necessitate us moving off-world in the long term include a hydrogen sulfide poisoning apocalypse (p. 142–146) and a giant asteroid impact (p. 231–236).
The Human Race to the Future is a realistic and practical guide to the dilemmas fundamental to the human future. Of particular interest to general readers, policymakers and activists should be the issues that concern the near future, such as genetic engineering aimed at conservation of resources and the achievement of abundance.