Cortese Franco – Lifeboat News: The Blog
https://lifeboat.com/blog

Human Destiny is to Eliminate Death — Essays, Rants & Arguments on Immortalism (Edited Volume)
https://lifeboat.com/blog/2013/07/human-destiny-is-to-eliminate-death-essays-rants-arguments-on-immortalism-edited-volume
Wed, 03 Jul 2013

Immortal Life has compiled an edited volume of essays, arguments, and debates about Immortalism, titled Human Destiny is to Eliminate Death, from many esteemed ImmortalLife.info authors (a good number of whom are also Lifeboat Foundation Advisory Board members), such as Martine Rothblatt (Ph.D., MBA, J.D.), Marios Kyriazis (MD, MSc, MIBiol, CBiol), Maria Konovalenko (M.Sc.), Mike Perry (Ph.D.), Dick Pelletier, Khannea Suntzu, David Kekich (Founder & CEO of MaxLife Foundation), Hank Pellissier (Founder of Immortal Life), Eric Schulke & Franco Cortese (the previous Managing Directors of Immortal Life), Gennady Stolyarov II, Jason Xu (Director of Longevity Party China and Longevity Party Taiwan), Teresa Belcher, Joern Pallensen and more. The anthology was edited by Immortal Life Founder & Senior Editor Hank Pellissier.

This one-of-a-kind collection features ten debates that originated at ImmortalLife.info, plus 36 articles, essays and diatribes by many of IL’s contributors, on topics ranging from nutrition to mind-filing, from telomeres to “Deathism”, from libertarian life-extending suggestions to religion’s role in Radical Life Extension (RLE) to immortalism as a human rights issue.

The book is illustrated with famous paintings on the subject of aging and death, by artists such as Goya, Picasso, Cezanne, Dali, and numerous others.

The book was designed by Wendy Stolyarov; edited by Hank Pellissier; published by the Center for Transhumanity. This edited volume is the first in a series of quarterly anthologies planned by Immortal Life.

Find it on Amazon HERE and on Smashwords HERE

This Immortal Life Anthology includes essays, articles, rants and debates by and between some of the leading voices in Immortalism, Radical Life-Extension, Superlongevity and Anti-Aging Medicine.

A (Partial) List of the Debaters & Essay Contributors:

Martine Rothblatt Ph.D, MBA, J.D. — inventor of satellite radio, founder of Sirius XM and founder of the Terasem Movement, which promotes technological immortality. Dr. Rothblatt is the author of books on gender freedom (Apartheid of Sex, 1995), genomics (Unzipped Genes, 1997) and xenotransplantation (Your Life or Mine, 2003).

Marios Kyriazis MD, MSc, MIBiol, CBiol. founded the British Longevity Society, was the first to address the free-radical theory of aging in a formal mainstream UK medical journal, has authored dozens of books on life-extension and has discussed indefinite longevity in 700 articles, lectures and media appearances globally.

Maria Konovalenko is a molecular biophysicist and the program coordinator for the Science for Life Extension Foundation. She earned her M.Sc. degree in Molecular Biological Physics at the Moscow Institute of Physics and Technology. She is a co-founder of the International Longevity Alliance.

Jason Xu is the director of Longevity Party China and Longevity Party Taiwan, and he was an intern at SENS.

Mike Perry, Ph.D., has worked for Alcor since 1989 as Care Services Manager. He has authored or contributed to the automated cooldown and perfusion modeling programs, and he is a regular contributor to Alcor newsletters. He has been a member of Alcor since 1984.

David A. Kekich, Founder, President & CEO of the Maximum Life Extension Foundation, works to raise funds for life-extension research. He serves as a Board Member of the American Aging Association, the Life Extension Buyers’ Club and the Alcor Life Extension Foundation Patient Care Trust Fund. He authored Smart, Strong and Sexy at 100?, a how-to book for extreme life extension.

Eric Schulke is the founder of the Movement for Indefinite Life Extension (MILE). For four years he was a Director and Teams Coordinator and ran Marketing & Outreach at the Immortality Institute, now known as LongeCity. He is Co-Managing Director of Immortal Life.

Hank Pellissier is the Founder & Senior Editor of ImmortalLife.info. Previously, he was the founder/director of Transhumanity.net. Before that, he was Managing Director of the Institute for Ethics and Emerging Technologies (ieet.org). He’s written over 120 futurist articles for IEET, Hplusmagazine.com, Transhumanity.net, ImmortalLife.info and the World Future Society.

Franco Cortese is on the Lifeboat Foundation’s Scientific Advisory Board (Life-Extension Sub-Board) and its Futurism Board. He is Co-Managing Director of Immortal Life, alongside Eric Schulke, and a Staff Editor for Transhumanity. He has written over 40 futurist articles and essays for H+ Magazine, The Institute for Ethics & Emerging Technologies, Immortal Life, Transhumanity and The Rational Argumentator.

Gennady Stolyarov II is a Staff Editor for Transhumanity, a Contributor to Enter Stage Right, Le Quebecois Libre, Rebirth of Reason and the Ludwig von Mises Institute, a Senior Writer for The Liberal Institute, and Editor-in-Chief of The Rational Argumentator.

Brandon King is Co-Director of the United States Longevity Party.

Khannea Suntzu is a transhumanist and virtual activist, and has been covered in articles in Le Monde, CGW and Forbes.

Teresa Belcher is an author, blogger, Buddhist, consultant on anti-aging, life extension, healthy lifestyle and happiness, and owner of Anti-Aging Insights.

Dick Pelletier is a weekly columnist who writes about future science and technologies for numerous publications.

Joern Pallensen has written articles for Transhumanity and the Institute for Ethics and Emerging Technologies.

CONTENTS:

Editor’s Introduction

DEBATES

1. In The Future, With Immortality, Will There Still Be Children?

2. Will Religions promising “Heaven” just Vanish, when Immortality on Earth is attained?

3. In the Future when Humans are Immortal — what will happen to Marriage?

4. Will Immortality Change Prison Sentences? Will Execution and Life-Behind-Bars be… Too Sadistic?

5. Will Government Funding End Death, or will it be Attained by Private Investment?

6. Will “Meatbag” Bodies ever be Immortal? Is “Cyborgization” the only Logical Path?

7. When Immortality is Attained, will People be More — or Less — Interested in Sex?

8. Should Foes of Immortality be Ridiculed as “Deathists” and “Suicidalists”?

9. What’s the Best Strategy to Achieve Indefinite Life Extension?

ESSAYS

1. Maria Konovalenko:

I am an “Aging Fighter” Because Life is the Main Human Right, Demand, and Desire

2. Mike Perry:

Deconstructing Deathism — Answering Objections to Immortality

3. David A. Kekich:

How Old Are You Now?

4. David A. Kekich:

Live Long… and the World Prospers

5. David A. Kekich:

107,000,000,000 — what does this number signify?

6. Franco Cortese:

Religion vs. Radical Longevity: Belief in Heaven is the Biggest Barrier to Eternal Life?!

7. Dick Pelletier:

Stem Cells and Bioprinters Take Aim at Heart Disease, Cancer, Aging

8. Dick Pelletier:

Nanotech to Eliminate Disease, Old Age; Even Poverty

9. Dick Pelletier:

Indefinite Lifespan Possible in 20 Years, Expert Predicts

10. Dick Pelletier:

End of Aging: Life in a World where People no longer Grow Old and Die

11. Eric Schulke:

We Owe Pursuit of Indefinite Life Extension to Our Ancestors

12. Eric Schulke:

Radical Life Extension and the Spirit at the core of a Human Rights Movement

13. Eric Schulke:

MILE: Guide to the Movement for Indefinite Life Extension

14. Gennady Stolyarov II:

The Real War and Why Inter-Human Wars Are a Distraction

15. Gennady Stolyarov II:

The Breakthrough Prize in Life Sciences — turning the tide for life extension

16. Gennady Stolyarov II:

Six Libertarian Reforms to Accelerate Life Extension

17. Hank Pellissier:

Wake Up, Deathists! — You DO Want to LIVE for 10,000 Years!

18. Hank Pellissier:

Top 12 Towns for a Healthy Long Life

19. Hank Pellissier:

This list of 30 Billionaires — Which One Will End Aging and Death?

20. Hank Pellissier:

People Who Don’t Want to Live Forever are Just “Suicidal”

21. Hank Pellissier:

Eluding the Grim Reaper with 23andMe.com

22. Hank Pellissier:

Sixty Years Old — is my future short and messy, or long and glorious?

23. Jason Xu:

The Unstoppable Longevity Virus

24. Joern Pallensen:

Vegetarians Live Longer, Happier Lives

25. Franco Cortese:

Killing Deathist Cliches: Death to “Death-Gives-Meaning-to-Life”

26. Marios Kyriazis:

Environmental Enrichment — Practical Steps Towards Indefinite Lifespans

27. Khannea Suntzu:

Living Forever — the Biggest Fear in the most Audacious Hope

28. Martine Rothblatt:

What is Techno-Immortality?

29. Teresa Belcher:

Top Ten Anti-Aging Supplements

30. Teresa Belcher:

Keep Your Brain Young! — tips on maintaining healthy cognitive function

31. Teresa Belcher:  

Anti-Aging Exercise, Diet, and Lifestyle Tips

32. Teresa Belcher:

How Engineered Stem Cells May Enable Youthful Immortality

33. Teresa Belcher:

Nanomedicine — an Introductory Explanation

34. Rich Lee:

“If Eternal Life is a Medical Possibility, I Will Have It Because I Am A Tech Pirate”

35. Franco Cortese:

Morality ==> Immortality

36. Franco Cortese:

Longer Life or Limitless Life?

Help Conquer Death with Grants & Research Funding from LongeCity!
https://lifeboat.com/blog/2013/06/help-conquer-death-with-grants-research-funding-from-longecity
Mon, 17 Jun 2013

LongeCity has been doing advocacy and research for indefinite life extension since 2002. With the rise in prominence and public popularity of the Methuselah Foundation and the M-Prize over the past few years, it is sometimes easy to forget the smaller-scale research initiatives run by other organizations.

LongeCity seeks to conquer the involuntary blight of death through advocacy and research. They award small grants to promising small-scale research initiatives focused on longevity. The time to be doing this is now, with the popularity and public awareness of Citizen Science growing. The 2010 H+ Conference’s theme was The Rise of the Citizen Scientist. Open-source and bottom-up organization have been hallmarks of the H+ and TechProg communities for a while now, and the rise of citizen science parallels this trend.

Anyone can have a great idea, and there are many low-hanging fruits that can provide immense value to the field of life extension without necessitating large-scale research initiatives, expensive and highly-trained staff or costly laboratory equipment. These low-hanging fruit can provide just as much benefit as large-scale projects – indeed, they have the potential to provide more benefit per unit of funding. They don’t call them low-hanging fruit for nothing – they are, after all, potentially quite fruitful.


In the past LongeCity has raised funding by matching donations made by the community to fund a research project that used lasers to ablate (i.e. remove) cellular lipofuscin. The community raised $8,000, which was then matched by up to $16,000 from the SENS Foundation; in the end over $18,000 was raised toward this research. A video describing the process can be found here. Recall that one of Aubrey de Grey’s SENS strategies is to remove cellular lipofuscin via genetically engineered bacteria. Another small-scale research project funded by LongeCity involved mitochondrial uncoupling in nematodes. To see more about this research success, see here.

LongeCity’s second successfully funded research initiative was this mitochondrial uncoupling project; more information can be found here.

LongeCity’s third success was their Microglia Stem Cell project in 2010, which studied the benefits of transplanting microglia into the aging nervous system. The full proposal can be found here, and more information on this successful LongeCity research initiative can be found here.

LongeCity’s fourth research-funding success was on Cryonics in 2012, specifically uncovering the mechanisms of cryoprotectant toxicity.

These are real projects with real benefits that LongeCity is funding. Even if you’re not a research scientist, you can have an impact on the righteous fight to end the involuntary blight of death by applying for a small-scale research grant from LongeCity. What have you got to lose? Really? Because it seems to me that you have just about everything to gain.

LongeCity has also contributed toward larger-scale research and development initiatives in the past. They have sponsored projects by Alcor, the SENS Foundation and the Methuselah Foundation. They crowdsourced a longevity-targeted multivitamin supplement called VIMMORTAL based on bottom-up community suggestion and deliberation (one of the main benefits of crowdsourcing).

So? Are you interested in impacting the movement toward indefinite life extension? Then please take a look at the various types of projects listed below that LongeCity might be interested in funding.

—   —   —   —   —   —   —   —   —

The following types of projects can be supported:

• Science support:  contribution to a scientific experiment that can be carried out in a short period of time with limited resources. The experiment should be distinguishable from the research that is already funded by other sources. This could be a side-experiment in an existing programme, a pilot experiment to establish feasibility, or resources for an undergrad or high-school student.

• Chapters support: organizing a local meeting with other LongeCity members or potential members. LongeCity could contribute to the room hire, the expenses of inviting a guest speaker or even the bar tab.

• Travel support: attendance at conferences, science fairs, etc. where you are presenting on a topic relevant to LongeCity. Generally this will involve some promotion of the mission and/or a report on the conference to be shared with our Members.

• Grant writing:

Bring together a team of scientists and help them write a successful grant application to a public or private funding body. Depending on the project, the award will be a success premium or sometimes can cover the costs of grant preparation itself.

• Micro matching fundraiser:

If you manage to raise funds on a mission-relevant topic, LongeCity will match the funds raised. (In order to initiate one of these initiatives LongeCity usually also requires that the fundraiser spends at least 500 ‘ThankYou points’ but this requirement can be waived in specific circumstances.)

• Outreach:

Support for a specific initiative raising public awareness of the mission or of a topic relevant to our mission. This could be a local event, a specific, organized direct marketing initiative or a media feature.

• Articles:

Write a featured article for the LongeCity website on a topic of interest to our members or visitors. LongeCity is mainly looking for articles on scientific topics, but well-researched contributions on a relevant topic in policy, law, or philosophy are also welcome.

Grant Size:

‘micro grants’ — up to $180

‘small grants’ — up to $500

Grant applications exceeding $500 can be received, but will not be evaluated conclusively under the small grants scheme. Instead, LongeCity will review the application as a draft and may invite a full application afterward.

Decisions as part of the small grants programme are usually pretty quick and straightforward. However please contact LongeCity with a proposal ahead of time, as they will not normally consider applications where the money has already been spent!

Proposals can be as short or elaborate as necessary, but normally should be about half a page long.

Only LongeCity Members can apply, but any Member is free to apply on behalf of someone else — thus, non-Members are welcome to find a Member to ‘sponsor’ their application.

Please email [email protected] with your proposal.

You can also use the ideas forum to prepare the proposal. For general questions, or to discuss the proposal informally, feel free to contact LongeCity at the above email.

—   —   —   —   —   —   —   —   —

Intimations of Imitations: Visions of Cellular Prosthesis & Functionally-Restorative Medicine
https://lifeboat.com/blog/2013/06/intimations-of-imitations-a-roadmap-to-cellular-prosthesis-on-the-horizon
Mon, 10 Jun 2013

In this essay I argue that technologies and techniques used and developed in the fields of Synthetic Ion Channels and Ion-Channel Reconstitution, which have emerged from supramolecular chemistry and bio-organic chemistry over the past four decades, can be applied toward gradual cellular (and particularly neuronal) replacement. Doing so would create a new interdisciplinary field that directs these techniques and technologies toward the indefinite functional restoration of cellular mechanisms and systems, rather than toward their currently proposed uses of elucidating cellular mechanisms and their underlying principles and of serving as biosensors.

In earlier essays (see here and here) I identified approaches to the synthesis of non-biological functional equivalents of neuronal components (i.e. ion-channels, ion-pumps and membrane sections) and their sectional integration with the existing biological neuron — a sort of “physical” emulation, if you will. It has only recently come to my attention that there is an existing field, emerging from supramolecular and bio-organic chemistry, centered around the design, synthesis, and incorporation/integration of both synthetic/artificial ion channels and artificial bilipid membranes (i.e. lipid bilayers). The potential uses for such channels commonly listed in the literature have nothing to do with life-extension, however, and to my knowledge the field has yet to envision using them to replace our existing neuronal components as they degrade (or before they are able to); instead, such uses are limited to aiding in the elucidation of cellular operations and mechanisms and to serving as biosensors. I argue here that the very technologies and techniques that constitute the field (Synthetic Ion-Channels & Ion-Channel/Membrane Reconstitution) can be used toward indefinite longevity and life-extension through the iterative replacement of cellular constituents (particularly the components comprising our neurons – ion-channels, ion-pumps, sections of bilipid membrane, etc.) so as to negate the molecular degradation they would otherwise eventually undergo.

While I envisioned an electro-mechanical-systems approach in my earlier essays, the field of Synthetic Ion-Channels has, from its start in the early 1970s, applied a molecular approach to the problem of designing molecular systems that produce certain functions according to their chemical composition or structure. Note that this approach corresponds to (or can be categorized under) the passive-physicalist sub-approach of the physicalist-functionalist approach (the broad approach overlying all varieties of physically-embodied, “prosthetic” neuronal functional replication) identified in an earlier essay.

The field of synthetic ion channels is also referred to as ion-channel reconstitution, which designates “the solubilization of the membrane, the isolation of the channel protein from the other membrane constituents and the reintroduction of that protein into some form of artificial membrane system that facilitates the measurement of channel function,” and more broadly denotes “the [general] study of ion channel function and can be used to describe the incorporation of intact membrane vesicles, including the protein of interest, into artificial membrane systems that allow the properties of the channel to be investigated” [1]. The field has been active since the 1970s, with experimental successes throughout the 1980s, 1990s and 2000s in the incorporation of functioning synthetic ion channels into biological bilipid membranes and into artificial membranes dissimilar in molecular composition and structure to their biological analogues, work that has probed the underlying supramolecular interactions, ion selectivity and permeability. The relevant literature suggests that their proposed use has thus far been limited to the elucidation of ion-channel function and operation, the investigation of their functional and biophysical properties, and to a lesser degree the creation of “in-vitro sensing devices to detect the presence of physiologically-active substances including antiseptics, antibiotics, neurotransmitters, and others” through the “… transduction of bioelectrical and biochemical events into measurable electrical signals” [2].

Thus my proposal of gradually integrating artificial ion-channels and/or artificial membrane sections for the purpose of indefinite longevity appears to be novel (that is, their use in replacing existing biological neurons toward the aim of gradual substrate replacement, or alternatively in constructing artificial neurons that, rather than replacing existing biological neurons, become integrated with existing biological neural networks toward the aim of intelligence amplification and augmentation while maintaining functional and experiential continuity with our existing biological nervous system), while the notion of artificial ion-channels and neuronal membrane systems in general had already been conceived (and successfully created and experimentally verified, though presumably not integrated in-vivo).

The field of Functionally-Restorative Medicine (and the orphan sub-field of whole-brain gradual-substrate-replacement, or “physically-embodied” brain-emulation, if you like) can take advantage of the decades of experimental progress in this field, incorporating both the technological and methodological infrastructures used in and underlying the fields of Ion-Channel Reconstitution and Synthetic/Artificial Ion Channels & Membrane-Systems (and the technologies and methodologies underlying their corresponding experimental-verification and incorporation techniques) for the purpose of indefinite functional restoration via the gradual and iterative replacement of neuronal components (including sections of bilipid membrane, ion channels and ion pumps) by MEMS (micro-electro-mechanical systems) or, more likely, NEMS (nano-electro-mechanical systems).

The technological and methodological infrastructure underlying this field can be utilized for both the creation of artificial neurons and for the artificial synthesis of normative biological neurons. Much work in the field has required artificially synthesizing cellular components (e.g. bilipid membranes) with structural and functional properties as similar to normative biological cells as possible, so that alternative designs (i.e. those dissimilar to the normal structural and functional modalities of biological cells or cellular components), and how they affect and elucidate cellular properties, could be effectively tested. The iterative replacement of either single neurons, or the sectional replacement of neurons with synthesized cellular components (including sections of the bilipid membrane, voltage-dependent ion-channels, ligand-dependent ion-channels, ion pumps, etc.), is made possible by the large body of work already done in the field. Consequently the technological, methodological and experimental infrastructures developed for the fields of Synthetic Ion-Channels and Ion-Channel/Artificial-Membrane Reconstitution can be utilized for the purpose of a.) iterative replacement and cellular upkeep via biological analogues (i.e. analogues not differing significantly in structure or in functional and operational modality from their normal biological counterparts) and/or b.) iterative replacement with non-biological analogues of alternate structural and/or functional modalities.

Rather than sensing when a given component degrades and then replacing it with an artificially-synthesized biological or non-biological analogue, it appears to be much more efficient to determine the projected time it takes for a given component to degrade or otherwise lose functionality, and simply automate the iterative replacement in this fashion, without providing in-vivo systems for detecting molecular or structural degradation. This would allow us to achieve both experimental and pragmatic success in such cellular-prosthesis sooner, because it doesn’t rely on the complex technological and methodological infrastructure underlying in-vivo sensing, especially on the scale of single neuron components like ion-channels, and without causing operational or functional distortion to the components being sensed.
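
To make the trade-off described above concrete, here is a minimal, purely illustrative Python sketch (not a biological model) comparing the two maintenance strategies: replacement on a fixed schedule derived from a component's projected functional lifetime, versus replacement triggered by periodic in-vivo sensing. Every name and parameter below (the projected lifetime, the safety margin, the per-check sensing overhead) is an assumed placeholder of my own choosing, used only to illustrate the logic that scheduled replacement avoids the overhead and complexity of continuous monitoring.

import random

# Illustrative, assumed parameters (arbitrary units), not empirical values.
PROJECTED_LIFETIME = 100.0    # assumed mean functional lifetime of a component
SAFETY_MARGIN = 0.8           # replace at 80% of projected lifetime, before expected failure
SENSING_COST_PER_CHECK = 1.0  # assumed overhead of each in-vivo inspection

def scheduled_replacement(sim_time):
    """Replace each component on a fixed schedule derived from its projected lifetime."""
    interval = PROJECTED_LIFETIME * SAFETY_MARGIN
    replacements = int(sim_time // interval)
    return {"replacements": replacements, "sensing_overhead": 0.0}

def sensing_based_replacement(sim_time, check_interval=5.0):
    """Monitor the component periodically and replace it only once degradation is detected."""
    t, replacements, checks = 0.0, 0, 0
    failure_time = random.expovariate(1.0 / PROJECTED_LIFETIME)  # random degradation time
    while t < sim_time:
        t += check_interval
        checks += 1
        if t >= failure_time:  # degradation detected at this check; replace and reset
            replacements += 1
            failure_time = t + random.expovariate(1.0 / PROJECTED_LIFETIME)
    return {"replacements": replacements, "sensing_overhead": checks * SENSING_COST_PER_CHECK}

print(scheduled_replacement(1000.0))
print(sensing_based_replacement(1000.0))

Under these toy assumptions the scheduled strategy typically performs somewhat more (and earlier) replacements but incurs no sensing overhead, which is the efficiency argument made above; none of this speaks to the biological feasibility of either strategy.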

A survey of progress in the field [3] lists several broad design motifs. I will first list the design motifs falling within the scope of the survey, and the examples it provides. Selections from both papers are meant to show the depth and breadth of the field, rather than to elucidate the specific chemical or kinetic operations under the purview of each design variety.

For a much more comprehensive, interactive bibliography of papers falling within the field of Synthetic Ion-Channels, or constituting the historical foundations of the field, see Jon Chui’s online bibliography here, which charts the developments in this field up until 2011.

First Survey

Unimolecular ion channels:

Examples include a.) synthetic ion channels with oligocrown ionophores [5], b.) the use of α-helical peptide scaffolds and rigid push–pull p-octiphenyl scaffolds for the recognition of polarized membranes [6], and c.) modified varieties of the β-helical scaffold of gramicidin A [7].

Barrel-stave supramolecules:

Examples of this general class include voltage-gated synthetic ion channels formed by macrocyclic bolaamphiphiles and rigid-rod p-octiphenyl polyols [8].

Macrocyclic, branched and linear non-peptide bolaamphiphiles as staves:

Examples of this sub-class include synthetic ion channels formed by a.) macrocyclic, branched and linear bolaamphiphiles and dimeric steroids, [9] and by b.) non-peptide macrocycles, acyclic analogs and peptide macrocycles [respectively] containing abiotic amino acids [10].

Dimeric steroid staves:

Examples of this sub-class include channels using a polyhydroxylated norcholentriol dimer [11].

p-Oligophenyls as staves in rigid-rod β-barrels:

Examples of this sub-class include “cylindrical self-assembly of rigid-rod β-barrel pores preorganized by the nonplanarity of p-octiphenyl staves in octapeptide-p-octiphenyl monomers” [12].

Synthetic Polymers:

Examples of this sub-class include synthetic ion channels and pores comprised of a.) polyalanine, b.) polyisocyanates, c.) polyacrylates, [13] formed by i.) ionophoric, ii.) ‘smart’ and iii.) cationic polymers [14]; d.) surface-attached poly(vinyl-n-alkylpyridinium) [15]; e.) cationic oligo-polymers [16] and f.) poly(m-phenylene ethylenes) [17].

Helical β-peptides (used as staves in the barrel-stave method):

Examples of this class include cationic β-peptides with antibiotic activity, presumably acting as amphiphilic helices that form micellar pores in anionic bilayer membranes [18].

Monomeric steroids:

Examples of this sub-class include synthetic carriers, channels and pores formed by monomeric steroids [19], synthetic cationic steroid antibiotics [that] may act by forming micellar pores in anionic membranes [20], neutral steroids as anion carriers [21] and supramolecular ion channels [22].

Complex minimalist systems:

Examples of this sub-class falling within the scope of this survey include ‘minimalist’ amphiphiles as synthetic ion channels and pores [23], membrane-active ‘smart’ double-chain amphiphiles, expected to form ‘micellar pores’ or self-assemble into ion channels in response to acid or light [24], and double-chain amphiphiles that may form ‘micellar pores’ at the boundary between photopolymerized and host bilayer domains and representative peptide conjugates that may self assemble into supramolecular pores or exhibit antibiotic activity [25].

Non-peptide macrocycles as hoops:

Examples of this sub-class falling within the scope of this survey include synthetic ion channels formed by non-peptide macrocycles acyclic analogs [26] and peptide macrocycles containing abiotic amino acids [27].

Peptide macrocycles as hoops and staves:

Examples of this sub-class include a.) synthetic ion channels formed by self-assembly of macrocyclic peptides into genuine barrel-hoop motifs that mimic the β-helix of gramicidin A with cyclic β-sheets. The macrocycles are designed to bind on top of channels, and cationic antibiotics (and several analogs) are proposed to form micellar pores in anionic membranes [28]; b.) synthetic carriers, antibiotics (and analogs) and pores (and analogs) formed by macrocyclic peptides with non-natural subunits. [Certain] macrocycles may act as β-sheets, possibly as staves of β-barrel-like pores [29]; c.) bioengineered pores as sensors. Covalent capturing and fragmentations [have been] observed on the single-molecule level within an engineered α-hemolysin pore containing an internal reactive thiol [30].

Summary

Thus even without knowledge of supramolecular or organic chemistry, one can see that a variety of alternate approaches to the creation of synthetic ion channels, and several sub-approaches within each larger ‘design motif’ or broad-approach, not only exist but have been experimentally verified, varietized and refined.

Second Survey

The following selections [31] illustrate the chemical, structural and functional varieties of synthetic ion channels, categorized according to whether they are cation-conducting or anion-conducting. These examples are used to further emphasize the extent of the field and the number of alternative approaches to synthetic ion-channel design, implementation, integration and experimental verification already in existence. Permission to use the following selections and figures was obtained from the author of the source.

There are five classical design motifs for synthetic ion-channels, categorized by structure, that are identified within the paper:


A: unimolecular macromolecules,
B: complex barrel-stave,
C: barrel-rosette,
D: barrel hoop, and
E: micellar supramolecules.

Cation Conducting Channels:

UNIMOLECULAR

“The first non-peptidic artificial ion channel was reported by Kobuke et al. in 1992” [33].

The channel contained “an amphiphilic ion pair consisting of oligoether-carboxylates and mono- (or di-) octadecylammonium cations. The carboxylates formed the channel core and the cations formed the hydrophobic outer wall, which was embedded in the bilipid membrane with a channel length of about 24 to 30 Å. The resultant ion channel, formed from molecular self-assembly, is cation selective and voltage-dependent” [34].

 

“Later, Kobuke et al. synthesized another channel comprising a resorcinol-based cyclic tetramer as the building block. The resorcin[4]arene monomer consisted of four long alkyl chains which aggregated to form a dimeric supramolecular structure resembling that of Gramicidin A” [35]. “Gokel et al. had studied [a set of] simple yet fully functional ion channels known as ‘hydraphiles’” [39].

“An example (channel 3) is shown in Figure 1.6, consisting of diaza-18-crown-6 crown ether groups and alkyl chain as side arms and spacers. Channel 3 is capable of transporting protons across the bilayer membrane” [40].

“A covalently bonded macrotetracycle 4 (Figure 1.8) had been shown to be about three times more active than Gokel’s ‘hydraphile’ channel, and its amide-containing analogue also showed enhanced activity” [44].

“Inorganic derivatives using crown ethers have also been synthesized. Hall et al. synthesized an ion channel consisting of a ferrocene and 4 diaza-18-crown-6 linked by 2 dodecyl chains (Figure 1.9). The ion channel was redox-active, as oxidation of the ferrocene caused the compound to switch to an inactive form” [45].

BARREL-STAVE:

“These are more difficult to synthesize [in comparison to unimolecular varieties] because the channel formation usually involves self-assembly via non-covalent interactions” [47]. “A cyclic peptide composed of an even number of alternating D- and L-amino acids (Figure 1.10) was suggested by De Santis to form a barrel-hoop structure through backbone–backbone hydrogen bonds” [49].

“A tubular nanotube synthesized by Ghadiri et al. consists of cyclic D- and L-peptide subunits that form a flat, ring-shaped conformation and stack through an extensive anti-parallel β-sheet-like hydrogen-bonding interaction (Figure 1.11)” [51].

“Experimental results have shown that the channel can transport sodium and potassium ions. The channel can also be constructed by the use of direct covalent bonding between the sheets so as to increase the thermodynamic and kinetic stability” [52].

“By attaching peptides to the octiphenyl scaffold, a β-barrel can be formed via self-assembly through the formation of β-sheet structures between the peptide chains (Figure 1.13)” [53].

“The same scaffold was used by Matile et al. to mimic the structure of the macrolide antibiotic amphotericin B. The channel synthesized was shown to transport cations across the membrane” [54].

“Attaching the electron-poor naphthalenediimides (NDIs) to the same octiphenyl scaffold led to a hoop–stave mismatch during self-assembly that results in a twisted and closed channel conformation (Figure 1.14). Adding the complementary dialkoxynaphthalene (DAN) donor led to cooperative interactions between NDI and DAN that favor the formation of a barrel-stave ion channel” [57].

MICELLAR

“These aggregate channels are formed by amphotericin involving both sterols and antibiotics arranged in two half-channel sections within the membrane” [58].

“An active form of the compound is the bolaamphiphile (two-headed amphiphile). Figure 1.15 shows an example that forms an active channel structure through dimerization or trimerization within the bilayer membrane. Electrochemical studies have shown that the monomer is inactive and that the active form involves a dimer or larger aggregates” [60].

ANION CONDUCTING CHANNELS:

“A highly active, anion-selective, monomeric cyclodextrin-based ion channel was designed by Madhavan et al. (Figure 1.16). Oligoether chains were attached to the primary face of the β-cyclodextrin head group via amide bonds. The hydrophobic oligoether chains were chosen because they are long enough to span the entire lipid bilayer. The channel was able to select “anions over cations” and “discriminate among halide anions in the order I− > Br− > Cl− (following the Hofmeister series)” [61].

“The anion selectivity occurred via the ring of ammonium cations being positioned just beside the cyclodextrin head group, which helped to facilitate anion selectivity. Iodide ions were transported the fastest because the activation barrier to enter the hydrophobic channel core is lower for I− compared to either Br− or Cl−” [62]. “A more specific artificial anion-selective ion channel was the chloride-selective ion channel synthesized by Gokel. The building block involved a heptapeptide with proline incorporated (Figure 1.17)” [63].

Cellular Prosthesis: Inklings of a New Interdisciplinary Approach

The paper cites “nanoreactors for catalysis and chemical or biological sensors” and “interdisciplinary uses as nano-filtration membranes, drug or gene delivery vehicles/transporters as well as channel-based antibiotics that may kill bacterial cells preferentially over mammalian cells” as some of the main applications of synthetic ion-channels [65], other than their normative use in elucidating cellular function and operation.

However, I argue that a whole interdisciplinary field, a heretofore-unrecognized approach or sub-field of Functionally-Restorative Medicine, is possible by taking the technologies and techniques involved in constructing, integrating, and experimentally verifying either a.) non-biological analogues of ion-channels and ion-pumps (and thus of trans-membrane proteins in general, also sometimes referred to as transport proteins or integral membrane proteins) and membranes (which include normative bilipid membranes, non-lipid membranes and chemically-augmented bilipid membranes), or b.) artificially synthesized biological analogues of ion-channels, ion-pumps and membranes, which are structurally and chemically equivalent to naturally-occurring biological components but are synthesized artificially, and applying those technologies and techniques toward the gradual replacement of the existing biological neurons constituting our nervous systems (or at least those neuron populations that comprise the neocortex and prefrontal cortex), thereby achieving indefinite longevity through iterative procedures of gradual replacement. There is still work to be done in determining the comparative advantages and disadvantages of the various structural and functional (i.e. design) motifs, and in the logistics of implementing the iterative replacement or reconstitution of ion-channels, ion-pumps and sections of neuronal membrane in-vivo.

The conceptual schemes outlined in Concepts for Functional Replication of Biological Neurons [66], Gradual Neuron Replacement for the Preservation of Subjective-Continuity [67] and Wireless Synapses, Artificial Plasticity, and Neuromodulation [68] would constitute variations on the basic approach underlying this proposed, embryonic interdisciplinary field. Certain approaches within the field of nanomedicine itself, particularly those that constitute the functional emulation of existing cell types, such as (but not limited to) Robert Freitas’s conceptual design for the functional emulation of the red blood cell (a.k.a. erythrocytes, haematids) [69], the Respirocyte, should also be seen as falling under the purview of this new approach, although not all approaches to nanomedicine (e.g. diagnostics, drug-delivery and neuroelectronic interfacing) constitute the physical (i.e. electromechanical, kinetic and/or molecular, physically-embodied) and functional emulation of biological cells.

The field of functionally-restorative medicine in general (and of nanomedicine in particular) converges here with the field of supramolecular and organic chemistry: the technological, methodological, and experimental infrastructures developed in the fields of Synthetic Ion-Channels and Ion-Channel Reconstitution can be employed to develop a new interdisciplinary approach that applies the logic of prosthesis at the cellular and cellular-component (i.e. sub-cellular) scale; same tools, new use. These techniques could be used to iteratively replace the components of our neurons as they degrade, or to replace them with more robust systems that are less susceptible to molecular degradation. Instead of repairing the cellular DNA, RNA and protein transcription and synthesis machinery, we bypass it completely by configuring and integrating the neuronal components (ion-channels, ion-pumps and sections of bilipid membrane) directly.

Thus I suggest that theoreticians of nanomedicine look to the large quantity of literature already developed in the emerging fields of synthetic ion-channels and membrane-reconstitution, towards the objective of adapting and applying existing technologies and methodologies to the new purpose of iterative maintenance, upkeep and/or replacement of cellular (and particularly neuronal) constituents with either non-biological analogues or artificially-synthesized-but-chemically/structurally-equivalent biological analogues.

This new sub-field of Synthetic Biology needs a name to differentiate it from the other approaches to Functionally-Restorative Medicine. I suggest the designation ‘cellular prosthesis’.

References:

[1]    Williams (1994). An introduction to the methods available for ion channel reconstitution. In D. C. Ogden (Ed.), Microelectrode Techniques: The Plymouth Workshop Edition. Cambridge: Company of Biologists.

[2]    Tomich, J., & Montal, M. (1996). U.S. Patent No. 5,16,890. Washington, DC: U.S. Patent and Trademark Office.

[3]    Matile, S., Som, A., & Sorde, N. (2004). Recent synthetic ion channels and pores. Tetrahedron, 60(31), 6405–6435. ISSN 0040-4020, doi:10.1016/j.tet.2004.05.052. Access: http://www.sciencedirect.com/science/article/pii/S0040402004007690

[4]    Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[5]      Ibid., p. 6411.

[6]      Ibid., p. 6416.

[7]      Ibid., p. 6413.

[8]      Ibid., p. 6412.

[9]      Ibid., p. 6414.

[10]    Ibid., p. 6425.

[11]    Ibid., p. 6427.

[12]    Ibid., p. 6416.

[13]    Ibid., p. 6419.

[14]    Ibid., p. 6419.

[15]    Ibid., p. 6419.

[16]    Ibid., p. 6419.

[17]    Ibid., p. 6419.

[18]    Ibid., p. 6421.

[19]    Ibid., p. 6422.

[20]    Ibid., p. 6422.

[21]    Ibid., p. 6422.

[22]    Ibid., p. 6422.

[23]    Ibid., p. 6423.

[24]    Ibid., p. 6423.

[25]    Ibid., p. 6423.

[26]    Ibid., p. 6426.

[27]    Ibid., p. 6426.

[28]    Ibid., p. 6427.

[29]    Ibid., p. 6327.

[30]    Ibid., p. 6427.

[31]    Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[32]    Ibid., p. 4.

[33]    Ibid., p. 4.

[34]    Ibid., p. 4.

[35]    Ibid., p. 4.

[36]    Ibid., p. 7.

[37]    Ibid., p. 8.

[38]    Ibid., p. 7.

[39]    Ibid., p. 7.

[40]    Ibid., p. 7.

[41]    Ibid., p. 7.

[42]    Ibid., p. 7.

[43]    Ibid., p. 8.

[44]    Ibid., p. 8.

[45]    Ibid., p. 9.

[46]    Ibid., p. 9.

[47]    Ibid., p. 9.

[48]    Ibid., p. 10.

[49]    Ibid., p. 10.

[50]    Ibid., p. 10.

[51]    Ibid., p. 10.

[52]    Ibid., p. 11.

[53]    Ibid., p. 12.

[54]    Ibid., p. 12.

[55]    Ibid., p. 12.

[56]    Ibid., p. 12.

[57]    Ibid., p. 12.

[58]    Ibid., p. 13.

[59]    Ibid., p. 13.

[60]    Ibid., p. 14.

[61]    Ibid., p. 14.

[62]    Ibid., p. 14.

[63]    Ibid., p. 15.

[64]    Ibid., p. 15.

[65]    Ibid., p. 15.

[66]    Cortese, F., (2013). Concepts for Functional Replication of Biological Neurons. The Rational Argumentator. Access: http://www.rationalargumentator.com/index/blog/2013/05/conce…plication/

[67]    Cortese, F., (2013). Gradual Neuron Replacement for the Preservation of Subjective-Continuity. The Rational Argumentator. Access: http://www.rationalargumentator.com/index/blog/2013/05/gradu…placement/

[68]    Cortese, F., (2013). Wireless Synapses, Artificial Plasticity, and Neuromodulation. The Rational Argumentator. Access: http://www.rationalargumentator.com/index/blog/2013/05/wireless-synapses/

[69]    Freitas Jr., R., (1998). “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell”. Artificial Cells, Blood Substitutes, and Immobil. Biotech. (26): 411–430. Access: http://www.ncbi.nlm.nih.gov/pubmed/9663339

The Hubris of Neo-Luddism
https://lifeboat.com/blog/2013/06/the-hubris-of-neo-luddism
Sun, 02 Jun 2013

This essay was originally published by the Institute for Ethics & Emerging Technologies.

One of the most common anti-Transhumanist tropes, one that recurs throughout criticism of Transhumanism, is the charge of our supposedly rampant hubris. Hubris is an ancient Greek concept meaning an excess of pride, carrying connotations of reckless vanity and heedless self-absorption, often to the point of carelessly endangering the welfare of others in the process. It paints us in a selfish and dangerous light, as though we were striving for the technological betterment of ourselves alone and for the improvement of the human condition solely as it pertains to ourselves, so as to be enhanced relative to the majority of humanity.

In no way is this correct or even salient. I would claim that I, and the majority of Transhumanists, Techno-Progressives and emerging-tech enthusiasts, work toward promoting beneficial outcomes and deliberating the repercussions and most desirable embodiments of radically-transformative technologies for the betterment of all mankind first and foremost, and only secondarily for ourselves, if at all.

The irony of this situation is that the very group who most often levels the charge of hubris against the Transhumanist community is, according to the logic of hubris, more hubristic than those they accuse. Bio-Luddites, and more generally Neo-Luddites, can be clearly seen to be more self-absorbed and recklessly selfish than the Transhumanists they are so quick to raise qualms against.

The logic of this conclusion is simple: Transhumanists seek merely to better determine the controlling circumstances and determining conditions of our own selves, whereas Neo-Luddites seek to determine such circumstances and conditions (even if by a negative definition, i.e., the absence of something) not only for everyone besides themselves alive at the moment, but even for the unquantifiable multitudes of minds and lives still fetal in the future.

We do not seek to radically transform Humanity against their will; indeed, this is so off the mark as to be antithetical to the true Transhumanist impetus — for we seek to liberate their wills, not leash or lash them. We seek to offer every human alive the possibility of transforming themselves more effectively according to their own subjective projected objectives; of actualizing and realizing themselves; ultimately of determining themselves for themselves. We seek to offer every member of Humanity the choice to better choose and the option for more optimal options: the self not as final-subject but as project-at-last.

Neo-Luddites, on the other hand, wish to deny the whole of humanity that choice. They actively seek the determent, relinquishment or prohibition of technological self-transformation, and believe in the heat of their idiot-certainty that they have either the intelligence or the right to force their own preference upon everyone else, present and future. Such lumbering, oafish paternalism patronizes the very essence of Man, whose only right is to write his own and whose only will is to will his own – or at least to vow that he will will his own one fateful yet fate-free day.

We seek solely to choose ourselves, and to give everyone alive and yet-to-live the same opportunity: of choice. Neo-Luddites seek not only to choose for themselves but to force this choice upon everyone else as well.

If any of the original Luddites were alive today, perhaps they would loom large to denounce the contemporary caricature of their own movement and rail their tightly-spooled rage against the modern Neo-Luddites that use Ludd’s name in so reckless a threadbare fashion. At the heart of it, they were trying to free their working-class fellowship. In their revolt there were no predominant connotations of extending the distinguishing features of the Luddite revolt into the entire future, no hint of the possibility that they would set a precedent which could effectively forestall or encumber the continuing advancement of technology at the cost of the continuing betterment of humanity.

How were they to know that continuing technological and methodological growth and progress would continually liberate humanity in fits and bounds of expanding freedom, opening up the parameters of their possible actions — that it would free choice from chance and make the general conditions of being continually better and better? If this sentiment had been predominant during 1811–1817, perhaps they would have laid their hammers down. They were seeking the liberation of their people, after all; if they had known that their own actions might spawn a future movement seeking to dampen and deter the continual technological liberation of Mankind, perhaps they would have remarked that such future Neo-Luddites missed their point completely.

Perhaps the salient heart of their efforts was not the relinquishment of technology but rather the liberation of their fellow man. Perhaps they would have remarked that while in this particular case technological relinquishment coincided with the liberation of their fellow man, that this shouldn’t be heralded as a hard rule. Perhaps they would have been ashamed of the way in which their name was to be used as the nametag and figurehead for the contemporary fight against liberty and Man’s autonomy. Perhaps Ludd is spinning like a loom in his grave right now.

Does the original Luddites’ enthusiasm for choice and the liberation of their fellow man supersede their revolt against technology? I think it does. The historical continuum of which Transhumanism is but the contemporary leading tip encompasses not only the technological betterment of self and society but the non-technological as well. Historical Utopian ventures and visions are valid antecedents of the Transhumanist impetus, just as Techno-Utopian historical antecedents are. While the emphasis on technology predominant in Transhumanist rhetoric isn’t exactly misplaced (simply because technology is our best means of affecting and changing self and society, whorl and world, and thus our best means of improving them according to subjective projected objectives), it isn’t a necessary precondition, and its predominance does not preclude the inclusion of non-technological attempts to improve the human condition as well.

The dichotomy between knowledge and device, between technology and methodology, doesn’t have a stable ontological ground in the first place. What is technology but embodied methodology, and methodology but internalized technology? Language is just as unnatural as quantum computers in geological scales of time. To make technology a necessary prerequisite is to miss the end for the means and the mark for a lark. The point is that we are trying to consciously improve the state of self, society and world; technology has simply superseded methodology as the most optimal means of accomplishing that, and now constitutes our best means of effecting our affectation.

The original Luddite movement was less against advancing technology and more about the particular repercussions that specific advancements in technology (i.e. semi-automated looms) had on their lives and circumstances. To claim that Neo-Luddism has any real continuity-of-impetus with the original Luddite movement that occurred throughout 1811–1817 may actually be antithetical to the real motivation underlying the original Luddite movement – namely the liberation of the working class. Indeed, Neo-Luddism itself, as a movement, may be antithetical to the real impetus of the initial Luddite movement both for the fact that they are trying to impose their ideological beliefs upon others (i.e. prohibition is necessarily exclusive, whereas availability of the option to use a given technology is non-exclusive and forces a decision on no one) and because they are trying to prohibit the best mediator of Man’s ever-increasing self-liberation – namely technological growth.

Support for these claims can be found in the secondary literature. For instance, in Luddites and Luddism Kevin Binfield sees the Luddite movement as an expression of working-class discontent during the Napoleonic Wars rather than as an expression of antipathy toward technology in general or toward advancing technology as a general trend (Binfield, 2004).

And in terms of base premises, it is not as though Luddites are categorically against technology in general; rather, they are simply against either a specific technology, a specific embodiment of a general class of technology, or a specific degree of technological sophistication. After all, nearly every Luddite alive wears clothes, takes antibiotics, and uses telephones. Legendary Ludd himself still wanted the return of his manual looms, a technology, when he struck his first blow. I know many Transhumanists and Technoprogressives who still label themselves as such despite being wary of the increasing trend of automation.

This was the Luddites’ own concern: that automation would displace manual work in their industry and thereby severely limit their possible choices and freedoms, such as having enough discretionary income to purchase necessities. If their government had been handing out a guaranteed basic income garnered from taxes on corporations based on the degree to which they replaced previously-manual labor with automated labor, I’m sure they would have happily laid their hammers down and laughed all the way home. Even the Amish only prohibit specific levels of technological sophistication, rather than all technology in general.

In other words no one is against technology in general, only particular technological embodiments, particular classes of technology or particular gradations of technological sophistication. If you’d like to contest me on this, try communicating your rebuttal without using the advanced technology of cerebral semiotics (i.e. language).

References.

Binfield, K. (2004). Luddites and Luddism. Baltimore and London: The Johns Hopkins University Press.

Longevity’s Bottleneck May Be Funding, But Funding’s Bottleneck is Advocacy & Activism
https://lifeboat.com/blog/2013/06/longevitys-bottleneck-may-be-funding-but-fundings-bottleneck-is-advocacy-activism
Sat, 01 Jun 2013

The following article was originally published by Immortal Life.

When asked what the biggest bottleneck for Radical or Indefinite Longevity is, most thinkers say funding. Some say the biggest bottleneck is breakthroughs and others say it’s our way of approaching the problem (i.e. that we’re seeking healthy life extension whereas we should be seeking more comprehensive methods of indefinite life-extension), but the majority seem to feel that what is really needed is adequate funding to plug away at developing and experimentally-verifying the various, sometimes mutually-exclusive technologies and methodologies that have already been proposed. I claim that Radical Longevity’s biggest bottleneck is not funding, but advocacy.

This is because the final objective of increased funding for Radical Longevity and Life Extension research can be more effectively and efficiently achieved through public advocacy for Radical Life Extension than it can by direct funding or direct research, per unit of time or effort. Research and development obviously still need to be done, but an increase in researchers needs an increase in funding, and an increase in funding needs an increase in the public perception of RLE’s feasibility and desirability.

There is no definitive timespan that it will take to achieve indefinitely-extended life. How long it takes to achieve Radical Longevity is determined by how hard we work at it and how much effort we put into it. More effort means that it will be achieved sooner. And by and large, an increase in effort can be best achieved by an increase in funding, and an increase in funding can be best achieved by an increase in public advocacy. You will likely accelerate the development of Indefinitely-Extended Life, per unit of time or effort, by advocating the desirability, ethicacy and technical feasibility of longer life than you will by doing direct research, or by working towards the objective of directly contributing funds to RLE projects and research initiatives.

In order to get funding we need to demonstrate with explicit clarity just how much we want it, and that we can achieve it while minimizing potentially negative societal repercussions like overpopulation. We must do our best to vehemently invalidate the Deathist clichés that promulgate the sentiment that Life-Extension is dangerous or unethical. It needn’t be either, nor is it necessarily likely to be either.

Some think that spending one’s time deliberating the potential issues that could result from greatly increased lifespans, and the ways in which we could mitigate or negate them, won’t make a difference until greatly increased lifespans are actually achieved. I disagree. While any potentially negative repercussions of RLE (like overpopulation) aren’t going to materialize until RLE is achieved, offering solution paradigms and ways in which we could negate or mitigate such negative repercussions decreases the time we have to wait for it, by increasing the degree to which the wider public feels it to be desirable and feels that it can be done safely and ethically.

Those who are against radical life extension are against it either because they think it is infeasible (in which case being “against” it may be too strong a descriptor) or because they have qualms relating to its ethicacy or its safety. More people openly advocating against it means a higher public perception of its undesirability. Whether RLE is eventually achieved via private industry or via government subsidized research initiatives, we need to create the public perception that it is widely desired before either government or industry will take notice.

The sentiment that the best thing we can do is simply live healthily and wait until progress is made seems to be fairly common as well. People have the feeling that researchers are working on it, that it will happen if it can happen, and that waiting until progress is made is the best course to take. Such lethargy will not help Radical Longevity in any way. How long we have to wait for RLE is a function of how much effort we put into it. And in this article I argue that how much funding and attention RLE receives is by and large a function of how widespread the public perception of its feasibility and desirability is.

This isn't simply about our individual desire to live longer. It might be easier to hold the sentiment that we should just wait it out until it happens if we only consider its impact on the scale of our own individual lives. Such a sentiment may also be aided by the view that greatly longer lives would be a mere advantage, nice but unnecessary. I don't think this is the case. I argue that the technological eradication of involuntary death is a moral imperative if there ever was one. If how long we have to wait until RLE is achieved depends on how vehemently we demand it and on how hard we work to create the public perception that longer life is widely longed-for, then how many of the 100,000 lives lost every day are lost needlessly while we sit on our hands?

One million people will die a wasteful and involuntary death in the next 10 days. One million real lives. This puts the Deathist charges of inethicacy in a ghostly new light. If advocating the desirability, feasibility and radical ethicacy of RLE can hasten its implementation by even a mere 10 days, then one million lives that would have otherwise been lost will have been saved by the efforts of RLE advocates, researchers and fiscal supporters. Seen in this way, working toward RLE may very well be the most ethical and selfless way you could spend your time, in terms of the number of lives saved and/or the amount of suffering prevented.
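
To make the arithmetic behind this claim explicit, here is a minimal sketch in Python. The 100,000-deaths-per-day figure is the one used throughout this article; the acceleration values are purely illustrative.

```python
# Back-of-envelope arithmetic for the claim above.
# Uses the article's figure of ~100,000 deaths per day;
# the "days hastened" values are illustrative, not predictions.

DEATHS_PER_DAY = 100_000

def lives_saved(days_hastened: int) -> int:
    """Deaths averted if RLE arrives `days_hastened` days sooner."""
    return DEATHS_PER_DAY * days_hastened

for days in (1, 10, 365):
    print(f"Hastening RLE by {days:>3} days ~ {lives_saved(days):,} lives saved")
# 1 day -> 100,000 | 10 days -> 1,000,000 | 365 days -> 36,500,000
```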

One of the most common and easy-to-raise concerns I come across in response to any effort to minimize the suffering of future beings is that there are enough problems to worry about right now. "Shouldn't we be worrying about lessening starvation in underdeveloped countries first? They're starving right now. Shouldn't we be focusing on the problems of today? On things that we can have a direct impact on?" Indeed.

100,000 people will die, potentially needlessly, tomorrow. The massive number of people that suffer involuntary death is a problem of today! Indeed, it may very well be the most pressing problem of today! What other source of contemporary suffering claims so many lives, and occurs on such a massive scale? What other “problem of today” is responsible for the needless and irreversible involuntary death of one hundred thousand lives per day? Certainly not starvation, or war, or cancer, all of which in themselves represent smaller sources of involuntary death. RLE advocates do what they do for the same reason that people who try to mitigate starvation, war, and cancer do what they do, to lessen the amount of involuntary death that occurs.

This is a contemporary problem that we can have a direct impact on. People intuitively assume that we won’t achieve indefinitely-extended life until far in the future. This makes them conflate any lives saved by indefinitely-extended-lifespans with lives yet to come into existence. This makes them see involuntary death as a problem of the future, rather than a problem of today. But more people than I’ve ever known will die tomorrow, from causes that are physically possible to obviate and ameliorate – indeed, from causes that we have potential and conceptual solutions for today.

I have attempted to show in this article that advocating RLE should be considered "working toward it" just as much as directly funding it or performing direct research on it is. Advocacy has greater potential to increase its widespread desirability than direct work or funding does, and increasing both its desirability and the public perception of its desirability has more potential to generate increased funding and research-attention for RLE than direct funding or research does. Advocacy thus has the potential to contribute to the arrival of RLE and hasten its implementation as much as, if not more than (as I have attempted to argue in this article), practical research or direct funding does.

This should motivate people to help create the momentous momentum we need to really get the ball rolling. To be an RLE advocate is to be an RLE worker. Involuntary death from age-associated, physically-remediable causes is the largest source of death, destruction and suffering today.  Don’t you want to help prevent the most widespread source of death and of suffering in existence today?  Don’t you want to help mitigate the most pressing moral concern not only of today, but of the entirety of human history – namely physically-remediable involuntary death?

Then advocate the technological eradication of involuntary death. Advocate the technical feasibility, extreme desirability and blatant ethicacy of indefinitely extending life. Death is a cataclysm. We need not sanctify the seemingly-inevitable any longer. We need not tell ourselves that death is somehow a good thing, or something we can do nothing about, in order to live with the "fact" of it any longer. Soon it won't be a fact of life. Soon it will be an artifact of history. Life may not be ipso facto valuable according to all philosophies of value – but life is a necessary precondition for any sort of value whatsoever. Death is dumb, dummy! An incontrovertible waste convertible into nothing! A negative-sum blight! So if you want to contribute to the problems of today, if you want to help your fellow man today, then stand proud and shout loud "Doom to Arbitrary Duty and Death to Arbitrary Death!" at every crowd cowed by the seeming necessity of death.

How Could WBE+AGI be Easier than AGI Alone? https://lifeboat.com/blog/2013/05/how-could-wbeagi-be-easier-than-agi-alone https://lifeboat.com/blog/2013/05/how-could-wbeagi-be-easier-than-agi-alone#comments Fri, 31 May 2013 07:01:56 +0000 http://lifeboat.com/blog/?p=8134 This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial-General-Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g. Kurzweil's measure is when enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won't be much of an explosion unless it is available to the average person. I present a scenario below which may indicate that the imminence of a coming intelligence explosion is more impacted by basic processing speed – or instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than it is by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of "effort" or funding) to implement WBE+AGI than it would be to implement AGI alone – or rather that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than working toward AGI directly would.

Loaded Uploads:

Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs-Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that, according to Kurzweil's figures, the average person will have in 2019 – when processing power equal to that of the human brain, which he estimates at 20 quadrillion calculations per second, becomes available for roughly $1,000. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck for being able to do so is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed – and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.

The rate of signal transmission in electronic computers has been estimated to be roughly 1 million times as fast as the signal transmission speed between neurons, which is limited by the rate of passive chemical diffusion. Since the rate of signal transmission equates with the subjective perception of time, an upload would presumably experience the passing of time one million times faster than biological humans. If Yudkowsky's observation [4] that this would be equivalent to experiencing all of history since Socrates every 18 "real-time" hours is correct, then such an emulation would experience roughly 250 subjective years for every hour – about 4 years a minute, 6,000 years a day, 42,000 years a week, and 180,000 years a month.
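
A minimal arithmetic sketch of this conversion, taking the 250-subjective-years-per-real-hour figure above as the assumed speed-up factor (the rate itself is the assumption; everything else is multiplication):

```python
# Converts real elapsed time into subjective time for an emulation,
# assuming the figure used above of ~250 subjective years per real hour.

SUBJ_YEARS_PER_REAL_HOUR = 250  # assumed speed-up factor

def subjective_years(real_hours: float) -> float:
    return real_hours * SUBJ_YEARS_PER_REAL_HOUR

for label, hours in [("1 minute", 1 / 60), ("1 hour", 1), ("1 day", 24),
                     ("1 week", 24 * 7), ("1 month", 24 * 30)]:
    print(f"{label:>8}: ~{subjective_years(hours):,.0f} subjective years")
# 1 minute: ~4 | 1 hour: 250 | 1 day: 6,000 | 1 week: 42,000 | 1 month: 180,000
```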

Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.

The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.

Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100 million MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don't understand how the operation of the brain's individual components (e.g. neurons, neural clusters, etc.) converges to create the emergent phenomenon of mind – or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom and Anders Sandberg, for instance, have argued in their 2008 Whole Brain Emulation Roadmap [6] that if we understand the operational dynamics of the brain's low-level components, we can computationally emulate those components, and the emergent functional modalities of the brain and the experiential modalities of the mind will emerge therefrom.

Mind Uploading is (Largely) Independent of Software Performance:

Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.

This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales;  simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.

If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don't actually need to know "how to build a mind" – whereas we do in the case of an AGI (which for the purposes of this essay shall denote an AGI not based off of the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people's definitions). This follows naturally from the conjunction of the premises that 1. the system we wish to emulate already exists and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by understanding only the operation of the low-level components' functional modalities.

Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.

Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.
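
The contrast can be made concrete with a toy model that compares the two measures directly: whether raw, brain-scale processing power exists anywhere at any price, versus how long until that much processing power costs $1,000. The constants below (today's cost per unit of computation and the halving time of that cost) are illustrative assumptions, not figures from this essay or from Kurzweil.

```python
import math

# Toy model contrasting "raw brain-scale IPS exists somewhere today" with
# "brain-scale compute costs $1,000". All constants are illustrative assumptions.

BRAIN_CPS = 20e15           # 20 quadrillion calculations/sec (the estimate cited above)
top_machine_cps = 30e15     # assumed capability of a flagship petascale machine
cost_per_cps = 1e-9         # assumed dollars per (calculation/sec) today
PRICE_HALVING_YEARS = 1.5   # assumed halving time of cost per calculation/sec

# Raw-capability measure: already satisfied on a flagship machine.
print("Raw processing-power threshold met today:", top_machine_cps >= BRAIN_CPS)

# Price-performance measure: years until brain-scale compute costs $1,000.
cost_today = BRAIN_CPS * cost_per_cps
years = PRICE_HALVING_YEARS * math.log2(cost_today / 1_000)
print(f"Years until $1,000 buys brain-scale compute: ~{years:.1f}")
```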

If we can achieve human whole-brain-emulation even one week before we can achieve AGI (the cognitive architecture of which is not based off of the biological human nervous system), and this upload is set to work on creating an AGI, then such an upload would have, according to the "subjective-speed-up" factors given above, roughly 42,000 subjective years in which to succeed in designing and implementing an AGI for every real-time week that normatively-biological AGI workers have.

The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease-of-self-modification and the ability to make as many copies of himself as he has processing power to allocate to, only increase his potential to accelerate the coming of an intelligence explosion.

This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.

So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?

There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology's operation in order to implement and maintain it. This is, after all, the case for all other technologies throughout the history of humanity. But the human brain is categorically different in this regard because it already exists.

If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of components. But in order to make any changes to it, or any variations on its basic structure or principles-of-operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components – and this requires being able to predictively model the system. If we don't understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.

Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in that we don't need to reverse-engineer its higher-level operations in order to instantiate it.

This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering on the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand its operation on all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach mind-uploading falls under, where reverse-engineering at a small enough scale is sufficient to recreate it, provided that we don’t seek to modify its internal operation in any significant way, I will call Blind Replication.

Blind replication disallows any sort of significant modifications, because if one doesn’t understand how processes affect other processes within the system then they have no way of knowing how modifications will change other processes and thus the emergent function(s) of the system. We wouldn’t have a way to translate functional/optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would work in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So government couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits, and indeed would be unable to obtain intellectual property rights over a technology that they cannot describe the inner-workings or “operational dynamics” of.

However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.

Could Upload+AGI be easier to implement than AGI alone?

This means that the creation of an intelligence with a subjective perception of time significantly greater than unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload rather than an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined, by and large, by processing power alone, and thus remain largely independent of the need for significant improvements in software performance or "methodological implementation".

If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e.  methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.

Virtual Advantage:

The increase in subjective perception of time alone (if Yudkowsky's estimate is correct, a ratio of 250 subjective years for every "real-time" hour) gives the upload a massive advantage. It would also likely allow them to counteract and negate any attempts made from "real-time" physicality to stop, slow or otherwise deter them.

There is another feature of virtual embodiment that could increase the upload's ability to accelerate such developments: neural modification is much easier from within virtual embodiment than it would be in physicality. With it he could optimize his current functional modalities (e.g. what we coarsely call "intelligence") or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification, or IA), as well as create categorically new functional modalities. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment any changes could be made and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system for implementing such "reversal-changes" in physicality (thereby necessitating a whole host of other technologies and methodologies) – and if those changes made further unexpected changes, and we can't easily reverse them, then we may create an infinite regress of changes, wherein changes made to reverse a given modification in turn create more changes, which in turn need to be reversed, ad infinitum.

Thus self-modification (and especially recursive self-modification) toward intelligence amplification into Ultraintelligence [7] is easier – i.e. it necessitates a smaller technological and methodological infrastructure (that is, the required host of methods and technologies needed to accomplish something), and thus less cost – in virtual embodiment than in physical embodiment.

These recursive modifications not only further maximize the upload's ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical to I.J. Good's intelligence explosion hypothesis) – or in other words maximize his ability to maximize his general ability in anything.

But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
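
The "change-and-check" procedure described above is essentially a generate-and-test loop: copy the emulation, apply a candidate modification, measure some performance metric, and keep the change only if it helps. A minimal sketch follows, with the emulation, the modification operator and the metric all stubbed out as hypothetical placeholders – nothing here corresponds to any real emulation API.

```python
import copy
import random

# Minimal generate-and-test loop illustrating the "change-and-check" procedure:
# copy, modify blindly, evaluate, and keep only improvements.
# `Emulation`, `mutate` and `score` are hypothetical stand-ins.

class Emulation:
    def __init__(self):
        self.params = [random.random() for _ in range(8)]  # stand-in for internal state

def mutate(em: Emulation) -> Emulation:
    clone = copy.deepcopy(em)                 # always work on a copy, never the original
    i = random.randrange(len(clone.params))
    clone.params[i] += random.gauss(0, 0.1)   # small, "blind" modification
    return clone

def score(em: Emulation) -> float:
    return -sum((p - 0.5) ** 2 for p in em.params)  # stand-in performance metric

best = Emulation()
for _ in range(1000):
    candidate = mutate(best)
    if score(candidate) > score(best):        # keep the change only if it helps
        best = candidate
print("best score:", round(score(best), 4))
```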

It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons, without being able to model how this scales up to the operational dynamics of the higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (e.g. increasing synaptic density in neurons, or increasing the range of usable neurotransmitters – thus increasing the potential information density in a given signal or synaptic transmission) may be significantly easier than creating categorically new functional modalities.

Increasing the Imminence of an Intelligence Explosion:

So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).

He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.

Thus there are numerous methods of indirectly increasing the imminence (or the likelihood of imminence within a certain time-range, which is a measure with less ambiguity) of a coming intelligence explosion – and many new ones no doubt that will be realized only once such an upload acquires such advantages and abilities.

Intimations of Implications:

So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two can agree on the viability of the premises and reasoning of the scenario, while drawing opposite conclusions in terms of whether it is good or bad news.

People who subscribe to the "Friendly AI" camp of AI-related existential risk will be at once hopeful and dismayed. While it might increase their ability to create their AGI (or more technically their Coherent-Extrapolated-Volition Engine [8]), thus decreasing the chances of an "unfriendly" AI being created in the interim, they will also be dismayed by the fact that it may involve (though does not necessitate) a recursively self-modifying intelligence – in this case an upload – being created prior to the creation of their own AGI, which is the very problem they are trying to mitigate in the first place.

Those who, like me, see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving "power" equality, or at least mitigating "power" disparity [where power is defined as the capacity to effect change in the world or society] – and in which any intelligence increasing its capability at a faster rate than all others is disallowed) as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person to essentially have the power to determine the fate of humanity – due to his massively increased "capability" or "power" – which is the very feature (capability disparity/inequality) that the "distributed intelligence explosion" camp of AI-related existential risk seeks to minimize.

On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.

I for one think it highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to supersede the disadvantages incurred by the increased existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much "power" or "capability-to-effect-change" in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.

Conclusion:

Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:

  1. How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost. This is because as soon as we have enough processing power to emulate a human nervous system, provided we have sufficient software to emulate the lower level neural components giving rise to the higher-level human mind, then the increase in the rate of thought and subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available for a widely-affordable cost. This conclusion is independent of any specific estimates of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system. I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly-negligible amount of time, like one week, due to the massive increase in speed of thought and the rate of subjective perception of time that would then be available to such an upload.
  2. The creation of an upload may be relatively independent of software performance/capability (which is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation – i.e. how we actually design a mind, rather than the substrate it is instantiated by – which we do need in order to implement an AGI and which we would need for WBE, were the system we seek to emulate not already in existence) and may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on increases in both computational performance and software performance or fundamental progress in methodological implementation.
    • If this second conclusion is true, it means that an upload may be possible quite soon, considering that we've passed the basic estimates for processing requirements given by Kurzweil, Moravec and Storrs-Hall – provided we can emulate the low-level neural regions of the brain with high predictive accuracy (and provided the claim that instantiating such low-level components will vicariously instantiate the emergent human mind, without needing to really understand how such components functionally converge to do so, proves true) – whereas AGI may still have to wait for fundamental improvements to methodological implementation or "software performance".
    • Thus it may be easier to create an AGI by first creating an upload to accelerate the development of that AGI’s creation, than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!


References:

[1] Kurzweil, R, 2005. The Singularity is Near. Penguin Books.

[2] Moravec, H, 1997. When will computer hardware match the human brain?. Journal of Evolution and Technology, [Online]. 1(1). Available at: http://www.jetpress.org/volume1/moravec.htm [Accessed 01 March 2013].

[3] Hall, J (2006) “Runaway Artificial Intelligence?” Available at: http://www.kurzweilai.net/runaway-artificial-intelligence [Accessed: 01 March 2013]

[4] Adam Ford. (2011). Yudkowsky vs Hanson on the Intelligence Explosion — Jane Street Debate 2011 . [Online Video]. August 10, 2011. Available at: https://www.youtube.com/watch?v=m_R5Z4_khNw [Accessed: 01 March 2013].

[5] Drexler, K.E, (1989). MOLECULAR MANIPULATION and MOLECULAR COMPUTATION. In NanoCon Northwest regional nanotechnology conference. Seattle, Washington, February 14–17. NANOCON. 2. http://www.halcyon.com/nanojbl/NanoConProc/nanocon2.html [Accessed 01 March 2013]

[6] Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008–3. http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3…report.pdf [Accessed 01 March 2013]

[7] Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers.

[8] Yudkowsky, E. (2004). Coherent Extrapolated Volition. The Singularity Institute.

“Radical Life Extension: Are You Ready to Live 1,000 Years?” — Public Speaking Event in Washington D.C. 09/13 https://lifeboat.com/blog/2013/05/immortal-life-is-hosting Sat, 25 May 2013 07:01:44 +0000 http://lifeboat.com/blog/?p=8053

Immortal Life is presenting a public event in Washington D.C., titled “Radical Life Extension: are you ready to live 1,000 years?”

It will take place in the historic Friends Meeting House on September 22 (Sunday). The event will be from 5:30–7:30.

We will have 10–12 speakers discussing Immortality / Life Extension from a wide variety of perspectives: scientific, political, social, poetic, religious, atheistic, economic, demographic, moral, etc.

Our lineup of speakers even features numerous Lifeboat Foundation Advisors, including myself, Gabriel Rothblatt, Hank Pellissier, Antonei B. Csoka, and Didier Coeurnelle. Other speakers include Mark Waser, Gray Scott, Jennifer ‘Dotora’ Huse, Apneet Jolly, Tom Mooney, Hank Fox, Maitreya One Paul Spiegel and Rich Lee.

There are still a few available slots for speakers but they’re filling fast, so if you’re interested in reserving a seat, being a speaker or contributing in any other way, please send an email to [email protected]

We have already attained the support of numerous life extension groups in the Washington DC area, but are always grateful for funding contributions from sponsors to help us with travel and advertising costs, and speaker stipends, for this event and future events. Promoters are always welcome as well.

Electronic Music as Emerging Tech. Embryo of Art’s Entire Future — Part One https://lifeboat.com/blog/2013/05/7927 Sun, 19 May 2013 20:20:40 +0000 http://lifeboat.com/blog/?p=7927 Artifacts, Artifictions, Artifutures 0.5

It’s not a physical landscape. It’s a term reserved for the new technologies. It’s a landscape in the future. It’s as though you used technology to take you off the ground and go like Alice through the looking glass.
John Cage, in reference to his 1939 Imaginary Landscape No. 1 [1].

In the last installment (see here, here and here) I argued that the increasing prominence and frequency of futuristic aesthetics and themes of empowerment-through-technology in EDM-based mainstream music videos, as well as the increasing predominance of EDM foundations in mainstream music over the past 3 years, helps promote general awareness of emerging-technology-grounded and NBIC-driven concepts, causes and potential-crises while simultaneously presenting a sexy and self-empowering vision of technology and the future to mainstream audiences. The only reason this is mentionable in the first place is the fact that these are mainstream artists and labels reaching very large audiences.

In this installment, I will be analyzing a number of music videos for tracks by “real EDM” artists, released by exclusively-EDM record labels, to show that these futuristic themes aren’t just a consequence of EDM’s adoption by mainstream music over the past few years, and that there is a long history of futuristic aesthetics and gestalts in electronic music, as well as recurrent themes of self-empowerment through technology.

In this part I will discuss some of these recurrent themes, which can be seen to derive from a number of aspects shared by Virtual Art (any art created without the use of physical instruments), of which contemporary electronic music is an example because it is created using software. I argue that this – production via software – will become the predominant means of art production for all artistic mediums, from the auditory and visual to eventual olfactory, somatosensory and proprioceptive mediums. The interface between artist and art will become progressively thinner and more transparent, culminating in a time when Brain-Computer-Interface technology can sense neural operation and translate it directly into an informational form – at first to be played by physical systems (e.g. speakers), but eventually into a form that can be read by a given person’s own BCI and instantiated phenomenologically via high-precision technological neuromodulation (of which deep brain stimulation is an early form).

In the second part of this installment I will be following this discussion up with a look at some music videos for EDM-tracks that embody and exemplify the themes, aesthetics and general gestalts under consideration here.

Odditory Artificiality

The music videos accompanying many historical and contemporary examples of EDM tracks display consistently futuristic and technoprogressive thematics, aesthetics and plots, as well as positive, self-empowering and often primal-pleasure-appealing depictions of emerging and as-yet-conceptual technologies. Many also exemplify the recurrent theme of human-technology symbiosis, inter-constitution and co-deferent inter-determination. It is not just physical prosthesis – for in a way language is as much a prosthetic technology as an artificial arm. This definition of prosthesis doesn’t make a distinction between nonbiological systems for the restoration of statistically-normal function and nonbiological systems for the facilitation or instantiation of enhanced functions and/or categorically-new functional modalities. And nor should it. I argue that such a dichotomy is invalid because our functional modalities are always changing. This was true of biological evolution and it is true of mind and of cultural evolution as well. Other recurrent themes depicted in these videos include technological autonomy and animacy, and the facilitation of seemingly magical or otherwise-impossible feats, either via technology or else against a futuristic background.

These videos are not wrong for picking up on the self-empowering and potential-liberating inherencies of technology, nor their radically-transformative and ability-extending potentials. Indeed, as I argued in brief in the first installment of this series, electronic music exemplifies a general trend and methodology that will become standard for more and more artistic mediums, and to an increasingly large degree in each medium, as we move forward into the future. Contemporary EDM and electronic music is made using software – and this fundamental dissociation with physical instrumentation demonstrates the liberating potentials of what I have called virtuality – the realm of information, the ontics of semiotics, and the ability to readily create, modulate and modify a given informational object to an arbitrarily-precise degree. Not only do artists have the ability to modulate and modify a given sound-wave or sound-wave-ensemble with greater magnitude and precision, but they can do so to create end-result sound-waves that are either impossible with current physical instruments or else significantly harder to produce with physical instruments.

Virtuality De-Scarcitizes

The ability to create without constraint (i.e. if it’s an information-product then we aren’t constrained by the use of physical resources or dependency on materials-processing and system-configuration/component-integration) means that our only limiting factor is available or objective-optimal memory and computation. The ability to readily duplicate an information-product with negligible resource-expenditure (e.g. it doesn’t cost much, in terms of memory or computation, to create and transmit an electronic file) means that any resources expended in the creation (whether computationally or manually by a human programmer) or maintenance (e.g. storage) of the information-product are amortized over all the instances in which it is duplicated – that is, its cost, or the amount of resources expended in comparison to the net product, is cut in half every time the number of copies is doubled.
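
The amortization claim is simple arithmetic: a fixed creation cost spread over n copies gives a per-copy cost of roughly cost/n, so each doubling of the number of copies halves the per-copy cost. A minimal illustration, with assumed (purely illustrative) figures:

```python
# Per-copy cost of an information product: a one-time creation cost amortized
# over every copy, plus a negligible marginal cost per duplication.
# The figures are illustrative assumptions.

CREATION_COST = 10_000.0   # assumed one-time cost to produce the work
MARGINAL_COST = 0.001      # assumed cost to duplicate/transmit one copy

def per_copy_cost(copies: int) -> float:
    return CREATION_COST / copies + MARGINAL_COST

for copies in (1, 2, 4, 1_000, 1_000_000):
    print(f"{copies:>9,} copies -> ${per_copy_cost(copies):,.4f} per copy")
# Each doubling of the number of copies roughly halves the per-copy cost.
```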

Is it coincidence that these de-scarcitizing and constraint-eschewing properties inherent in information-products are paralleled and reflected so perfectly – in thematic, aesthetic and gestalt – by electronic-music videos? Or could such potentials be felt by our raw intuitions, seen in the ways in which technology empowers people, expands their choices, frees possibilities and works once-wonders on a daily basis, and simply amplified through the cultural magnifying-glass of art? After all, if one looks back through the history of electronic music, one can see, among its many early pioneers and antecedents, individuals and movements that acknowledged these de-scarcitizing, possibility-actualizing and self-empowering potentials in various ways. This very virtue of virtuality could be seen, exemplified in embryonic form, in early forms of electronic music as long as 100+ years ago – for instance in the works and manifestos of Italian Futurism, an early-20th-century art movement which embraced (among other artistic sub-genres) Noise Music, an early-20th-century embodiment of electronic music.


It’s not as though EDM came out of nowhere, after all (claims to constraintless creation aside); the technological synthesis of sound can be seen as a natural continuation of the trends set out by the creation and development of recording equipment in the early to mid-20th century, and heralded by the explosion of popularity the electric guitar and synthesizers saw in the 1960s. In an interview given in 1969, Jim Morrison essentially predicts the predominance of electronic music we are seeing today, saying that “I guess in four or five years the new generation’s music will have a synthesis of those two elements [blues and folk] and some third thing, maybe it will be entirely, um, it might rely heavily on electronics, tapes… I can kind of envision one person with a lot of machines, tapes, electronic setups singing or speaking using machines.”

Sound-Wave Sculptor

I believe that the use of noise to make music will continue and increase until we reach a music produced through the use of electrical instruments which will make available for musical purposes any and all sounds that can be heard. Photoelectric, film and mechanical mediums for the synthetic production of music will be explored.
John Cage, The Future of Music: Credo, 1937 [2].

When did these underlying potentialities inherent in virtual or informational mediation really start to become obvious, or at least detectable in nascent or fledgling form?

The de-scarcitizing effects of virtually-mediated art (a class that includes such early embodiments and antecedents of electronic music) seem only to have become obvious on a level beyond intuition when the ability to artificially synthesize sound brought with it a greatly increased ability to directly modulate and modify such sound.

This marked the beginning of the trend that distinguishes this class as categorically different from physically-mediated art. After all, playing an instrument can be considered modulating it just as operating a turntable can, so what constitutes the effective difference? Namely the greatly increased range and precision of modulation (that is, the precision with which the artist can modulate a given sound or create a given sound to his liking, which corresponds to the degree of accuracy between his mental ideal and what he can produce in physicality) made possible by the technologies and techniques that allow us to artificially synthesize sound in the first place.

Sound-waves can be modulated (i.e. controlled or affected in real-time) or modified (i.e. recorded, controlled or affected in iterations or gradually, and then replayed without modulation in real-time) with greater precision (e.g. ability to modulate a waveform within smaller intervals of time or with a smaller standard-deviation/tolerance-interval/margin-of-error). The magnitude of such changes (e.g. the range of frequencies a given waveform can be made to conform to, or the range of pitches a given waveform can be made to embody, through such methods) is also greater than the potential magnitude available via the modulation of playing a physical instrument. What’s more, fundamentally new categories of sound can be produced as well, whereas in non-virtually-mediated-music such fundamentally new categories of sound would require a whole new physical instrument — if they can be reproduced by physical instrumentation at all.
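
A minimal sketch of what modulating a sound-wave in software means in practice: a few lines suffice to synthesize a tone and sweep its frequency continuously from 220 Hz to 880 Hz, with pitch, duration and amplitude all specified to arbitrary precision as numbers. The specific parameters here are arbitrary, and writing a WAV file is just one convenient output.

```python
import math
import struct
import wave

# Synthesize a 3-second tone whose frequency sweeps from 220 Hz to 880 Hz.
# Purely illustrative: pitch, timing and amplitude are just numbers that
# software can set and modulate to arbitrary precision.

RATE, SECONDS = 44_100, 3
samples = []
phase = 0.0
for n in range(RATE * SECONDS):
    t = n / RATE
    freq = 220 + (880 - 220) * t / SECONDS   # linear frequency sweep
    phase += 2 * math.pi * freq / RATE       # integrate phase for a smooth glide
    samples.append(int(32767 * 0.5 * math.sin(phase)))

with wave.open("sweep.wav", "w") as f:
    f.setnchannels(1)                        # mono
    f.setsampwidth(2)                        # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```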

The earliest synthesizers heralded the future of all art mediums; artificially-created, modulated and modified sound via the user-interface of knobs, dials and keys is one small step away from music produced solely through software – and one giant leap beyond the watered-down and matter-bound paradigm of music and artistic media in general that preceded it.

References:

[1] Kostelanetz, Richard. 1986. “John Cage and Richard Kostelanetz: A Conversation about Radio”. The Musical Quarterly.72 (2): 216–227.

[2] Cage, John. 1939. “Future of Music; Credo”.

Does Advanced Technology Make the 2nd Amendment Redundant? https://lifeboat.com/blog/2013/04/does-advanced-technology-make-the-2nd-amendment-redundant https://lifeboat.com/blog/2013/04/does-advanced-technology-make-the-2nd-amendment-redundant#comments Thu, 18 Apr 2013 16:06:19 +0000 http://lifeboat.com/blog/?p=7033
This article was originally published by Transhumanity

The 2nd amendment of the American Constitution gives U.S citizens the constitutional right to bear arms. Perhaps the most prominent justification given for the 2nd amendment is as a defense against tyrannical government, where citizens have a method of defending themselves against a corrupt government, and of taking their government back by force if needed by forming a citizen militia. While other reasons are sometimes called upon, such as regular old individual self-defense and the ability for the citizenry to act as a citizen army in the event their government goes to war despite being undertrooped, these justifications are much less prominent than the defense-against-tyrannical-government argument is.

This may have been fine when the Amendment was first conceived, but considering the changing context of culture and its artifacts, might it be time to amend it? When it was adopted in 1791, the defensive power afforded to the citizenry by owning guns was roughly on par with the defensive power available to government. In 1791 the most popular weapon was the musket, which was limited to roughly 4 shots per minute and had to be re-loaded manually. The state-of-the-art for “arms” in 1791 was roughly equal for both citizenry and military. This was before automatic weapons – never mind tanks, GPS, unmanned drones, and the like. In 1791, the only thing that distinguished the defensive or offensive capability of military from citizenry was quantity. Now it’s quality.

Technological growth has made the 2nd amendment redundant. If one agrees that its purpose was to give the citizenry the ability to physically defend themselves against a tyrannical government, then we must admit that the advanced state of arms and weaponry available to the military, and not available to the citizenry, has made the 2nd amendment redundant: the types of weapons available to citizens no longer compare in defensive or offensive capability to the kinds of weapons available to the military. Law lags behind technology; what else is new(s)?

This claim would have been largely true as early as WWI, which saw the adoption of tanks, air warfare, naval warfare, poison gas and automatic weapons – assets which weren’t available to the average citizen. Military technology has only progressed since then. Indeed, the wedding of military assets with industrialization and mass-manufacturing that occurred during WWI may have entrenched this trend so deeply that we had no hope of ameliorating such technological disparity thereafter. This marked the beginning of the military industrial complex, which today assures that the overwhelming majority of new technological advances are able to be leveraged by the military before they trickle down to the average citizen through industry.

None of this will be a problem if advances in technologies of post-scarcity (e.g. nanotech, fab-labs) progress to the point where all cost becomes attributable to the information in the design of a given product. The average citizen currently doesn’t have access to the types of manufacturing and processing assets needed to create advanced weaponry; such assets are only available to the military, via the military-industrial complex. But if veritable means of post-scarcity came into the picture, then the only hope the military would have of keeping proprietary access to certain technologies (that is, of making certain technologies illegal to use and own if you’re an average citizen) would be to keep the designs of such weapons confidential – a possibility in turn undermined by the trend of increasing transparency, which some think will culminate in full-on sousveillance – in which case confidentiality is out of the question.

So the broader trend of increasing-power-in-fewer-hands, seen vividly in the increasing scale of destruction throughout the history of war, may level things out by itself (whether singly or in tandem with increasing transparency). I’m sure that when the first Atomic Bomb was dropped, very few people thought that so much destruction could have been unleashed by one bomb. Now we take for granted the fact that such things are possible. If the trend continues and the constructive and destructive capabilities available to an individual through the use of technology keeps on climbing, this dichotomy (of inequality of offensive/defensive capability between citizen and military) may be flattened out on its own, and may turn out to be but a bump in the road.

Conclusion, confusion, contusion:

So, should we give the 2nd amendment a final shot to the head on the grounds that its most called-upon utility has been obviated by technological growth– or should we level the laying-field from the opposite direction, and give every man, woman and child access to the latest in cutting-edge weapons-of-mass-destruction?

Probably neither. The transformative potential of technology makes such 2-tone options seem pale and inadequate. Perhaps the real message is this: that technologies can disrupt and rupture what seem to be quiet raptures weighty with wait and at rest, that futures often refute and that the past is quick to become the post – that technologies transform, and that we must be on constant guard against our precast foundations and preconceptions, which can turn at any moment with a little technological momentum underfoot. While they may have made sense at one point, sensibility was made to be remade. Culture is a seismic landscape, and what we take for Law, whether physical or Man-Made, always remains terribly (and thankfully) uncertain in the face of technologies’ upward growth.

We must always remain open to facing the New, and to remaking our selves and our world in response thereto, even if on the face of it the victory of our change seems like our defeat. Technology changes the circumstances, and we cannot rely on tradition and unflinching Law to provide the answer. We must always be ready to lift the veil and have another look at the available options when new technologies come into play, and always remain willing to will our own better way. Certainty is a fool’s crown, and one that the bastard-prince Newness will be fast to dash to the ground.

Killing Deathist Cliches: “Death Gives Meaning to Life” is Meaningless! https://lifeboat.com/blog/2013/04/killing-deathist-cliches-death-gives-meaning-to-life-is-meaningless https://lifeboat.com/blog/2013/04/killing-deathist-cliches-death-gives-meaning-to-life-is-meaningless#comments Fri, 12 Apr 2013 11:41:19 +0000 http://lifeboat.com/blog/?p=7005 Le Petit Trépas

One common argument against Radical Life Extension is that a definitive limit to one’s life – that is, death – provides some essential baseline reference, and that it is only in contrast to this limiting factor that life has any meaning at all. In this article I refute the argument’s underlying premises, and then argue that even if such premises were taken as true, its conclusion – that eradicating death would negate the “limiting factor” that legitimizes life — is also invalid.

Death gives meaning to life? No! Death makes life meaningless!

One version of the argument, which I’ve come across in a variety of places, is given in Brian Cooney’s Posthuman, an introductory philosophical text that uses various futurist scenarios and concepts to illustrate the broad currents of Western Philosophy. Towards the end he makes his argument against immortality, claiming that if we had all the time in the universe to do what we wanted, then we wouldn’t do anything at all. Essentially, his argument boils down to “if there is no possibility of not being able to do something in the future, then why would we ever do it?”

This assumes that we take actions on the basis of not being able to take them again. But people don’t make decisions this way. We didn’t go out to dinner because the restaurant was closing down… we went out for dinner because we wanted to go out for dinner… I think that Cooney’s version of the argument is naïve. We don’t make the majority of our decisions by contrasting an action with the possibility of not being able to do it in future.

His argument seems to be that if there were infinite time then we would have no way of prioritizing our actions. If we had a list of all possible actions set before us, and time were limitless, we might (according to his logic) accomplish all the small, negligible things first, because they’re easier, while all the hard things can wait. If we had all the time in the world, we would have no reference point with which to judge how important a given action or objective is, which ones it is most important to get done, and which ones should get done in the place of other possibilities. If we really can do every single thing on that listless list, then why bother, if each is as important as every other? In his line of reasoning, importance requires scarcity.

A useful analogy: current economic definitions of value require scarcity. If everything were as abundant as everything else, if nothing were scarce, then we would have no way of ascribing economic value to a given thing, such that one thing has more economic value than another. What we sometimes forget is that ecologies aren’t always like economies.

Seethe of Sooth and Teethe of Truth

Where could this strange notion have come from? That death would give meaning to life… Is it that our intuitions have picked up on the fact that we usually draw conclusions and make final, definitive interpretations only once a given thing is finished (e.g. we wait until the book is done before we decide whether it was good or bad)? Is it because we feel that lives, much like stories, need a definitive end to be true, and that something must be true to matter?

Could this (at least partly) come from the long philo-socio-historical tradition of associating truth and meaning with staticity and non-change? It makes seeming sense that we would rather truth not be squirming around on our hand while we’re looming for a better view. If truth is stillborn and stable, then we can make pronouncements we feel won’t dissipate as soon as they’ve left the tongue. If truth is motionless, then we might just be able to get a handle on it. If it’s running about like a wild animal, then any attempt to make or to discover truth might be murdered remorselessly by truth’s newest transformation. Corpses are easier to keep canned in ken than runny kids, after all.

If something can go on towards infinity then there is no time at which it will stop moving, no time it will come definitively to rest and say ‘I am this.’ If we don’t have an end to curtail the reverse-comet-tail of our forward rail, no last-exit exhale, never to come to rest so as to rest in one definitive piece, then we won’t ever be static enough to fit this vile definition of truth-as-staticity. This rank association of truth with being-at-rest has infected our very language: thus to go in a straight line without wavering is to keep true.

So this memetic foray has yielded a possible line-of-conceptual-association. We must have an end to be still, we must be still to have truth, and we must have truth to matter at all. Perhaps. There is no telling without a look at the till, and unfortunately it’s been taken by the wind.

If truth is that which does in fact exist, if truth is existence, then they’ve committed a dire irony by grounding truth in ground instead of sky, in the ironed smock instead of wrinkled frock, and by locating truth-as-existence in stillbirth and death so ill as to be still as still can be. If truth is life and life is motion, then how can truth be motionless death? They forget that their hard iron ore once flowed molten-bright and ductile enough to be pushed by oar.

It also makes slick and seemly sense, on the sheen of the surface at least, that we’ve associated change with death and the negation of truth. What once was is no more, and change is the culprit. Disintegration, destruction, death and the rank rot of fetid flesh all use change as their conduit. What they’ve failed to see is that the same is true of life, which acts solely through change. They’ve mistaken upheaval for removal, forgetting that to be we heave by the second as we breathe unbeckoned. Death only seems to require change because it’s still life until the very end. Life is change, life is motion, and death, when finally finished, is just the opposite.

In any case, they are wrong. Life doesn’t need limitation to get its hard-sought legitimation. Life is its own baseline and reference point. Death is a negation of life, taking all and leaving but the forsaken debris strewn by your wakeup quake.

High-Digger’s Being is Time Timing Itself

Another version of the “limiting factor” argument comes from Martin Heidegger, in his massive philosophical work Being and Time.

In the section on being-toward-death he claims, on one level, that Being must be a totality, and that in order to be a totality (in the sense of being absolute, of not containing anything outside of itself) it must also encompass that which it is not. Being can only become what it is not through death, and so in order for Being to become a totality (which he argues it must, if it is to achieve authenticity, the goal all along) it must become what it is not, namely death, for completion. This reinforces some of the interpretations made above, which link truth with completion and completion with staticity.

Another line of reasoning taken by Heidegger seems to reinforce the interpretation made by Cooney, which was probably influenced heavily by Heidegger’s concept of being-toward-death. The “fact” that we will one day die causes Being to reevaluate itself, to realize that it is time, that time is finite, and that its finitude requires it to take charge of its own life, that is, to find authenticity. Finitude, for Heidegger, legitimizes our freedom. If we had all the time in the world to become authentic, then what’s the point? It can always be deferred. But if our time is finite, then the choice of whether or not to achieve authenticity falls into our own hands. Since we must make choices about how to spend our time, failing to become authentic by spending that time on actions that don’t help us achieve authenticity becomes our own fault.

To be philosophically scrupulous would involve dissecting Heidegger’s mammoth Being and Time, and that is beyond the scope of this essay. Anyone who thinks I’ve misinterpreted Heidegger, or who thinks that Heidegger’s concept of being-toward-death warrants a fuller explication than what it’s been given here, is encouraged to comment.

Can Limitless Life still have a “Filling Stillness” and “Legitimizing Li’mit”?

Perhaps more importantly, even if their premises were correct (i.e. that the “change” of death adds some baseline limiting factor, causing you to do what you would not have done had you all the time in the world, and thereby constituting our main motivator for motion and metric for meaning), they are still wrong in the conclusion that indefinitely-extended life would destroy or jeopardize this “essential limitation”.

The crux of the “death-gives-meaning-to-life” argument is that life needs scarcity, finitude or some other factor restricting the possible choices that could be made, in order to find meaning. But final death need not be the sole candidate for such a restricting factor.

Self: La Petite Mort

All changed, changed utterly… A terrible beauty is born. The self sways by the second. We are creatures of change, and in order to live we die by the moment. I am not the same as I once was, and may never be the same again. The choices we prefer and the decisions we are most likely to make go through massive upheaval.

The changing self could constitute this “scarcitizing” or limiting factor just as well as death could. We can be compelled to prioritize certain choices and actions over others because we might be compelled to choose differently in another year, month or day. We never know what we will become, and this is a blessing. Life itself can act as the limiting factor that, for some, legitimizes life.

Society: La Petite Fin du Monde

Society is ever on an s-curve swerve of consistent change as well. Culture is in constant upheaval, with new opportunities opening upward all the time. Thus the changing state of culture and humanity’s upheaved hump through time could act as this “limiting factor” just as well as death or the changing self could. What is available today may be gone tomorrow. We’ve missed our chance to see the Roman Empire at its highest point, to witness the first Moon landing, to pioneer a new idea now old. Opportunities appear and vanish all the time.

Indeed, these last two points (that the changing state of self and society, together or singly, could constitute such a limiting factor just as effectively as death could) serve to undermine another common argument against the desirability of limitless life, namely boredom, thereby killing two inverted phoenixes with one stoning. Too often is this rather baseless claim bandied about as a reason to forestall RLE: that longer life will lead to increased boredom. That self and society are in a constant state of change means that boredom should become increasingly harder to maintain. We are on the verge of our umpteenth rebirth, and the modalities of being that are set to become available to us, as selves and as societies, will ensure that the only way to entertain the notion of increased boredom will be to personally hard-wire it into ourselves.

Life gives meaning to life, dummy!

Death is nothing but misplaced waste, and I think it’s time to take out the trash, with haste. We don’t need death to make certain opportunities more pressing than others, or to allow us to assign higher priorities to one action than we do to another. The change underlying life’s self-overcoming will do just fine, thank you.

This article was originally published by Transhumanity.
