
Havens over Hell — Ecosystems of the Venusian Tropopause

In our ongoing ambitions to colonise space, and in our search for exoplanets in Goldilocks zones, it is often overlooked that the most Earth-like environment known to us lies in our own Solar System, and very nearby: the upper reaches of the Venusian troposphere.

The surface of Venus evokes classical images of Hell: a dark sea of fire and brimstone, where temperatures rise to an incredible 450°C (hot enough to melt lead, tin and zinc), and where the pressure is so extreme (92 bar) that the atmosphere ghosts in and out of an ocean of supercritical carbon dioxide. Sulphur dioxide tints the air, and sulphuric acid rains down on volcanic plains. But one just needs to look to the skies…

At about 50 km to 60 km above the surface, in the upper reaches of the Venusian troposphere, the environment is quite different. At these altitudes the temperature sits in our comfort zone of 0°C to 50°C, and the air pressure is similar to that of habitable regions of Earth.

With an atmosphere rich in carbon dioxide (96.5%) and abundant solar radiation, the conditions are ideal for photosynthesis. One can imagine solar-powered craft sustaining ecosystems in which ideal conditions for photosynthesis ensure an abundant source of food and oxygen for inhabitants. The solar energy here arrives from all directions: the high reflectivity of the clouds below means that nearly as much light is reflected upward as arrives from above, with an upward solar intensity of 90%, so such craft would have little concern about electricity or energy consumption. Indeed, that energy would not even be needed to keep the craft airborne, as the oxygen store would double as a natural lifting agent. In the Venusian atmosphere of carbon dioxide, oxygen is a lifting gas, in the same way that helium is a lifting gas on Earth. With temperature, pressure, gravity, and a constant source of food and oxygen via plant growth all accounted for, not to mention close proximity to Earth, waste and water recycling would be the main challenge for the permanence of such Venusian craft, where the initial establishment of a balanced ecosystem is key. The engineering challenge would be far less than that of establishing a colony or base on Mars. Just don’t look down!
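
To put the lifting-gas claim in concrete terms, here is a back-of-the-envelope buoyancy estimate in Python (a minimal sketch: the pressure and temperature are assumed, Earth-like values for the 50–60 km cloud layer, and the atmosphere is treated as pure CO2 under the ideal gas law):

    # Buoyancy of oxygen and breathable air in a CO2 atmosphere at assumed
    # Earth-like conditions, via the ideal gas law: rho = P * M / (R * T).
    R = 8.314          # J/(mol*K), universal gas constant
    P = 101_325.0      # Pa (~1 bar), assumed pressure at the cloud layer
    T = 300.0          # K (~27 degC), assumed temperature at the cloud layer

    M_CO2 = 0.04401    # kg/mol, carbon dioxide
    M_O2 = 0.03200     # kg/mol, oxygen
    M_AIR = 0.02897    # kg/mol, breathable N2/O2 mix

    def density(molar_mass):
        """Ideal-gas density in kg/m^3 at the assumed P and T."""
        return P * molar_mass / (R * T)

    for name, molar_mass in [("oxygen", M_O2), ("breathable air", M_AIR)]:
        lift = density(M_CO2) - density(molar_mass)
        print(f"{name}: ~{lift:.2f} kg of lift per cubic metre")
    # oxygen: ~0.49 kg/m^3, breathable air: ~0.61 kg/m^3 -- both float in
    # CO2, so the habitable volume itself supplies much of the craft's lift.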

Quantum Metamaterials and the Feasibility of Invisibility Cloaks

Metamaterials, materials engineered to have properties that do not exist in nature, such as negative refraction, are opening up interesting possibilities in future engineering. The discovery of negative refraction has led, for example, to the creation of invisibility cloaks, which seamlessly bend light and other electromagnetic radiation around an object, though such cloaks have typically been confined to cumbersome laboratory experiments with split-ring resonators and/or restricted to an insufficient slice of spectrum.
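
The “negative refraction” at the heart of such cloaks is easy to state quantitatively. The sketch below applies Snell’s law with an idealised refractive index of −1 (an assumed textbook value, not a measured property of any particular metamaterial) to show how the refracted ray bends to the same side of the surface normal as the incident ray:

    import math

    # Snell's law, n1 * sin(theta1) = n2 * sin(theta2), with a negative n2.
    def refraction_angle_deg(theta1_deg, n1=1.0, n2=-1.0):
        s = n1 * math.sin(math.radians(theta1_deg)) / n2
        return math.degrees(math.asin(s))

    print(refraction_angle_deg(30.0))  # -30.0 degrees: the ray is bent
    # "backwards", emerging on the same side of the normal as it arrived.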

A recent article in ExtremeTech [1] drew attention to the world’s first quantum metamaterial, created by a team of material scientists at the Karlsruhe Institute of Technology in Germany. It is believed such a quantum metamaterial can overcome the main problem with traditional metamaterials based on split-ring resonators, which can only be tuned to a small range of frequencies and so cannot operate across a useful slice of spectrum. While fanciful applications such as quantum birefringence and super-radiant phase transitions are cited, it is perhaps invisibility cloaks, which until very recently seemed the preserve of science fiction, that capture the imagination.

Breakthroughs at the National Tsing-Hua University in Taiwan have also made great strides towards quantum invisibility cloaks [3], and as the arXiv blog on TechnologyReview recently commented, ‘invisibility cloaks are all the rage these days’ [2]. With such breakthroughs, these technologies may soon find mass take-up in future consumer products and security, and they also have abundant military uses, which may provide the financial stimulus to advance the technology to its true capabilities. Indeed, researchers in China have been looking into how to mass-produce invisibility cloaks from materials such as Teflon [4]. We’ll all be invisible soon.

[1] The first quantum meta-material raises more questions than it answers
http://www.extremetech.com/extreme/168060-the-first-quantum-…it-answers

[2] Quantum Invisibility Cloak Hides Objects from Reality
http://www.technologyreview.com/view/516006/quantum-invisibi…m-reality/

[3] Hide the interior region of core-shell nano-particles with quantum invisible cloaks
http://www.arxiv.org/abs/1306.2120

[4] Chinese Researchers Make An Invisibility Cloak For Mass Production
http://www.technologyreview.com/view/519166/chinese-research…5-minutes/

Peer-to-Peer Science: The Century-Long Challenge to Respond to Fukushima


Emanuel Pastreich (Director)

Layne Hartsell (Research Fellow)

The Asia Institute

More than two years after an earthquake and tsunami wreaked havoc on a Japanese power plant, the Fukushima nuclear disaster is one of the most serious threats to public health in the Asia-Pacific, and the worst case of nuclear contamination the world has ever seen. Radiation continues to leak from the crippled Fukushima Daiichi site into groundwater, threatening to contaminate the entire Pacific Ocean. The cleanup will require an unprecedented global effort.

Initially, the leaked radioactive materials consisted of cesium-137 and cesium-134, and to a lesser degree iodine-131. Of these, the real long-term threat comes from cesium-137, which is easily absorbed into bodily tissue, and whose half-life of about 30 years means it will remain a threat for decades to come. Recent measurements indicate that the escaping water also carries increasing levels of strontium-90, a far more dangerous radioactive material than cesium. Strontium-90 mimics calcium and is readily absorbed into the bones of humans and animals.
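
The arithmetic behind that “decades to come” claim is simple exponential decay; a minimal Python sketch, using the standard ~30-year half-life figure for cesium-137:

    # Fraction of cesium-137 remaining after t years, from its half-life.
    HALF_LIFE_YEARS = 30.17

    def fraction_remaining(t_years, half_life=HALF_LIFE_YEARS):
        return 0.5 ** (t_years / half_life)

    for t in (10, 30, 60, 100):
        print(f"after {t:3d} years: {fraction_remaining(t):.1%} remains")
    # after 10 years ~79% remains; after 30, ~50%; after 100, still ~10%.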

The Tokyo Electric Power Company (TEPCO) recently announced that it lacks the expertise to effectively control the flow of radiation into groundwater and seawater and is seeking help from the Japanese government. TEPCO has proposed setting up a subterranean barrier around the plant by freezing the ground, thereby preventing radioactive water from eventually leaking into the ocean—an approach that has never before been attempted in a case of massive radiation leakage. TEPCO has also proposed erecting additional walls now that the existing wall has been overwhelmed by the approximately 400 tons per day of water flowing into the power plant.

But even if these proposals were to succeed, they would not constitute a long-term solution.

A New Space Race

Solving the Fukushima Daiichi crisis needs to be considered a challenge akin to putting a person on the moon in the 1960s. This complex technological feat will require focused attention and the concentration of tremendous resources over decades. But this time the effort must be international, as the situation potentially puts the health of hundreds of millions at risk. The long-term solution to this crisis deserves at least as much attention from government and industry as do nuclear proliferation, terrorism, the economy, and crime.

To solve the Fukushima Daiichi problem will require enlisting the best and the brightest to come up with a long-term plan to be implemented over the next century. Experts from around the world need to contribute their insights and ideas. They should come from diverse fields—engineering, biology, demographics, agriculture, philosophy, history, art, urban design, and more. They will need to work together at multiple levels to develop a comprehensive assessment of how to rebuild communities, resettle people, control the leakage of radiation, dispose safely of the contaminated water and soil, and contain the radiation. They will also need to find ways to completely dismantle the damaged reactor, although that challenge may require technologies not available until decades from now.

Such a plan will require the development of unprecedented technologies, such as robots that can function in highly radioactive environments. This project might capture the imagination of innovators in the robotics world and give a civilian application to existing military technology. Improved robot technology would prevent the tragic scenes of old people and others volunteering to enter into the reactors at the risk of their own wellbeing.

The Fukushima disaster is a crisis for all of humanity, but it is a crisis that can serve as an opportunity to construct global networks for unprecedented collaboration. Groups or teams aided by sophisticated computer technology can start to break down into workable pieces the immense problems resulting from the ongoing spillage. Then experts can come back with the best recommendations and a concrete plan for action. The effort can draw on the precedents of the Intergovernmental Panel on Climate Change, but it must go far further.

In his book Reinventing Discovery: The New Era of Networked Science, Michael Nielsen describes principles of networked science that can be applied on an unprecedented scale. The breakthroughs that come from this effort can also be used for other long-term programs such as the cleanup of the BP Deepwater Horizon oil spill in the Gulf of Mexico or the global response to climate change. The collaborative research regarding Fukushima should take place on a very large scale, larger than the sequencing of the human genome or the maintenance of the Large Hadron Collider.

Finally, there is an opportunity to entirely reinvent the field of public diplomacy in response to this crisis. Public diplomacy can move from a somewhat ambiguous effort by national governments to repackage their messaging to a serious forum for debate and action on international issues. As public diplomacy matures through the experience of Fukushima, we can devise new strategies for bringing together hundreds of thousands of people around the world to respond to mutual threats. Taking a clue from networked science, public diplomacy could serve as a platform for serious, long-term international collaboration on critical topics such as poverty, renewable energy, and pollution control.

Similarly, this crisis could serve as the impetus to make social networking do what it was supposed to do: help people combine their expertise to solve common problems. Social media could be used not as a means of exchanging photographs of lattes and overfed cats, but rather as an effective means of assessing the accuracy of information, exchanging opinions between experts, forming a general consensus, and enabling civil society to participate directly in governance. With the introduction of adequate peer review into the social media platform, such as that advocated by the Peer-to-Peer Foundation (P2P), social media can play a central role in addressing the Fukushima crisis and responding to it. As Michel Bauwens, a leader in the P2P movement, suggests in an email, “peers are already converging in their use of knowledge around the world, even in manufacturing at the level of computers, cars, and heavy equipment.”

Here we may find the answer to the Fukushima conundrum: open the problem up to the whole world.

Peer-to-Peer Science

Making Fukushima a global project that seriously engages both experts and common citizens in the millions, or tens of millions, could give some hope to the world after two and a half years of lies, half-truths, and concerted efforts to avoid responsibility on the part of the Japanese government and international institutions. If concerned citizens in all countries were to pore through the data and offer their suggestions online, there could be a new level of transparency in the decision-making process and a flourishing of invaluable insights.

There is no reason why detailed information on radiation emissions and the state of the reactors should not be publicly available in enough detail to satisfy the curiosity of a trained nuclear engineer. If the question of what to do next comes down to the consensus of millions of concerned citizens engaged in trying to solve the problem, we will have a strong alternative to the secrecy that has dominated so far. Could our cooperation on the solution to Fukushima be an imperative to move beyond the existing barriers to our collective intelligence posed by national borders, corporate ownership, and intellectual property concerns?

A project to classify galaxies throughout the universe has demonstrated that if tasks are carefully broken up, it is possible for laypeople to play a critical role in solving technical problems. In the case of Galaxy Zoo, anyone who is interested can qualify to go online, classify distant galaxies by type, and enter the information into a database. It is all part of a massive effort to expand our knowledge of the universe, which has been immensely successful and has demonstrated that there are aspects of scientific analysis that do not require a Ph.D. In the case of Fukushima, if an ordinary person examines satellite photographs online every day, he or she can become more adept than a professor at identifying unusual flows carrying radioactive materials. There is a massive amount of information related to Fukushima that requires analysis, and at present most of it goes virtually unanalyzed.
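
The pattern Galaxy Zoo relies on, redundant classification settled by majority vote, is simple enough to sketch. The example below is a toy illustration with hypothetical data and labels, not Galaxy Zoo’s actual pipeline:

    from collections import Counter

    # Crowdsourced analysis: show each item to several volunteers and take
    # the majority label, so no single classification needs expert accuracy.
    def consensus(labels):
        (label, _count), = Counter(labels).most_common(1)
        return label

    volunteer_votes = {
        "satellite_image_001": ["anomalous", "normal", "anomalous", "anomalous"],
        "satellite_image_002": ["normal", "normal", "anomalous", "normal"],
    }
    for item, labels in volunteer_votes.items():
        print(item, "->", consensus(labels))
    # satellite_image_001 -> anomalous; satellite_image_002 -> normal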

An effective response to Fukushima needs to accommodate both general and specific perspectives. It will initially require a careful and sophisticated setting of priorities. We can then set up convergence groups that, aided by advanced computation and careful efforts at multidisciplinary integration, could respond to crises and challenges with great effectiveness. Convergence groups can also serve as a bridge between the expert and the layperson, encouraging a critical continuing education about science and society.

Responding to Fukushima is as much about educating ordinary people about science as it is about gathering together highly paid experts. It is useless for experts to come up with novel solutions if they cannot implement them. But implementation can only come about if the population as a whole has a deeper understanding of the issues. Large-scale networked science efforts that are inclusive will make sure that no segments of society are left out.

If the familiar players (NGOs, central governments, corporations, and financial institutions) are unable to address the unprecedented crises facing humanity, we must find ways to build social networks, not only as a means to come up with innovative concepts, but also to promote and implement the resulting solutions. That process includes pressuring institutions to act. We need to use true innovation to pave the way to an effective application of science and technology to the needs of civil society. There is no better place to start than the Internet and no better topic than the long-term response to the Fukushima disaster.

Originally published in Foreign Policy in Focus on September 3, 2013

Space-Mining For Our Fastest Depleting Resource: Helium

Most of us know helium as that cheap, inert, lighter-than-air gas we use to fill party balloons and inhale to raise the pitch of our voices as a party trick for kids. However, helium has far more important uses to humanity: in medicine (e.g. MRI scanners), military and defense (submarine detectors use liquid helium to clean up noisy signals), next-generation nuclear reactors, space shuttles, solar telescopes, infrared equipment, diving, arc welding, particle physics research (the super-magnets in particle colliders rely on liquid helium), the manufacture of many digital devices, the growing of silicon crystals, and the production of LCDs and optical fibers [1].

The principal reason helium is so important is its ultra-low boiling point and inert nature, which make it the ultimate coolant of the human race. As the isotope helium-3, helium is also used in nuclear fusion research [2]. However, our Earthly supplies of helium are being used at an unprecedented rate; at the current rate of consumption we will run out within 25 to 30 years [4]. Because the gas is thought of as cheap, it is often wasted. However, those who understand the situation, such as Prof Robert Richardson, co-chair of a recent US National Research Council inquiry into the coming helium shortage, warn that the gas is cheap not because the supply is inexhaustible, but because of the Helium Privatisation Act passed in 1996 by the US Congress [3].

Helium accounts for only 0.00052% of the Earth’s atmosphere, and the majority of the helium harvested comes from beneath the ground, extracted from minerals or tapped gas deposits, which makes it one of the rarest elements of any form on the planet. However, the Act requires the helium stores [4] held underground near Amarillo in Texas to be sold off at a fixed rate by 2015, regardless of the market value, to pay off the original cost of the reserve. The Amarillo storage facility holds around half the Earth’s stocks of helium: around a billion cubic meters of the gas. The US currently supplies around 80 percent of the world’s helium, and once this supply is exhausted one can expect the cost of the remaining helium on Earth to increase rapidly, as this is in all practicality a non-renewable resource.

There is no chemical way of manufacturing helium, and the supplies we have originated in the very slow radioactive alpha decay that occurs in rocks. It has taken some 4.5 billion years for the Earth to accumulate our helium reserves, which we will have exhausted within about a hundred years of the establishment of the US National Helium Reserve in 1925. When this helium is released to the atmosphere, in helium balloons for example, it is lost forever, eventually escaping into space [5][6]. So what shall we do when this crucial resource runs out? In some cases liquid nitrogen (−195°C) may be adopted as a replacement, but in many cases liquid nitrogen cannot stand alone as a coolant: it tends to be trickier to work with (its triple point and melting point sit at around −210°C), so liquid helium is used wherever a coolant must remain liquid at the extremely cold temperatures required. No more helium means no more liquid helium (−269°C) to cool NMR (nuclear magnetic resonance) apparatus and machines such as MRI scanners. One wonders, therefore: must we look towards space exploration to replenish the rarest of our resources on Earth?

Helium is actually the second most abundant element in the Universe, accounting for as much as 24 percent of the Universe’s mass [7], mostly in stars and the interstellar medium. Mining the gas giants, which hold this gas in great abundance, has been proposed in a NASA memorandum on the topic [8], and it has been suggested that such atmospheric mining may be easier than mining the surfaces of outer-planet moons. While that memorandum focused on the possibility of mining helium-3 from the atmosphere of Jupiter, with its inherent complications of delta-V and radiation exposure, a more appropriate destination for mining regular helium may be the more placid ice giant Uranus (not considered in the memorandum, as the predicted concentration of helium-3 in the helium portion of the Uranian atmosphere is quite small). Leaving aside specific needs for helium-3, which can be mined in sufficient volume much closer to home, on our Moon [9], a large-scale mining mission to Uranus for the more common non-radioactive isotope could ensure the Earth does not have to compromise so many important sectors of modern technology in the near future through exhaustion of our helium stock. A relatively low wind speed (900 km/h, comparing favorably to 2,100 km/h on Neptune), a lower G-force (surface gravity 0.886 g, escape velocity 21.3 km/s) [10] and an abundance of helium in its atmosphere (15 ± 3%) could make it the more attractive option, despite the distance (approx. 20 AU), extreme cold (50–70 K) and radiation belts [11] involved. Weighing the complexities of radiation, distance, time and temperature involved in human piloting of such a cargo craft, the mission could be considered better suited to automation, remote-controlled robotics similar to orbiter probes, even though this introduces its own set of challenges in AI and remote control.
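
The distance alone reshapes the mission design. A quick calculation of the one-way signal delay at 20 AU (standard constants; the distance is the approximate figure quoted above) shows why “remote control” really means supervised autonomy:

    # One-way light delay to Uranus at ~20 AU.
    AU_M = 1.495978707e11   # metres per astronomical unit
    C = 2.99792458e8        # speed of light, m/s

    delay_hours = 20 * AU_M / C / 3600
    print(f"one-way signal delay: ~{delay_hours:.1f} hours")
    # ~2.8 hours each way (~5.5 hours round trip): joystick-style remote
    # control is impossible, so the craft must handle most decisions itself.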

However, we have a Catch-22: NASA space programs use the gas to support their launch vehicles [12]. Liquid fuels are volatile and packed with corrosive material that could destroy a spacecraft’s casing, so helium gas is used to purge and pressurise the fuel systems. If helium could be replaced in such vehicles with some alternative, and advances in space transportation were made to significantly increase the cargo such ships could carry over interplanetary distances, perhaps a case could be made for such ambitious gas-mining missions, though at present, given current NASA expenditure, this seems like fantasy [13]. Realistic proposals for the exploration of Uranus [14] fall far short of these requirements. Helium is a rare and unique element we need for many industrial purposes, but if we do not conserve and recycle it, we doom mankind to a future shortage, with little helium left for future generations here on Earth [15]; for now, replenishing it from space seems a rather long shot.

————————————————

[1] 8 Surprising High-Tech Uses for Helium — TechNewsDaily
http://www.technewsdaily.com/5769-8-surprising-high-tech-helium.html
[2] Helium-3 as used in Nuclear Fusion Research
http://en.wikipedia.org/wiki/Helium-3
[3] The world is running out of helium — Nobel prize winner Prof Robert Richardson.
http://phys.org/news201853523.html#jCp
[4] The Federal Helium Reserve
http://www.blm.gov/nm/st/en/prog/energy/helium/federal_helium_program.html
[5] Why the World Will Run Out of Helium
http://scienceblogs.com/startswithabang/2012/12/12/why-the-w…of-helium/
[6] Will We Run Out of Helium?
http://chemistry.about.com/b/2012/11/11/will-we-run-out-of-helium.htm
[7] Where Is Helium Found — Universe Today
[8] Bryan Palaszewski, “Atmospheric Mining in the Outer Solar System”
http://www.grc.nasa.gov/WWW/RT/2005/RT/RTB-palaszewski1.html
[9] Mining the Moon for Helium-3 — RocketCitySpacePioneers
http://www.rocketcityspacepioneers.com/space/mining-the-moon-for-helium-3
[10] Uranus — Physical characteristics
http://en.wikipedia.org/wiki/Uranus
[11] Uranus’s Magnetosphere — NASA Voyager VPL
http://voyager.jpl.nasa.gov/science/uranus_magnetosphere.html
[12] Space shuttle use of propellants and fluids — NASA KSC
http://www-pao.ksc.nasa.gov/kscpao/nasafact/pdf/ssp.pdf
[13] Project Icarus: The Gas Mines of Uranus
http://news.discovery.com/space/project-icarus-helium-3-mining-uranus-110531.htm
[14] The case for a Uranus orbiter, Mark Hofstadter et al.
http://www.lpi.usra.edu/decadal/opag/UranusOrbiter_v7.pdf
[15] Why the World Will Run Out of Helium
http://scienceblogs.com/startswithabang/2012/12/12/why-the-w…of-helium/

Neo-Democracy: The Evolution of the Democratic Republic


Dustin Ashley

Abstract

This essay presents a new political paradigm based upon concepts drawn from direct democracy, meritocracy, technocracy, and egalitarian ideology. I systematically redesign the common political system so that these concepts can complement each other and work as a synergistic whole. The main idea is to recreate the direct democratic system made famous by the ancient Athenians while repurposing it for this era in human history and for many generations to come.

1. Introduction

Karl Marx wrote that “[t]he history of all hitherto existing society is the history of class struggles.” (Marx and Engels 1848) This has proved true in many rising world powers, where the rich often take advantage of the working class. The American Gilded Age, for example, set the example for what happens when laissez-faire liberalism becomes rampant. During this era, politicians set up “political machines” to keep themselves and their allies in office for as long as they wished, while companies took control of single markets and created monopolies under which they could do as they pleased. One major proponent of this version of the free-market economy was William Graham Sumner, whose book What Social Classes Owe to Each Other (1884) endorsed laissez-faire while opposing assistance to the poor. This philosophy was one major reason for the rise of the plutocracy and corporatocracy that still resonate through America to this day.

To keep this from happening again, emerging nations must learn from these past follies and make sure they are not repeated. To prevent such a system from arising, a form of government must be established in which every person has equal opportunity of access to the nation’s resources while being unable to usurp anyone else’s ability to obtain similar resources. This means a government that places in office only those who prove themselves worthy via an administered exam, and that solves national issues with problem-solving strategies modeled on the scientific method. With a 21st-century mindset and the aid of our finest technology, we can create a more efficient and practical form of government than before.

2. Basic Political Structure

This new political paradigm is a technologically aided form of direct democracy that draws elements from technocracy, meritocracy, and egalitarian ideology. Its main inspiration is Athenian democracy, in which citizens did not vote for representatives but voted directly on matters of state. Even though the Athenians did not grant suffrage to women, slaves, children, or immigrants, participation was not restricted by class, and citizens often took part in large numbers. These aspects can be applied to this paradigm, in which there are no representatives and anybody of any class can participate.

In addition, technology can be used to supplement the political process and improve government to its highest state of efficiency. This includes using the Internet to enable citizens to become more active in making decisions for their government. Such claims are supported by Ann Macintosh, who coined the term “E-Democracy” for the use of technology as a supplement to democracy. She states that “E-democracy is concerned with the use of information and communication technologies to engage citizens, support the democratic decision-making processes and strengthen representative democracy.” (Macintosh 2006) Not only does this allow for more active participation in political affairs, it can also lead to more efficient solutions to troubling problems. When technology is spliced with democracy, democracy can evolve as technology does.

It is important that every citizen be given equal opportunity to pursue their interests without the lingering fear that something will inhibit them from achieving their goals. It is an egalitarian thought that every person deserves an equal chance, regardless of form, ethnic background, or intellect. This appears in the works of both Karl Marx and John Locke. Locke states that all people were created equal and that everyone has a natural right to defend his “Life, health, Liberty, or Possessions.” (Locke 1690) Marx, on the other hand, believed in an equal distribution of the nation’s wealth to every citizen. Even though their philosophies differ, both held a view of egalitarianism that is still relevant today. When wealth can be distributed equally to everyone, and everybody retains the ability to defend their basic human rights, there lies the key to an egalitarian society.

In the synergistic combination of egalitarianism and technological democracy, you will find technocracy. This peculiar form of government calls for a nation’s leaders to be scientists, engineers, and others with compatible skills, rather than politicians and businessmen. (Berndt 1982) These technocrats apply the scientific method to social problems rather than political or philosophical doctrine. They are voted in according to who is most qualified, not who has the most money or the best connections. This form of government is partially implemented in the Communist Party of China, where most leaders are engineers. The five-year plans of the People’s Republic of China have enabled it to plan ahead in a technocratic fashion, building projects such as the National Trunk Highway System, the China high-speed rail system, and the Three Gorges Dam. (Andrews 1995) By implementing technocracy in a nation’s government, it is possible for the nation to become prolific and prosperous.

3. The Voting Masses

The voting masses comprise every individual eligible to vote: every free person of age to make a responsible choice. Whereas age is a subjective requirement open to discussion, a free individual is simply one who is not incarcerated. The voting masses have no political or governmental responsibilities and may vote if they choose to do so. There are no further requirements, and they possess the majority of the political power, as shown in their ability to influence the nation by approving or denying any laws presented to them. In summation, every individual may be involved in their nation’s government as much or as little as they want.

4. EDD

All new sovereigns and bills must be approved by the voting masses before they are enacted. This is made possible through a form of direct democracy called electronic direct democracy, or EDD, which allows the common people to take part in the legislative process and removes the need for a legislative branch of government. The Florida Institute of Technology has researched and developed technology that supports EDD, implementing it in student organizations. (Kattamuri et al. 2005) If proven successful, this further dissolves the need for representative democracy while giving more power to the common people.
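
As a toy illustration of the EDD approval flow just described (the class, names and threshold here are hypothetical; a real voting system would also need authentication, anonymity and auditing):

    from dataclasses import dataclass

    @dataclass
    class Bill:
        title: str
        votes_for: int = 0
        votes_against: int = 0

        def cast(self, approve: bool):
            if approve:
                self.votes_for += 1
            else:
                self.votes_against += 1

        def enacted(self) -> bool:
            # Simple majority of votes actually cast; abstention is fine,
            # since participation is a choice, not a duty (see section 3).
            return self.votes_for > self.votes_against

    bill = Bill("Example infrastructure act")
    for choice in (True, True, False, True):
        bill.cast(choice)
    print(bill.enacted())  # True: enacted, with 3 votes for and 1 against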

5. Sovereigns of the State

The Sovereigns of the State are a group of individuals who coordinate the different aspects of a nation while addressing the needs of the people. Since the roles a sovereign must fulfill are too numerous for one person, multiple sovereigns work together ad hoc, each with a different duty to discharge in an effective and productive manner for the sake of the nation. These include:

  • Sovereign of the Military:

The Sovereign of the Military, or High General, is responsible for commanding the nation’s military during times of war. The individual has the capability to address the nation and declare war, but the declaration must be approved by the voting masses before it is enacted. The High General regulates the military and makes sure the nation is prepared for when an attack is imminent. To become the Sovereign of the Military, one must be an experienced soldier of high rank who understands battlefield tactics and can lead the nation in wartime.

  • Sovereign of the Consensus:

The Sovereign of the Consensus, or Head Chairman, plays a dormant role as a peacekeeper during times when new sovereigns are voted in. The Head Chairman also serves as a tiebreaker for when a stalemate occurs during the voting process.

  • Sovereign of Energy:

The Sovereign of Energy focuses on energy production and distribution while overseeing the development of more efficient energy sources.

  • Sovereign of Treasury:

The Sovereign of Treasury, or National Economist, focuses on financial and monetary matters and is in charge of manufacturing currency. The National Economist is responsible for formulating economic and tax policies and managing public debt, and must hold an advanced degree in economics and have experience in financial matters.

  • Sovereign of Education:

The Sovereign of Education, or National Educator, is responsible for education policies in public schools and institution accreditation. The National Educator must have a degree in education with experience in teaching at both public schools and universities.

  • Sovereign of Foreign Affairs:

The Sovereign of Foreign Affairs, or Chief Diplomat, is responsible for maintaining stable relations with other nations and other diplomatic duties. The Chief Diplomat is also responsible for issues pertaining to foreign policy. In order to become eligible for this position, one must have experience with matters dealing with diplomacy and foreign affairs.

  • Sovereign of Labour:

The Sovereign of Labour enforces laws involving unions, the workplace, and any business-person interactions. This also includes maintaining minimal unemployment within the nation.

  • Sovereign of National Affairs:

The Sovereign of National Affairs is responsible for issues pertaining to land management, landmark preservation, natural disaster response, immigration policies, and law enforcement policies.

  • Sovereign of Human Services:

The Sovereign of Human Services, or Head Physician, is responsible for issues concerning disease control, advancement in medical technologies, final approval of pharmaceutical drugs and medicines, food safety and management, nutrition, and welfare. To be eligible for this position, the aspirant must have a medical degree and experience in the medical field.

  • Judicial Sovereign:

The Judicial Sovereign is responsible for reviewing all bills before they are enacted as laws, making sure they do not go against the principles written down in the nation’s primary social contract, i.e. the constitution. The Judicial Sovereign also serves as Head Judge during trials for high crimes, such as murder and fraud. To be eligible for this position, the applicant must already be a licensed attorney and/or judge with experience in legal matters.

These sovereigns can be placed into office by merit alone, not by standing within the community. Each candidate is given time to distribute a list of their accomplishments along with their criminal record, during which period the voting masses can decide whom they believe fit for the job. These measures ensure that the voting masses elect those they judge fit for the positions, and not merely the winners of a popularity contest. To further ensure that applicants are not committing fraud, their paperwork is first reviewed by a group of volunteers who verify the authenticity of the applicants and their documents. The identities of the volunteers are kept anonymous so that they cannot be bribed or intimidated by the applicants. The volunteers form a discipline-specific administration system and are not under the influence of any focus group; to be selected, they must show that they are experts in their field and not already under any such influence.

6. Judicial System Within The Political System

In a governmental sense, the judicial system is used to declare whether a bill is protected by the nation’s social contract or goes against it. Typically, a bill that goes against the social contract is vetoed and terminated. The judicial branch serves as a “political buffer” between the legislative and executive branches, which gives its leaders considerable power. In this framework, the judicial branch works as a mediator between the voting masses and the sovereigns. To keep matters fair, the members of the judicial branch are to be impartial towards both sides.

7. Conclusion

This new political paradigm serves only as a framework for a political system, not as a system in itself. It can be modified, expanded, or condensed as needed, so long as the main idea is not lost. It may serve as the next step in constructing a new political system based on progressive thought and pro-technology ideology. Whether it remains a theoretical concept or someone applies these ideas to their own organization, this framework is meant for anyone to read and use.

Works Cited

  1. Marx, K., and Engels, F. 1848. The Communist Manifesto
  2. Sumner, W.G. 1884. What Social Classes Owe to Each Other
  3. Macintosh, A. 2006. Characterizing E-Participation in Policy-Making
  4. Locke, J. 1690. Second Treatise of Government
  5. Berndt, E.R. 1982. From Technocracy to Net Analysis: Engineers, Economists, And Recurring Energy Theories of Value. Studies in Energy and the American Economy, Discussion Paper No. 11
  6. Andrews, J. 1995. Rise of the Red Engineers
  7. Kattamuri, S. et al. 2005. Supporting Debates Over Citizen Initiatives

The Benefit of Specialization – Bitcoin as an Invented Currency

Originally posted as Part II of a four-part introductory series on Bitcoin on May 7, 2013 in the American Daily Herald. See the Bitcoin blog for all four articles.

The emergence of money and its importance in enabling trade between people has been well researched and documented in the literature of the Austrian School of economics – Theory of Money and Credit by Ludwig von Mises and Man, Economy and State by Murray N. Rothbard being prime examples. The contribution of the Austrian greats to the understanding of money and its origin made clear exactly what money is (e.g. the most marketable commodity), the different types of media that are employed in exchange between people (e.g. commodity money, credit money, fiat money and money substitutes) and a theoretical explanation for their origin (the Regression Theorem). The Austrian School has also given arguably the most convincing analysis of the relationship between the money type in use, the manner by which it is controlled and the business cycle – emphasizing the importance of sound money. But except for a few sparse outliers, what the Austrian School has yet to do is fully recognize Bitcoin as a valid scholarly and academic topic. With this article, I hope to contribute to its recognition.

Money’s characteristics

Money enabled people in the early stages of civilization to move from direct exchange, with difficulties such as the double coincidence of wants, to indirect exchange. This improved mechanism paved the way for man’s specialization in his tasks, enabling a division of labor within society, since each specialized laborer could trade his goods for others indirectly through a medium of exchange. Money has taken many forms, but there are certain characteristics all forms should have. Aristotle, for instance, provided the following four:

  1. Durable – The item must remain usable and retain its characteristics, for which it is valued, over long periods of time (e.g. shouldn’t fade, corrode, rot, etc).
  2. Portable – One should be able to carry it upon their person. A related point is that it would be desirable to have a high value per unit weight, making large quantities portable too.
  3. Divisible – By having uniformity of quality or homogeneity, the item should retain its characteristics when divided into smaller parts or when recombined to a larger unit. Thus, a similar point is the fungibility of the item, meaning that the units can be substituted for one another.
  4. Intrinsically Valuable – The intended meaning is that it should have value as a commodity regardless of its property as money, although as I argued in a previous article, value is subjective and therefore extrinsic to the item, so it cannot in itself be intrinsically valuable. A related point is that the item, ideally, would be rare and certainly not subject to unlimited reproducibility – meaning it should be scarce.

Though Aristotle did not specifically mention fungibility, scarcity or other points such as recognizability, stability of supply, malleability, etc., these points generally cover the qualities of good money. The fact that there are monies out there (e.g. fiat money) that so blatantly lack an important characteristic (scarcity: fiat money is subject to practically unlimited reproduction) is what makes the Mises Regression Theorem so interesting, in that it explains how such a money came about.

Man’s desire for convenience

Mises defined money, in its narrower sense, as taking three forms: commodity money, credit money, and fiat money. In its broader sense, money substitutes like fiduciary media are also used. Of all these forms of money, the most convenient are fiat money, credit money and money substitutes. These forms can be represented by pieces of paper (e.g. banknotes or contracts) and therefore, as long as there is trust in the issuing entity or in the counterparty, these monetary forms will be accepted ‘as good as’ the money that backs them or the money promised in the contract. Banknotes, token money and the like stemmed from the fact that the common man did not want to store large amounts of precious commodities in his home or carry them on his person. Banks stored the commodities and issued redeemable notes instead. Let’s face it, humans choose the path of least resistance, and so convenience is desirable.

The unfortunate situation that arose is that when banks (or their ‘money warehouse’ predecessors) realized that not everyone wants all of their stored gold at once, they started issuing multiple banknotes backed by the same unit of money stored. This fraud became pervasive and was eventually legally licensed by the state. So while ‘hard currencies’ are good, their lack of convenience has led, as a matter of historical fact, to fractional reserve banking. This practice and the resulting expansion of the monetary base introduce anomalies into the economy and bring about the business cycle.
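
The expansion works like a geometric series: if banks keep a reserve ratio r and re-lend the rest, each unit of stored money ends up backing 1/r units of notes in circulation. A minimal sketch with illustrative figures:

    # Fractional-reserve expansion:
    # base * (1 + (1-r) + (1-r)^2 + ...) = base / r
    def total_notes_in_circulation(base_deposit, reserve_ratio):
        return base_deposit / reserve_ratio

    print(total_notes_in_circulation(100.0, 0.10))
    # 1000.0 -- 100 units of stored gold end up backing 1,000 in notes.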

Another aspect inherent to commodity money (and most other money types) is that the payment system has always been separate from the money itself. Whether one carries a bag of coins in one’s pocket or arranges for an armored van, payment requires delivery of money. Banks and clearing houses took on this service, charging lucrative transaction fees in the process. Here, too, it became more convenient to use credit money or internet banking, where one transfers only the information about the transaction, and where the transfer is just as convenient regardless of the sum involved. No physical asset can be transferred instantaneously and without effort.

As desirable as physical commodities such as gold and silver are, the fact that they become increasingly less convenient the more you have of them has turned out to be their Achilles’ heel.

Division of labor and specialization of tools

As we can see from the above arguments, while commodity money has been the soundest of options, it is not without its flaws. But remember, this is not an unusual phenomenon. Being self-sufficient and growing one’s own food is also prudent, yet most will concur it has its disadvantages. Humans have discovered that division of labor and specialization make everyone better off. Specialization, though, is most effective when the tools one uses are also custom-made for the task at hand. Imagine using a gardening trowel as a ladle for your soup, or a battle axe as a butcher knife… This is a facetious comment, to be sure, but why then must we ‘make do’ with an ornamental commodity or a block of highly conductive metal as money? Humans once used flint to start fires because that is what nature provided; surely we agree a lighter is much better. Why should we not seek to invent a tool to facilitate monetary transactions, call it money, that covers the characteristics noted above (Aristotelian or otherwise) as ideally as can be? Then just set it free and see if it acquires value through a catallactic process, much as gold and silver did in the past. As Rothbard said in relation to gold: “If gold, after being established as money, were suddenly to lose its value in ornaments or industrial uses, it would not necessarily lose its character as a money.” If one invents money and it establishes itself, who cares if it has no other purpose?

Whether for the reason of making a more perfect money or just to make a digital form of it, an unknown hacker (or group of hackers) brilliantly devised a new money — Bitcoin. We see that it has already acquired some value and a quick search will show an ever increasing number of businesses willing to trade in Bitcoin. It is already a medium of exchange for a growing number of countercultures. Whether it continues to gather momentum is an empirical question, one for which only time has the answer. But let us not forget this is a free market phenomenon. Nothing about its ownership, mining or its use violates private property rights. As with any good on a truly free market – the only test it must withstand is the test of marketability and popularity within the confines of the non-aggression principle and private property rights.

But does it serve customers’ needs?

By far the best and most academically rigorous description of Bitcoin I have seen is given by Peter Šurda in his Master’s thesis. Konrad Graf has also written extensively on the subject with clarity and insight. I cannot do justice here to the arguments they put forward, but I share their view that Bitcoin has superior qualities as measured against the characteristics of money.

  1. Durable – Bitcoin can exist in any number of forms, be it physical or intangible (yes, you can actually have a Bitcoin coin or card). It can be printed on paper or committed to memory. But at its core, it is abstract and can be made to be as secure as the network it depends upon. Its peer-to-peer nature makes it all but impossible for governments to shut down.
  2. Portable – If it exists in its intangible form, there is nothing more portable than 1s and 0s. A million bitcoins weigh as much as a millionth of one. It is also the most easily transportable good – no shipping costs, insurance, etc. It is, after all, its own payment system. In fact, it is so portable, you can carry backup copies with you or trusted parties, hidden in USB keys and on anonymous servers – this is the only form of money that could pose an insurmountable challenge to those wishing to confiscate your money.
  3. Divisible – Each coin is divisible into 100 million smaller units, meaning that even if each bitcoin rises to $1 million, we would still have the equivalent of a penny (see the short sketch after this list). Likewise, Bitcoin is perfectly fungible.
  4. Scarce (Intrinsically Valuable) – Bitcoin is rare (total quantity will not exceed 21 million units) and is not subject to unlimited reproducibility. This is by design and due to its complete decentralization, there is no one entity that can override this characteristic.
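
The divisibility point is easy to verify; a one-liner with a hypothetical future price:

    # Each bitcoin divides into 100 million base units ("satoshis").
    SATOSHI_PER_BTC = 100_000_000

    price_usd = 1_000_000  # hypothetical price of one bitcoin
    print(price_usd / SATOSHI_PER_BTC)  # 0.01 -- one US cent per satoshi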

Šurda additionally showed how it was superior in logistics, manipulation, authentication, transaction costs of property rights, counter-party risk, and others. Graf noted its superiority also in purchasing power and stability of supply, lending itself to become a catalyst for deflation (in the good way). And since Bitcoin isn’t a raw material in creating other products, premiums aren’t charged for industry use. Therefore, no other lower-order goods’ costs of production are affected by its potentially ever-increasing value in a deflationary environment.

Best of all, this is a commodity money that does not need money substitutes and it doubles as its own payment system. This leads to two very desirable outcomes. Banks would no longer be needed as ‘money warehouses’. Individuals could store their own bitcoins, much like they store their cat photos on their hard drives or online. Many libertarians wonder how to make fractional reserve banking illegal. You would not need to – it would come about naturally since this ‘service’ would no longer be required. Banks’ role in the business of transferring money would dwindle as well, since paying in Bitcoin is as easy as sending an email. Transaction fees and capital controls would become a thing of the past. Banks would therefore revert to providing useful services, such as pairing up those who want a loan with those who have money to lend. They would finally be forced to innovate, same as businesses across the spectrum have been doing since the dawn of civilization.

Conclusion

Bitcoin has the makings of money in its own right. As of now it is a medium of exchange for a limited group of individuals, but it has already acquired value and is already being purchased for its exchange value. Bitcoin is a free market phenomenon: the value it has was not forced upon anyone, and its use is not protected by legal decree.

Professor Hoppe notes the following: “Economic theory has nothing to say as to what commodity will acquire the status of money. Historically, it happened to be gold. But if the physical make-up of our world would have been different or is to become different from what it is now, some other commodity would have become or might become money. The market will decide.”

What becomes money is indeed an empirical question that we can only analyze with hindsight. From the great work done by the economists quoted above, we can uncover another empirical fact – that money has arisen in a spontaneous manner, through the evolution of successive generations of human actors. But let us not conclude that money can only arise spontaneously – it can be purposefully invented and then left to the market to adopt or reject. What we are witnessing is the adoption of a new invented form of money. Money, after all, is a tool to facilitate economic transactions. We must accept – and build into our theories – the possibility of using a tool that is custom-built for this purpose. We must not merely denounce a form of money because its building blocks are not naturally provided or because it does not have other uses. If it meets and exceeds all of the characteristics of money, if it adheres to principles of economic scarcity and decentralization, and if actors on the free market see the value in it and freely exchange goods and services for it – we need to accept this, too, as having potential of being included in our books and scholarly articles alongside the time-honored alternatives. Let us have academic debates about its practical or economic merits and flaws. This is not a winner-take-all situation. Competition in currencies is just as valuable as competition in other areas. Let’s just remember that Bitcoin could well help us achieve a better and freer society with a sounder economy.

From New Physics to New Weapons Technologies

My paper “New Evidence, Conditions, Instruments & Experiments for Gravitational Theories” was finally published by the Journal of Modern Physics (2013, Vol. 8A), today, Aug 26, 2013.

Over the last several years I have been compiling a list of inconsistencies in contemporary physics; this paper documents 12 of them. If I am correct, there will sooner or later be a massive rewrite of modern physical theories, because I do not just criticize contemporary theories but critique them, i.e. provide positive suggestions, based on empirical data, on how our theories need to be modified.

The upshot of all this is that I was able to propose two original experiments, never before contemplated in the physics journals. Both involve new experimental devices, and one is so radically new as to be almost unthinkable: a gravity wave *telescope*.

The new physics lends itself to new and different forms of weaponization, achievable within the next few decades, with technologies *not* predicted in science fiction. How about that?

I have deliberately left this weaponization part vague because I want to focus on the propulsion technologies. Definitely not something string or quantum-gravity theories can even broach.

We will achieve interstellar travel in my lifetime, and my paper points to where to research this new physics and new technologies.

Paper Details:

Title: New Evidence, Conditions, Instruments & Experiments for Gravitational Theories

Author: Benjamin T Solomon

Journal: Journal of Modern Physics, 2013, Vol 8A

Journal Link (2013, Vol 8A): http://www.scirp.org/journal/JMP/

Paper Links: http://www.scirp.org/journal/PaperDownload.aspx?paperID=36276

Notes: The download count is displayed at the paper link on the journal web page, so you can see how many people are interested in this topic.

——————————————

Benjamin T Solomon is the author of the 12-year study An Introduction to Gravity Modification

There will always be a Moon over Tokyo: Fukushima

News this past week on Fukushima has not been exactly reassuring, has it? Meanwhile the pro-nuclear lobby keeps counting bananas. Here I have gathered together some of the recent news articles on the unfolding crisis. Interested to hear some comments on this one.

Fukushima leak is ‘much worse than we were led to believe’ / Aug 22, 2013, BBC NEWS http://www.bbc.co.uk/news/science-environment-23779561
Serious: Japan hikes Fukushima radiation danger level / August 21, 2013 RT NEWS http://rt.com/news/japan-fukushima-level-three-762/
Japan’s nuclear crisis deepens, China expresses ‘shock’ / Aug 21, 2013/ reuters http://www.reuters.com/article/2013/08/21/us-japan-fukushima…2B20130821
Worse than Chernobyl: The inner threat of Fukushima crisis / Aug 20, 2013/ RT http://rt.com/op-edge/chernobyl-fukushima-crisis-catastrophe-715/
Japan nuclear agency upgrades Fukushima alert level / Aug 21, 2013 / BBC NEWS http://www.bbc.co.uk/news/world-asia-23776345
Fukushima apocalypse: Years of ‘duct tape fixes’ could result in ‘millions of deaths’ / Aug 18 2013 / RT http://rt.com/news/fukushima-apocalypse-fuel-removal-598/
Fukushima’s Radioactive Water Leak: What You Should Know / National Geographic, Aug 2013 http://news.nationalgeographic.com/news/energy/2013/08/13080…ater-leak/

Quantum Entanglement in Future Communication Technologies

The arXiv blog on MIT Technology Review recently reported a breakthrough, ‘Physicists Discover the Secret of Quantum Remote Control’ [1], which led some to ask whether this could be used as an FTL communication channel. To appreciate the significance of the paper on quantum teleportation of dynamics [2], one should note that experiments have already bounded the apparent speed of the correlation between an entangled pair at *at least* 10,000 times the speed of light [3]. The next big communications breakthrough?

If this turns out to be a major breakthrough for long-distance communications in space exploration, several problems would be resolved at once. If a civilization is eventually established on a star system many light years away, for example on one of the recently discovered Goldilocks Zone super-Earths in the Gliese 667C system [4], then communications back to people on Earth may after all be… instantaneous.
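
For contrast, the classical baseline such a channel would have to beat is easy to compute (the ~23.6 light-year distance to Gliese 667C is the commonly quoted figure):

    # Round-trip delay for an ordinary light-speed link to Gliese 667C.
    DISTANCE_LY = 23.6
    print(f"one-way: {DISTANCE_LY} yr, round trip: {2 * DISTANCE_LY} yr")
    # A single question and answer takes ~47 years at light speed, which
    # is why an instantaneous channel would matter so much to a colony.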

The implications do not stop there, either. As recently reported in The Register [5], researchers at the Hebrew University of Jerusalem have established that quantum entanglement can be used to send data across both TIME AND SPACE [6]. Their paper ‘Entanglement Between Photons that have Never Coexisted’ [7] describes how photon-to-photon entanglement can be used to connect with photons in their past or future, opening a window on how one might engineer technology to communicate not just instantaneously across space, but across space-time.

Whilst in the past many have questioned what benefits have been gained from quantum physics research, and in particular from large research projects such as the LHC, it would seem that quantum entanglement may be one of the big pay-offs. Whilst it has yet to be categorically proven that quantum entanglement can be used as a communication channel, and the majority opinion dismisses the possibility, one can expect much activity in quantum entanglement over the next decade. It may yet spearhead the next technological revolution.

[1] Physicists Discover the Secret of Quantum Remote Control. MIT Technology Review, the arXiv blog. www.technologyreview.com/view/516636/physicists-discover-the…te-control
[2] Quantum Teleportation of Dynamics. http://arxiv.org/abs/1304.0319
[3] Bounding the speed of ‘spooky action at a distance’. http://arxiv.org/abs/1303.0614
[4] Universe Today. http://www.universetoday.com/103131/three-potentially-habita…iese-667c/
[5] The Register — Biting the hand that feeds IT — http://www.theregister.co.uk/
[6] http://www.theregister.co.uk/2013/06/03/quantum_boffins_get_spooky_with_time/
[7] Entanglement Between Photons that have Never Coexisted. http://arxiv.org/abs/1209.4191

Intimations of Imitations: Visions of Cellular Prosthesis & Functionally-Restorative Medicine

In this essay I argue that technologies and techniques used and developed in the fields of synthetic ion channels and ion-channel reconstitution, which have emerged from supramolecular chemistry and bio-organic chemistry over the past four decades, can be applied toward gradual cellular (and particularly neuronal) replacement. The result would be a new interdisciplinary field that directs such techniques and technologies toward the indefinite functional restoration of cellular mechanisms and systems, as opposed to their currently proposed uses: aiding in the elucidation of cellular mechanisms and their underlying principles, and serving as biosensors.

In earlier essays (see here and here) I identified approaches to the synthesis of non-biological functional equivalents of neuronal components (i.e., ion channels, ion pumps and membrane sections) and their sectional integration with the existing biological neuron: a sort of “physical” emulation, if you will. It has only recently come to my attention that there is an existing field, emerging from supramolecular and bio-organic chemistry, centered on the design, synthesis and incorporation/integration of both synthetic/artificial ion channels and artificial bilipid membranes (i.e., lipid bilayers). The potential uses for such channels commonly listed in the literature have nothing to do with life-extension, however, and to my knowledge the field has yet to envision using them to replace our existing neuronal components as they degrade (or before they have the chance to), seeing such uses instead as aiding in the elucidation of cellular operations and mechanisms, and as biosensors. I argue here that the very technologies and techniques that constitute the field (synthetic ion channels and ion-channel/membrane reconstitution) can be directed toward indefinite longevity and life-extension through the iterative replacement of cellular constituents (particularly the components comprising our neurons: ion channels, ion pumps, sections of bilipid membrane, etc.) so as to negate the molecular degradation they would otherwise eventually undergo.

While I envisioned an electro-mechanical-systems approach in my earlier essays, the field of synthetic ion channels has, from its start in the early 1970s, applied a molecular approach to the problem: designing molecular systems that produce certain functions according to their chemical composition or structure. Note that this approach corresponds to (or can be categorized under) the passive-physicalist sub-approach of the physicalist-functionalist approach (the broad approach overlying all varieties of physically-embodied, “prosthetic” neuronal functional replication) identified in an earlier essay.

The field of synthetic ion channels is also referred to as ion-channel reconstitution, which designates “the solubilization of the membrane, the isolation of the channel protein from the other membrane constituents and the reintroduction of that protein into some form of artificial membrane system that facilitates the measurement of channel function,” and more broadly denotes “the [general] study of ion channel function and can be used to describe the incorporation of intact membrane vesicles, including the protein of interest, into artificial membrane systems that allow the properties of the channel to be investigated” [1]. The field has been active since the 1970s, with experimental successes throughout the 1980s, 1990s and 2000s in incorporating functioning synthetic ion channels both into biological bilipid membranes and into artificial membranes dissimilar in molecular composition and structure to their biological analogues, thereby elucidating the supramolecular interactions underlying ion selectivity and permeability. The relevant literature suggests that their proposed use has thus far been limited to the elucidation of ion-channel function and operation, the investigation of their functional and biophysical properties, and, to a lesser degree, the construction of “in-vitro sensing devices to detect the presence of physiologically-active substances including antiseptics, antibiotics, neurotransmitters, and others” through the “… transduction of bioelectrical and biochemical events into measurable electrical signals” [2].

Thus my proposal of gradually integrating artificial ion channels and/or artificial membrane sections for the purpose of indefinite longevity appears to be novel. (That is, their use in replacing existing biological neurons toward the aim of gradual substrate replacement, or alternatively in constructing artificial neurons which, rather than replacing existing biological neurons, become integrated with existing biological neural networks toward the aim of intelligence amplification and augmentation, while assuming functional and experiential continuity with our existing biological nervous system.) The notion of artificial ion channels and neuronal membrane systems in general, by contrast, had already been conceived, and indeed successfully created and experimentally verified, though presumably not integrated in-vivo.

The field of functionally-restorative medicine (and its orphan sub-field of whole-brain gradual-substrate-replacement, or “physically-embodied” brain emulation, if you like) can take advantage of the decades of experimental progress in this field. It can incorporate the technological and methodological infrastructures used in and underlying the fields of ion-channel reconstitution and synthetic/artificial ion channels and membrane systems (and the technologies and methodologies underlying their corresponding experimental verification and incorporation techniques) for the purpose of indefinite functional restoration via the gradual and iterative replacement of neuronal components (including sections of bilipid membrane, ion channels and ion pumps) by MEMS (micro-electro-mechanical systems) or, more likely, NEMS (nano-electro-mechanical systems).

The technological and methodological infrastructure underlying this field can be utilized both for the creation of artificial neurons and for the artificial synthesis of normative biological neurons. Much work in the field required artificially synthesizing cellular components (e.g., bilipid membranes) with structural and functional properties as similar to normative biological cells as possible, so that alternative designs (i.e., those dissimilar to the normal structural and functional modalities of biological cells or cellular components) could be effectively tested for how they affect and elucidate cellular properties. The iterative replacement of single neurons, or the sectional replacement of neurons with synthesized cellular components (including sections of the bilipid membrane, voltage-gated ion channels, ligand-gated ion channels, ion pumps, etc.), is made possible by the large body of work already done in the field. Consequently, the technological, methodological and experimental infrastructures developed for the fields of synthetic ion channels and ion-channel/artificial-membrane reconstitution can be utilized for the purpose of a.) iterative replacement and cellular upkeep via biological analogues (or analogues not differing significantly in structure or functional and operational modality from their normal biological counterparts) and/or b.) iterative replacement with non-biological analogues of alternate structural and/or functional modalities.

Rather than sensing when a given component degrades and only then replacing it with an artificially synthesized biological or non-biological analogue, it appears much more efficient to determine the projected time it takes for a given component to degrade or otherwise lose functionality, and simply to automate its iterative replacement on that schedule, without providing in-vivo systems for detecting molecular or structural degradation. This would allow us to achieve both experimental and pragmatic success in such cellular prosthesis sooner, because it does not rely on the complex technological and methodological infrastructure underlying in-vivo sensing (especially at the scale of single neuronal components such as ion channels), and it avoids causing operational or functional distortion to the components being sensed.
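
To make the scheduling logic concrete, here is a minimal sketch of such a projection-driven replacement loop, written in Python. All component names and lifetime figures are hypothetical placeholders (no such in-vivo system or degradation dataset yet exists); the point is only that the control flow needs no sensing channel, just projected lifetimes and a safety margin.

import heapq

# Hypothetical projected functional lifetimes, in days (illustrative values).
PROJECTED_LIFETIME_DAYS = {
    "voltage_gated_ion_channel": 30.0,
    "ligand_gated_ion_channel": 45.0,
    "ion_pump": 60.0,
    "membrane_section": 90.0,
}

SAFETY_FACTOR = 0.5  # replace well before the projected degradation point

def build_schedule(components):
    """Queue each component for replacement at a fixed fraction of its
    projected lifetime; no in-vivo degradation sensing is involved."""
    queue = []
    for comp_id, kind in components:
        due = PROJECTED_LIFETIME_DAYS[kind] * SAFETY_FACTOR
        heapq.heappush(queue, (due, comp_id, kind))
    return queue

def run(queue, horizon_days):
    """Pop each component as it comes due, 'replace' it, and requeue it
    with a fresh projection, until the simulation horizon is reached."""
    while queue and queue[0][0] <= horizon_days:
        due, comp_id, kind = heapq.heappop(queue)
        print(f"t = {due:6.1f} d: replace {kind} #{comp_id}")
        next_due = due + PROJECTED_LIFETIME_DAYS[kind] * SAFETY_FACTOR
        heapq.heappush(queue, (next_due, comp_id, kind))

run(build_schedule([(1, "ion_pump"),
                    (2, "membrane_section"),
                    (3, "voltage_gated_ion_channel")]),
    horizon_days=120)

In a real system the lifetime projections would themselves come from degradation models of each component class, and the ‘replace’ step would be the hard part; the sketch only isolates the claim made above, that replacement can be driven by projections rather than by sensing.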

A survey of progress in the field [3] lists several broad design motifs. I will first list the design motifs falling within the scope of that survey, and the examples it provides. Selections from both papers are meant to show the depth and breadth of the field, rather than to elucidate the specific chemical or kinetic operations under the purview of each design variety.

For a much more comprehensive, interactive bibliography of papers falling within the field of synthetic ion channels, or constituting the historical foundations of the field, see Jon Chui’s online bibliography here, which charts the developments in this field up until 2011.

First Survey

Unimolecular ion channels:

Examples include a.) synthetic ion channels with oligocrown ionophores [5]; b.) α-helical peptide scaffolds and rigid push–pull p-octiphenyl scaffolds for the recognition of polarized membranes [6]; and c.) modified varieties of the β-helical scaffold of gramicidin A [7].

Barrel-stave supramolecules:

Examples of this general class include voltage-gated synthetic ion channels formed by macrocyclic bolaamphiphiles and rigid-rod p-octiphenyl polyols [8].

Macrocyclic, branched and linear non-peptide bolaamphiphiles as staves:

Examples of this sub-class include synthetic ion channels formed by a.) macrocyclic, branched and linear bolaamphiphiles and dimeric steroids, [9] and by b.) non-peptide macrocycles, acyclic analogs and peptide macrocycles [respectively] containing abiotic amino acids [10].

Dimeric steroid staves:

Examples of this sub-class include channels using a polyhydroxylated norcholentriol dimer [11].

p-Oligophenyls as staves in rigid-rod β-barrels:

Examples of this sub-class include “cylindrical self-assembly of rigid-rod β-barrel pores preorganized by the nonplanarity of p-octiphenyl staves in octapeptide–p-octiphenyl monomers” [12].

Synthetic Polymers:

Examples of this sub-class include synthetic ion channels and pores composed of a.) polyalanine, b.) polyisocyanates and c.) polyacrylates [13], formed by i.) ionophoric, ii.) ‘smart’ and iii.) cationic polymers [14]; d.) surface-attached poly(vinyl-n-alkylpyridinium) [15]; e.) cationic oligo-polymers [16]; and f.) poly(m-phenylene ethylenes) [17].

Helical β-peptides (used as staves in the barrel-stave method):

Examples of this class include cationic β-peptides with antibiotic activity, presumably acting as amphiphilic helices that form micellar pores in anionic bilayer membranes [18].

Monomeric steroids:

Examples of this sub-class include synthetic carriers, channels and pores formed by monomeric steroids [19]; synthetic cationic steroid antibiotics [that] may act by forming micellar pores in anionic membranes [20]; neutral steroids as anion carriers [21]; and supramolecular ion channels [22].

Complex minimalist systems:

Examples of this sub-class falling within the scope of this survey include ‘minimalist’ amphiphiles as synthetic ion channels and pores [23]; membrane-active ‘smart’ double-chain amphiphiles, expected to form ‘micellar pores’ or self-assemble into ion channels in response to acid or light [24]; and double-chain amphiphiles that may form ‘micellar pores’ at the boundary between photopolymerized and host bilayer domains, together with representative peptide conjugates that may self-assemble into supramolecular pores or exhibit antibiotic activity [25].

Non-peptide macrocycles as hoops:

Examples of this sub-class falling within the scope of this survey include synthetic ion channels formed by non-peptide macrocycles, acyclic analogs [26] and peptide macrocycles containing abiotic amino acids [27].

Peptide macrocycles as hoops and staves:

Examples of this sub-class include a.) synthetic ion channels formed by the self-assembly of macrocyclic peptides into genuine barrel-hoop motifs that mimic the β-helix of gramicidin A with cyclic β-sheets; the macrocycles are designed to bind on top of channels, and cationic antibiotics (and several analogs) are proposed to form micellar pores in anionic membranes [28]; b.) synthetic carriers, antibiotics (and analogs) and pores (and analogs) formed by macrocyclic peptides with non-natural subunits, where [certain] macrocycles may act as β-sheets, possibly as staves of β-barrel-like pores [29]; and c.) bioengineered pores as sensors, where covalent capture and fragmentation [have been] observed at the single-molecule level within an engineered α-hemolysin pore containing an internal reactive thiol [30].

Summary

Thus, even without knowledge of supramolecular or organic chemistry, one can see that a variety of alternate approaches to the creation of synthetic ion channels, and several sub-approaches within each larger ‘design motif’ or broad approach, not only exist but have been experimentally verified, diversified and refined.

Second Survey

The following selections [31] illustrate the chemical, structural and functional variety of synthetic ion channels, categorized according to whether they are cation-conducting or anion-conducting. These examples are used to further emphasize the extent of the field and the number of alternative approaches to synthetic ion-channel design, implementation, integration and experimental verification already in existence. Permission to use the following selections and figures was obtained from the author of the source.

There are five classical design motifs for synthetic ion-channels, categorized by structure, that are identified within the paper:

A: unimolecular macromolecules,
B: complex barrel-stave,
C: barrel-rosette,
D: barrel hoop, and
E: micellar supramolecules.
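
Purely as an organizing aid (and not something found in either survey), the five-motif classification can be sketched as a small Python lookup table, pairing each structural label with representative examples paraphrased from the selections quoted below; motif C has no example quoted in this section:

# Illustrative taxonomy of the five structural design motifs; the example
# strings are paraphrases of selections quoted later in this section.
DESIGN_MOTIFS = {
    "A: unimolecular macromolecules": [
        "oligoether-carboxylate / octadecylammonium ion-pair channel (Kobuke)",
        "diaza-18-crown-6 'hydraphile' channels (Gokel)",
    ],
    "B: complex barrel-stave": [
        "stacked cyclic D-/L-peptide nanotubes (Ghadiri)",
        "peptide-decorated p-octiphenyl beta-barrels (Matile)",
    ],
    "C: barrel-rosette": [],  # no example quoted in this section
    "D: barrel hoop": [
        "cyclic D-/L-peptides hydrogen-bonded into barrel-hoop structures (De Santis)",
    ],
    "E: micellar supramolecules": [
        "bolaamphiphile aggregates active as dimers or larger aggregates",
    ],
}

for motif, examples in DESIGN_MOTIFS.items():
    print(motif, "->", "; ".join(examples) if examples else "(none quoted here)")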

Cation Conducting Channels:

UNIMOLECULAR

“The first non-peptidic artificial ion channel was reported by Kobuke et al. in 1992” [33].

The channel contained “an amphiphilic ion pair consisting of oligoether-carboxylates and mono- (or di-) octadecylammonium cations. The carboxylates formed the channel core and the cations formed the hydrophobic outer wall, which was embedded in the bilipid membrane with a channel length of about 24 to 30 Å. The resultant ion channel, formed from molecular self-assembly, is cation-selective and voltage-dependent” [34].

“Later, Kobuke et al. synthesized another channel comprising a resorcinol-based cyclic tetramer as the building block. The resorcin[4]arene monomer consisted of four long alkyl chains, which aggregated to form a dimeric supramolecular structure resembling that of gramicidin A” [35]. “Gokel et al. had studied [a set of] simple yet fully functional ion channels known as ‘hydraphiles’” [39].

“An example (channel 3) is shown in Figure 1.6, consisting of diaza-18-crown-6 crown ether groups with alkyl chains as side arms and spacers. Channel 3 is capable of transporting protons across the bilayer membrane” [40].

“A covalently bonded macrotetracycle 4 (Figure 1.8) was shown to be about three times more active than Gokel’s ‘hydraphile’ channel, and its amide-containing analogue also showed enhanced activity” [44].

“Inorganic derivatives using crown ethers have also been synthesized. Hall et al. synthesized an ion channel consisting of a ferrocene and four diaza-18-crown-6 groups linked by two dodecyl chains (Figure 1.9). The ion channel was redox-active, as oxidation of the ferrocene caused the compound to switch to an inactive form” [45].

B STAVES:

“These are more difficult to synthesize [in comparison to unimolecular varieties] because the channel formation usually involves self-assembly via non-covalent interactions” [47]. “A cyclic peptide composed of an even number of alternating D- and L-amino acids (Figure 1.10) was suggested by De Santis to form a barrel-hoop structure through backbone–backbone hydrogen bonds” [49].

“A peptide nanotube synthesized by Ghadiri et al., consisting of cyclic D- and L-peptide subunits, forms a flat, ring-shaped conformation that stacks through extensive anti-parallel β-sheet-like hydrogen-bonding interactions (Figure 1.11)” [51].

“Experimental results have shown that the channel can transport sodium and potassium ions. The channel can also be constructed by the use of direct covalent bonding between the sheets so as to increase the thermodynamic and kinetic stability” [52].

“By attaching peptides to the octiphenyl scaffold, a β-barrel can be formed via self-assembly through the formation of β-sheet structures between the peptide chains (Figure 1.13)” [53].

“The same scaffold was used by Matile et al. to mimic the structure of the macrolide antibiotic amphotericin B. The channel synthesized was shown to transport cations across the membrane” [54].

“Attaching electron-poor naphthalenediimides (NDIs) to the same octiphenyl scaffold led to a hoop–stave mismatch during self-assembly that results in a twisted and closed channel conformation (Figure 1.14). Adding the complementary dialkoxynaphthalene (DAN) donor led to cooperative interactions between NDI and DAN that favor the formation of a barrel-stave ion channel” [57].

MICELLAR

“These aggregate channels are formed by amphotericin involving both sterols and antibiotics arranged in two half-channel sections within the membrane” [58].

“An active form of the compound is the bolaamphiphile (a two-headed amphiphile). Figure 1.15 shows an example that forms an active channel structure through dimerization or trimerization within the bilayer membrane. Electrochemical studies have shown that the monomer is inactive and that the active form involves dimers or larger aggregates” [60].

ANION CONDUCTING CHANNELS:

“A highly active, anion-selective, monomeric cyclodextrin-based ion channel was designed by Madhavan et al. (Figure 1.16). Oligoether chains were attached to the primary face of the β-cyclodextrin head group via amide bonds. The hydrophobic oligoether chains were chosen because they are long enough to span the entire lipid bilayer. The channel was able to select ‘anions over cations’ and ‘discriminate among halide anions in the order I⁻ > Br⁻ > Cl⁻ (following the Hofmeister series)’” [61].

“The anion selectivity occurred via the ring of ammonium cations positioned just beside the cyclodextrin head group, which helped to facilitate anion selectivity. Iodide ions were transported the fastest because the activation barrier to enter the hydrophobic channel core is lower for I⁻ than for either Br⁻ or Cl⁻” [62]. “A more specific artificial anion-selective ion channel was the chloride-selective ion channel synthesized by Gokel. The building block involved a heptapeptide with proline incorporated (Figure 1.17)” [63].

Cellular Prosthesis: Inklings of a New Interdisciplinary Approach

The paper cites “nanoreactors for catalysis and chemical or biological sensors” and “interdisciplinary uses as nano-filtration membranes, drug- or gene-delivery vehicles/transporters, as well as channel-based antibiotics that may kill bacterial cells preferentially over mammalian cells” as some of the main applications of synthetic ion channels [65], beyond their normative use in elucidating cellular function and operation.

However, I argue that a whole interdisciplinary field, a heretofore-unrecognized new approach or sub-field of functionally-restorative medicine, is possible by taking the technologies and techniques involved in constructing, integrating and experimentally verifying either a.) non-biological analogues of ion channels and ion pumps (and thus of transmembrane proteins in general, sometimes referred to as transport proteins or integral membrane proteins) and membranes (which include normative bilipid membranes, non-lipid membranes and chemically-augmented bilipid membranes), or b.) artificially synthesized biological analogues of ion channels, ion pumps and membranes, which are structurally and chemically equivalent to naturally-occurring biological components but are synthesized artificially, and applying such technologies and techniques toward the gradual replacement of the existing biological neurons constituting our nervous systems (or at least those neuron populations that comprise the neo- and prefrontal cortex), thereby achieving indefinite longevity through iterative procedures of gradual replacement. There is still work to be done in determining the comparative advantages and disadvantages of the various structural and functional (i.e., design) motifs, and in the logistics of implementing the iterative replacement or reconstitution of ion channels, ion pumps and sections of neuronal membrane in-vivo.

The conceptual schemes outlined in Concepts for Functional Replication of Biological Neurons [66], Gradual Neuron Replacement for the Preservation of Subjective-Continuity [67] and Wireless Synapses, Artificial Plasticity, and Neuromodulation [68] would constitute variations on the basic approach underlying this proposed, embryonic interdisciplinary field. Certain approaches within the field of nanomedicine itself, particularly those that constitute the functional emulation of existing cell types, should also be seen as falling under the purview of this new approach; Robert Freitas’s conceptual design for the functional emulation of the red blood cell (a.k.a. erythrocyte, haematid) [69], the respirocyte, is a case in point. Not all approaches to nanomedicine (e.g., diagnostics, drug delivery and neuroelectronic interfacing), however, constitute the physical (i.e., electromechanical, kinetic and/or molecular, physically-embodied) and functional emulation of biological cells.

The field of functionally-restorative medicine in general (and of nanomedicine in particular) and the field of supramolecular and organic chemistry converge here, where the technological, methodological and experimental infrastructures developed in the fields of synthetic ion channels and ion-channel reconstitution can be employed to develop a new interdisciplinary approach that applies the logic of prosthesis at the cellular and sub-cellular (i.e., cellular-component) scale; same tools, new use. These techniques could be used to iteratively replace the components of our neurons as they degrade, or to replace them with more robust systems that are less susceptible to molecular degradation. Instead of repairing the cellular DNA, RNA and protein transcription and synthesis machinery, we bypass it completely by configuring and integrating the neuronal components (ion channels, ion pumps and sections of bilipid membrane) directly.

Thus I suggest that theoreticians of nanomedicine look to the large quantity of literature already developed in the emerging fields of synthetic ion channels and membrane reconstitution, with the objective of adapting and applying existing technologies and methodologies to the new purpose of iterative maintenance, upkeep and/or replacement of cellular (and particularly neuronal) constituents with either non-biological analogues or artificially-synthesized but chemically and structurally equivalent biological analogues.

This new sub-field of Synthetic Biology needs a name to differentiate it from the other approaches to Functionally-Restorative Medicine. I suggest the designation ‘cellular prosthesis’.

References:

[1] Williams (1994). An introduction to the methods available for ion channel reconstitution. In D.C. Ogden (Ed.), Microelectrode Techniques: The Plymouth Workshop Edition. Cambridge: Company of Biologists.

[2] Tomich, J., & Montal, M. (1996). U.S. Patent No. 5,16,890. Washington, DC: U.S. Patent and Trademark Office.

[3] Matile, S., Som, A., & Sordé, N. (2004). Recent synthetic ion channels and pores. Tetrahedron, 60(31), 6405–6435. ISSN 0040–4020. doi:10.1016/j.tet.2004.05.052. Access: http://www.sciencedirect.com/science/article/pii/S0040402004007690

[4] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[5] Ibid., p. 6411.

[6] Ibid., p. 6416.

[7] Ibid., p. 6413.

[8] Ibid., p. 6412.

[9] Ibid., p. 6414.

[10] Ibid., p. 6425.

[11] Ibid., p. 6427.

[12] Ibid., p. 6416.

[13] Ibid., p. 6419.

[14] Ibid., p. 6419.

[15] Ibid., p. 6419.

[16] Ibid., p. 6419.

[17] Ibid., p. 6419.

[18] Ibid., p. 6421.

[19] Ibid., p. 6422.

[20] Ibid., p. 6422.

[21] Ibid., p. 6422.

[22] Ibid., p. 6422.

[23] Ibid., p. 6423.

[24] Ibid., p. 6423.

[25] Ibid., p. 6423.

[26] Ibid., p. 6426.

[27] Ibid., p. 6426.

[28] Ibid., p. 6427.

[29] Ibid., p. 6427.

[30] Ibid., p. 6427.

[31] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.

[32] Ibid., p. 4.

[33] Ibid., p. 4.

[34] Ibid., p. 4.

[35] Ibid., p. 4.

[36] Ibid., p. 7.

[37] Ibid., p. 8.

[38] Ibid., p. 7.

[39] Ibid., p. 7.

[40] Ibid., p. 7.

[41] Ibid., p. 7.

[42] Ibid., p. 7.

[43] Ibid., p. 8.

[44] Ibid., p. 8.

[45] Ibid., p. 9.

[46] Ibid., p. 9.

[47] Ibid., p. 9.

[48] Ibid., p. 10.

[49] Ibid., p. 10.

[50] Ibid., p. 10.

[51] Ibid., p. 10.

[52] Ibid., p. 11.

[53] Ibid., p. 12.

[54] Ibid., p. 12.

[55] Ibid., p. 12.

[56] Ibid., p. 12.

[57] Ibid., p. 12.

[58] Ibid., p. 13.

[59] Ibid., p. 13.

[60] Ibid., p. 14.

[61] Ibid., p. 14.

[62] Ibid., p. 14.

[63] Ibid., p. 15.

[64] Ibid., p. 15.

[65] Ibid., p. 15.

[66] Cortese, F., (2013). Concepts for Functional Replication of Biological Neurons. The Rational Argumentator. Access: http://www.rationalargumentator.com/index/blog/2013/05/conce…plication/

[67] Cortese, F., (2013). Gradual Neuron Replacement for the Preservation of Subjective-Continuity. The Rational Argumentator. Access: http://www.rationalargumentator.com/index/blog/2013/05/gradu…placement/

[68] Cortese, F., (2013). Wireless Synapses, Artificial Plasticity, and Neuromodulation. The Rational Argumentator. Access: http://www.rationalargumentator.com/index/blog/2013/05/wireless-synapses/

[69] Freitas Jr., R., (1998). “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell”. Artificial Cells, Blood Substitutes, and Immobil. Biotech. (26): 411–430. Access: http://www.ncbi.nlm.nih.gov/pubmed/9663339