Meta-materials, materials engineered to have properties not found in nature, such as negative refraction, are opening up interesting possibilities for future engineering. The discovery of negative refraction has led, for example, to the creation of invisibility cloaks that bend light and other electromagnetic radiation around an object, though such cloaks have so far been confined to cumbersome laboratory experiments with split-ring resonators and typically work only over a narrow slice of the spectrum.
A recent article in ExtremeTech drew attention to the world’s first quantum meta-material, created by a team of German materials scientists at the Karlsruhe Institute of Technology. It is believed such a quantum meta-material can overcome the main problem with traditional meta-materials based on split-ring resonators, which can only be tuned to a small range of frequencies and cannot operate across a useful slice of the spectrum. While exotic applications such as quantum birefringence and super-radiant phase transitions are cited, it is perhaps invisibility cloaks, which until very recently seemed the stuff of science fiction, that capture the imagination most.
Researchers at National Tsing Hua University in Taiwan have also made great strides toward quantum invisibility cloaks, and as the arXiv blog on MIT Technology Review recently commented, ‘invisibility cloaks are all the rage these days’. With such breakthroughs, these technologies may soon find mass take-up in consumer products and security, and they have abundant military uses as well, where the technology may find the financial stimulus to advance to its true capabilities. Indeed, researchers in China have been looking into how to mass-produce invisibility cloaks from materials such as Teflon. We’ll all be invisible soon.
The Century-Long Challenge to Respond to Fukushima
Emanuel Pastreich (Director)
Layne Hartsell (Research Fellow)
The Asia Institute
More than two years after an earthquake and tsunami wreaked havoc on a Japanese power plant, the Fukushima nuclear disaster is one of the most serious threats to public health in the Asia-Pacific, and the worst case of nuclear contamination the world has ever seen. Radiation continues to leak from the crippled Fukushima Daiichi site into groundwater, threatening to contaminate the entire Pacific Ocean. The cleanup will require an unprecedented global effort.
Initially, the leaked radioactive materials consisted of cesium-137 and 134, and to a lesser degree iodine-131. Of these, the real long-term threat comes from cesium-137, which is easily absorbed into bodily tissue—and its half-life of 30 years means it will be a threat for decades to come. Recent measurements indicate that escaping water also has increasing levels of strontium-90, a far more dangerous radioactive material than cesium. Strontium-90 mimics calcium and is readily absorbed into the bones of humans and animals.
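To put that 30-year half-life in concrete terms, the remaining fraction follows a simple exponential law; here is a minimal Python sketch (illustrative arithmetic only, using the figure above):

```python
# Fraction of cesium-137 remaining after t years, given its ~30-year half-life.
HALF_LIFE_YEARS = 30.0

def fraction_remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for t in (10, 30, 60, 100):
    print(f"after {t:3d} years: {fraction_remaining(t):.1%} remains")
# after  10 years: 79.4% remains
# after  30 years: 50.0% remains
# after  60 years: 25.0% remains
# after 100 years:  9.9% remains
```

Even a century from now, roughly a tenth of the cesium-137 already released would still be present.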
The Tokyo Electric Power Company (TEPCO) recently announced that it lacks the expertise to effectively control the flow of radiation into groundwater and seawater and is seeking help from the Japanese government. TEPCO has proposed setting up a subterranean barrier around the plant by freezing the ground, thereby preventing radioactive water from eventually leaking into the ocean—an approach that has never before been attempted in a case of massive radiation leakage. TEPCO has also proposed erecting additional walls now that the existing wall has been overwhelmed by the approximately 400 tons per day of water flowing into the power plant.
But even if these proposals were to succeed, they would not constitute a long-term solution.
A New Space Race
Solving the Fukushima Daiichi crisis needs to be considered a challenge akin to putting a person on the moon in the 1960s. This complex technological feat will require focused attention and the concentration of tremendous resources over decades. But this time the effort must be international, as the situation potentially puts the health of hundreds of millions at risk. The long-term solution to this crisis deserves at least as much attention from government and industry as do nuclear proliferation, terrorism, the economy, and crime.
To solve the Fukushima Daiichi problem will require enlisting the best and the brightest to come up with a long-term plan to be implemented over the next century. Experts from around the world need to contribute their insights and ideas. They should come from diverse fields—engineering, biology, demographics, agriculture, philosophy, history, art, urban design, and more. They will need to work together at multiple levels to develop a comprehensive assessment of how to rebuild communities, resettle people, control the leakage of radiation, dispose safely of the contaminated water and soil, and contain the radiation. They will also need to find ways to completely dismantle the damaged reactor, although that challenge may require technologies not available until decades from now.
Such a plan will require the development of unprecedented technologies, such as robots that can function in highly radioactive environments. This project might capture the imagination of innovators in the robotics world and give a civilian application to existing military technology. Improved robot technology would prevent the tragic scenes of old people and others volunteering to enter into the reactors at the risk of their own wellbeing.
The Fukushima disaster is a crisis for all of humanity, but it is a crisis that can serve as an opportunity to construct global networks for unprecedented collaboration. Groups or teams aided by sophisticated computer technology can start to break down into workable pieces the immense problems resulting from the ongoing spillage. Then experts can come back with the best recommendations and a concrete plan for action. The effort can draw on the precedents of the Intergovernmental Panel on Climate Change, but it must go far further.
In his book Reinventing Discovery: The New Era of Networked Science, Michael Nielsen describes principles of networked science that can be applied on an unprecedented scale. The breakthroughs that come from this effort can also be used for other long-term programs such as the cleanup of the BP Deepwater Horizon oil spill in the Gulf of Mexico or the global response to climate change. The collaborative research regarding Fukushima should take place on a very large scale, larger than the sequencing of the human genome or the maintenance of the Large Hadron Collider.
Finally, there is an opportunity to entirely reinvent the field of public diplomacy in response to this crisis. Public diplomacy can move from a somewhat ambiguous effort by national governments to repackage their messaging to a serious forum for debate and action on international issues. As public diplomacy matures through the experience of Fukushima, we can devise new strategies for bringing together hundreds of thousands of people around the world to respond to mutual threats. Taking a clue from networked science, public diplomacy could serve as a platform for serious, long-term international collaboration on critical topics such as poverty, renewable energy, and pollution control.
Similarly, this crisis could serve as the impetus to make social networking do what it was supposed to do: help people combine their expertise to solve common problems. Social media could be used not as a means of exchanging photographs of lattes and overfed cats, but rather as an effective means of assessing the accuracy of information, exchanging opinions between experts, forming a general consensus, and enabling civil society to participate directly in governance. With the introduction into the social media platform of adequate peer review—such as that advocated by the Peer-to-Peer Foundation (P2P)—social media can play a central role in addressing the Fukushima crisis and responding to it. As a leader in the P2P movement, Michel Bauwens suggests in an email, “peers are already converging in their use of knowledge around the world, even in manufacturing at the level of computers, cars, and heavy equipment.”
Here we may find the answer to the Fukushima conundrum: open the problem up to the whole world.
Peer-to-Peer Science
Making Fukushima a global project that seriously engages both experts and common citizens in the millions, or tens of millions, could give some hope to the world after two and a half years of lies, half-truths, and concerted efforts to avoid responsibility on the part of the Japanese government and international institutions. If concerned citizens in all countries were to pore through the data and offer their suggestions online, there could be a new level of transparency in the decision-making process and a flourishing of invaluable insights.
There is no reason why detailed information on radiation emissions and the state of the reactors should not be publicly available in enough detail to satisfy the curiosity of a trained nuclear engineer. If the question of what to do next comes down to the consensus of millions of concerned citizens engaged in trying to solve the problem, we will have a strong alternative to the secrecy that has dominated so far. Could our cooperation on the solution to Fukushima be an imperative to move beyond the existing barriers to our collective intelligence posed by national borders, corporate ownership, and intellectual property concerns?
A project to classify stars throughout the universe has demonstrated that if tasks are carefully broken up, it is possible for laypeople to play a critical role in solving technical problems. In the case of Galaxy Zoo, anyone who is interested can qualify to go online, classify different kinds of stars situated in distant galaxies, and enter the information into a database. It’s all part of a massive effort to expand our knowledge of the universe, which has been immensely successful and has demonstrated that there are aspects of scientific analysis that do not require a Ph.D. In the case of Fukushima, if an ordinary person examines satellite photographs online every day, he or she can become more adept than a professor at identifying unusual flows carrying radioactive materials. There is a massive amount of information related to Fukushima that requires analysis, and at present most of it goes virtually unanalyzed.
An effective response to Fukushima needs to accommodate both general and specific perspectives. It will initially require a careful and sophisticated setting of priorities. We can then set up convergence groups that, aided by advanced computation and careful efforts at multidisciplinary integration, could respond to crises and challenges with great effectiveness. Convergence groups can also serve as a bridge between the expert and the layperson, encouraging a critical continuing education about science and society.
Responding to Fukushima is as much about educating ordinary people about science as it is about gathering together highly paid experts. It is useless for experts to come up with novel solutions if they cannot implement them. But implementation can only come about if the population as a whole has a deeper understanding of the issues. Large-scale networked science efforts that are inclusive will make sure that no segments of society are left out.
If the familiar players (NGOs, central governments, corporations, and financial institutions) are unable to address the unprecedented crises facing humanity, we must find ways to build social networks, not only as a means to come up with innovative concepts, but also to promote and implement the resulting solutions. That process includes pressuring institutions to act. We need to use true innovation to pave the way to an effective application of science and technology to the needs of civil society. There is no better place to start than the Internet and no better topic than the long-term response to the Fukushima disaster.
Most of us know helium as that cheap inert lighter-than-air gas we use to fill party balloons and inhale to raise the pitch of our voices as a party trick for kids. However, helium has much more important uses for humanity: medical applications (e.g. MRI scanners), military and defense (submarine detectors use liquid helium to clean up noisy signals), next-generation nuclear reactors, space shuttles, solar telescopes, infra-red equipment, diving, arc welding, particle physics research (the super-magnets in particle colliders rely on liquid helium), the manufacture of many digital devices, growing silicon crystals, and the production of LCDs and optical fibers [1].
The principal reason helium is so important is its ultra-low boiling point and inert nature, which make it the ultimate coolant of the human race. As the isotope helium-3, helium is also used in nuclear fusion research [2]. However, Earth’s supplies of helium are being used at an unprecedented rate and could be depleted within a generation [4]; at the current rate of consumption we will run out within 25 to 30 years. Because helium is thought of as a cheap gas, it is often wasted. However, those who understand the situation, such as Prof Richardson, co-chair of a recent US National Research Council inquiry into the coming helium shortage, warn that the gas is cheap not because the supply is inexhaustible, but because of the Helium Privatisation Act passed in 1996 by the US Congress.
Helium only accounts for 0.00052% of the Earth’s atmosphere, and the majority of the helium harvested comes from beneath the ground, extracted from minerals or tapped from natural gas deposits. This makes it one of the rarest elements of any form on the planet. However, the Act required the helium stores [4] held underground near Amarillo in Texas to be sold off at a fixed rate by 2015, regardless of the market value, to pay off the original cost of the reserve. The Amarillo storage facility holds around half the Earth’s stocks of helium: around a billion cubic meters of the gas. The US currently supplies around 80 percent of the world’s helium, and once this supply is exhausted one can expect the cost of the remaining helium on Earth to increase rapidly, since helium is, for all practical purposes, a non-renewable resource.
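As a rough cross-check, a back-of-envelope sketch in Python using only the figures quoted above (the ~1 billion cubic meters at Amarillo, the claim that this is about half of Earth's stocks, and the 25-30 year depletion estimate) gives the consumption rate those figures imply; the inputs are the article's round numbers, not measured data:

```python
# Implied helium draw-down from the figures quoted in the text (illustrative only).
amarillo_m3 = 1.0e9                  # ~1 billion cubic meters stored near Amarillo
total_stock_m3 = amarillo_m3 / 0.5   # Amarillo is said to hold about half of Earth's stocks
years_remaining = 27.5               # midpoint of the quoted 25-30 year estimate

implied_annual_use_m3 = total_stock_m3 / years_remaining
print(f"Implied consumption: ~{implied_annual_use_m3 / 1e6:.0f} million cubic meters per year")
# Implied consumption: ~73 million cubic meters per year
```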
There is no chemical way of manufacturing helium; the supplies we have originated in the very slow radioactive alpha decay that occurs in rocks. It has taken around 4.5 billion years for the Earth to accumulate our helium reserves, which we will have exhausted within about a hundred years of the US National Helium Reserve’s establishment in 1925. When this helium is released into the atmosphere, in helium balloons for example, it is lost forever, eventually escaping into space [5][6]. So what shall we do when this crucial resource runs out? In some cases liquid nitrogen (−195°C) may be adopted as a replacement, but liquid nitrogen is trickier to work with as a standalone coolant (its triple point and melting point lie at around −210°C) and it cannot reach the extreme low temperatures required; liquid helium (−269°C) is used because it remains liquid at those temperatures. No more helium means no more liquid helium to cool NMR (nuclear magnetic resonance) apparatus and machines such as MRI scanners. One wonders, therefore, whether we must look towards space exploration to replenish this rarest of resources on Earth.
Helium is actually the second most abundant element in the Universe, accounting for as much as 24 percent of the Universe’s mass [7], mostly in stars and the interstellar medium. Mining the gas giants, which hold a great abundance of helium, has been proposed in a NASA memorandum on the topic [8], and it has been suggested that such atmospheric mining may be easier than mining the surfaces of outer-planet moons. While that memorandum focused on the possibility of mining helium-3 from the atmosphere of Jupiter, with its inherent complications of delta-V and radiation exposure, a more appropriate destination for mining regular helium may be the more placid ice giant Uranus (not considered in the memorandum because the predicted concentration of helium-3 in the helium portion of the Uranian atmosphere is quite small). Leaving aside specific needs for helium-3, which can be mined in sufficient volume much closer, on our Moon [9], a large-scale mining mission to Uranus for the more common non-radioactive isotope could ensure the Earth does not have to compromise so many important sectors of modern technology in the near future due to an exhaustion of our helium stock. Relatively lower wind speeds (900 km/h, comparing favorably to 2,100 km/h on Neptune), a lower escape requirement (surface gravity 0.886 g, escape velocity 21.3 km/s) [10] and an abundance of helium in its atmosphere (15 ± 3%) could make it a more attractive option, despite the distance (approx. 20 AU), extreme cold (50-70 K) and radiation belts involved. Given the complexities of radiation, distance, travel time and temperature for a human-piloted cargo craft, such a mission would be better suited to an automated, remote-controlled craft similar to today’s orbiter probes, even though this would introduce an additional set of challenges in AI and remote control.
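To put the roughly 20 AU distance in perspective, a minimal sketch (assuming idealized circular, coplanar orbits and a classic minimum-energy Hohmann transfer, with no gravity assists) estimates the one-way travel time via Kepler's third law:

```python
# Back-of-envelope one-way Hohmann transfer time from Earth (1 AU) to Uranus (~19.2 AU).
r_earth_au = 1.0      # Earth's orbital radius
r_uranus_au = 19.2    # Uranus' mean orbital radius (the "approx. 20 AU" above)

a_transfer = (r_earth_au + r_uranus_au) / 2.0   # semi-major axis of the transfer ellipse, in AU
period_years = a_transfer ** 1.5                # Kepler's third law in AU / years / solar masses
transfer_years = period_years / 2.0             # the transfer is half an orbit of that ellipse

print(f"One-way Hohmann transfer time: ~{transfer_years:.0f} years")   # ~16 years
```

A minimum-energy cargo run would therefore spend on the order of sixteen years in transit each way, which is a further argument for automation over human piloting.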
However, we have a Catch-22: NASA’s space programs themselves use the gas to support their launch vehicles [12]. Liquid fuels are volatile and packed with corrosive material that could destroy a spacecraft’s casing, so a craft’s tanks and lines are purged and pressurized with inert helium gas. If helium could be replaced in such vehicles with some alternative, and advances in space transportation made to significantly increase the cargo such ships could carry over interplanetary distances, perhaps a case could be made for such ambitious gas-mining missions, though at present, given current NASA expenditure, this would seem like fantasy [13]. Realistic proposals for the exploration of Uranus [14] fall far short of these requirements. Helium is a rare and unique element we need for many industrial purposes, but if we don’t conserve and recycle it, we are dooming mankind to a future shortage, with little helium left for future generations here on Earth [15]. For now, replenishing it from space seems like a rather long shot.
Neo-Democracy: The Evolution of the Democratic Republic
Dustin Ashley
Abstract
This essay presents a new political paradigm based upon concepts that originate from direct democracy, meritocracy, technocracy, and egalitarian ideology. I systematically redesign the common political system so that these concepts can complement each other and work as a synergistic whole. The main idea is to recreate the direct democratic system made famous by the ancient Athenians, repurposing it for the current era and for many generations to come.
1. Introduction
Karl Marx wrote that “The history of all hitherto existing society is the history of class struggles.” (Marx and Engels 1848) This is true in the case of many rising world powers, where the rich often take advantage of the working class. The American Gilded Age, for example, illustrates what happens when laissez-faire liberalism becomes rampant. During this era, politicians set up “political machines” to keep themselves and their allies in office for as long as they wished, while companies took control of single markets and created monopolies that let them do whatever they pleased. One major proponent of this version of the free-market economy was William Graham Sumner, whose book What Social Classes Owe to Each Other (1884) defended laissez-faire while opposing assistance to the poor. This type of philosophy was one major reason for the rise of the plutocracy and corporatocracy that still resonate through America to this day.
To keep this from happening again, emerging nations must learn from these past follies and make sure that they are not repeated. To prevent such a system from arising, a form of government must be set up in which every person has equal opportunity to access the nation’s resources and no one is able to usurp another’s ability to obtain them. This means a government in which offices are filled only by those who prove themselves worthy via an administered exam, and in which national issues are solved with problem-solving strategies modeled on the scientific method. With a 21st-century mindset and the aid of our finest technology, we can create a more efficient and practical form of government than ever before.
2. Basic Political Structure
This new political paradigm is a technologically aided form of direct democracy that combines elements of technocracy, meritocracy, and egalitarian ideology. Its main inspiration is Athenian democracy, in which citizens did not elect representatives but voted directly on matters themselves. Even though the Athenians did not grant suffrage to women, slaves, children, or immigrants, they imposed no class requirement and citizens often participated in large groups. These aspects carry over into this paradigm, in which there are no representatives and anybody of any class can participate.
In addition, technology can be used to supplement the political process and improve government to its highest state of efficiency. This includes using the Internet to enable citizens to become more active in making decisions for their government. Such claims are supported by Ann Macintosh, who coined the term “E-Democracy” for the use of technology as a supplement to democracy. She states that “E-democracy is concerned with the use of information and communication technologies to engage citizens, support the democratic decision-making processes and strengthen representative democracy.” (Macintosh 2006) Not only does this allow for more active participation in political affairs, it can also lead to more efficient solutions to troubling problems. When technology is spliced with democracy, it is possible that democracy can evolve as technology does.
It is important that every citizen is given equal opportunity to pursue their interests without the lingering fear that something will inhibit them from achieving their goals. It is a tenet of egalitarian thought that every person deserves an equal chance, regardless of their form, ethnic background, or intellect. This is true in the works of both Karl Marx and John Locke. John Locke states that all people were created equal and that everyone has a natural right to defend his “Life, health, Liberty, or Possessions.” (Locke 1690) Karl Marx, on the other hand, believed that there should be an equal distribution of a nation’s wealth to every citizen. Even though their philosophies differ, they both held a view of egalitarianism that is still relevant today. When wealth can be distributed equally to everyone while everybody retains the ability to defend their basic human rights, there lies the key to an egalitarian society.
With the synergistic combination of egalitarianism and technological democracy, you will find technocracy. This peculiar form of government relies on a nation’s leaders being scientists, engineers, and others with compatible skills rather than politicians and businessmen. (Berndt 1982) These technocrats use the scientific method when approaching social problems, rather than political or philosophical approaches. They are voted in on the basis of who is most qualified, not who has the most money or the best connections. This form of government is partially implemented in the Communist Party of China, since most of its leaders are engineers. The Five-Year Plans of the People’s Republic of China have enabled it to plan ahead in a technocratic fashion to build projects such as the National Trunk Highway System, the China high-speed rail system, and the Three Gorges Dam. (Andrews 1995) By implementing technocracy in a nation’s government, it is possible for the nation to become prolific and prosperous.
3. The Voting Masses
The voting masses comprise every individual who is eligible to vote, so long as they are a free person and of age to make a responsible choice. While the age requirement is subjective and open to discussion, a free individual is simply one who is not incarcerated. The voting masses do not have any political or governmental responsibilities and may vote if they choose to do so. There are no further requirements, and they possess the majority of the political power. This is evident in their ability to influence their nation by approving or denying any laws that are presented to them. In summation, every individual has the choice to be involved in their nation’s government as much or as little as they want.
4. EDD
All new sovereigns and bills must be approved by the voting masses before they are enacted. This is made possible through a form of direct democracy called electronic direct democracy, or EDD, which allows the common people to be involved in the legislative process and removes the need for a legislative branch in government. The Florida Institute of Technology is currently researching and developing the technology that supports EDD, while implementing it in their student organizations. (Kattamuri et al 2005) If proven successful, this further dissolves the need for a representative democracy while giving more power to the common people.
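As a toy illustration of the approval logic described here (a minimal sketch only, with hypothetical names; this is not the Florida Tech system or any real EDD platform), the core of an EDD vote is just one ballot per eligible voter and a simple-majority test:

```python
# Minimal sketch of an EDD-style approval vote: one ballot per voter, simple majority.
from dataclasses import dataclass, field

@dataclass
class Bill:
    title: str
    votes: dict = field(default_factory=dict)   # voter_id -> True (approve) / False (reject)

    def cast_vote(self, voter_id: str, approve: bool) -> None:
        self.votes[voter_id] = approve           # re-voting simply overwrites the earlier ballot

    def approved(self) -> bool:
        yes = sum(self.votes.values())
        return yes > len(self.votes) / 2         # simple majority of votes cast

bill = Bill("Example infrastructure bill")
bill.cast_vote("citizen-001", True)
bill.cast_vote("citizen-002", False)
bill.cast_vote("citizen-003", True)
print(bill.title, "approved:", bill.approved())  # approved: True
```

A real system would of course also need voter authentication, ballot secrecy, and auditability, which is precisely where the research effort lies.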
5. Sovereigns of the State
The Sovereigns of the State are a group of individuals who coordinate the different aspects of a nation while addressing the needs of the people. Since there are too many roles for any one sovereign to fulfill, the work is divided among multiple sovereigns who cooperate ad hoc. Each sovereign has a different duty to fulfill and must do so in an effective and productive manner for the sake of the nation. These include:
Sovereign of the Military:
The Sovereign of the Military, or High General, is responsible for commanding the nation’s military during times of war. The individual has the capability to address the nation and declare war, but the declaration must be approved by the voting masses before it is enacted. The High General regulates the military and makes sure that the nation is prepared for when an attack is imminent. In order to become the Sovereign of the Military, one must be an experienced soldier of high rank who understands battlefield tactics and can lead the nation during times of war.
Sovereign of the Consensus:
The Sovereign of the Consensus, or Head Chairman, plays a dormant role as a peacekeeper during times when new sovereigns are voted in. The Head Chairman also serves as a tiebreaker for when a stalemate occurs during the voting process.
Sovereign of Energy:
The Sovereign of Energy focuses on energy production and distribution while overseeing the development of more efficient energy sources.
Sovereign of Treasury:
The Sovereign of Treasury, or National Economist, focuses on financial and monetary matters and is in charge of manufacturing currency. The National Economist is responsible for formulating economic and tax policies and managing public debt. The National Economist must hold an advanced degree in economics and have experience in financial matters.
Sovereign of Education:
The Sovereign of Education, or National Educator, is responsible for education policies in public schools and institution accreditation. The National Educator must have a degree in education with experience in teaching at both public schools and universities.
Sovereign of Foreign Affairs:
The Sovereign of Foreign Affairs, or Chief Diplomat, is responsible for maintaining stable relations with other nations and other diplomatic duties. The Chief Diplomat is also responsible for issues pertaining to foreign policy. In order to become eligible for this position, one must have experience with matters dealing with diplomacy and foreign affairs.
Sovereign of Labour:
The Sovereign of Labour enforces laws involving unions, the workplace, and any business-person interactions. This also includes maintaining minimal unemployment within the nation.
Sovereign of National Affairs:
The Sovereign of National Affairs is responsible for issues pertaining to land management, landmark preservation, natural disaster response, immigration policies, and law enforcement policies.
Sovereign of Human Services:
The Sovereign of Human Services, or Head Physician, is responsible for issues concerning disease control, advancement in medical technologies, final approval of pharmaceutical drugs and medicines, food safety and management, nutrition, and welfare. To be eligible for this position, the aspirant must have a medical degree and experience in the medical field.
Judicial Sovereign:
The Judicial Sovereign is responsible for reviewing all bills before they are enacted as laws. This includes making sure they do not go against the principles written down in the nation’s primary social contract, i.e. the constitution. The Judicial Sovereign also serves as Head Judge during trials for high crimes, such as murder and fraud. To be eligible for this position, the applicant must already be a licensed attorney and/or judge with experience in legal matters.
These sovereigns can only be placed into office by merit alone, not by standing within the community. This is done by giving applicants time to distribute a list of their accomplishments along with their criminal record. During this period, the voting masses can decide who they believe is fit for the job. These measures ensure that the voting masses vote into office those whom they think are fit for the positions, and not simply by “popular vote”. To further ensure that the applicants are not committing acts of fraud, their paperwork is first reviewed by a group of volunteers who can verify the authenticity of the applicants and their paperwork. The identities of the volunteers are kept anonymous to ensure that they cannot be bribed or intimidated by the applicants. The volunteers form a discipline-specific administration system and are not under the influence of any focus group. In order to be selected, they must show that they are experts in their selected field and are not already under any influence.
6. Judicial System Within The Political System
In a governmental sense, the judicial system is used to declare whether a bill is protected by the nation’s social contract or goes against it. Typically, if a bill goes against the social contract it will be vetoed and terminated. The judicial branch serves as a “political buffer” between the legislative and executive branches, which gives its leaders considerable power. In this framework, the judicial branch works as a mediator between the voting masses and the sovereigns. To keep matters fair, the members of the judicial branch are to be impartial and fair towards both sides.
7. Conclusion
This new political paradigm serves only as a framework for any political system and not as a system in itself. It can be modified, expanded, or condensed as needed as long as the main idea is not lost. This may serve as the next step in constructing a new political system based on progressive thought and pro-technology ideology. Whether it serves as a theoretical concept or someone applies these ideas to their organization, this concept is meant for anyone to read.
Works Cited
Marx, K., and Engels, F. 1848. The Communist Manifesto
Sumner, W.G. 1884. What Social Classes Owe to Each Other
Macintosh, A. 2006. Characterizing E-Participation in Policy-Making
Locke, J. 1690. Second Treatise of Government
Berndt, E.R. 1982. From Technocracy to Net Analysis: Engineers, Economists, And Recurring Energy Theories of Value. Studies in Energy and the American Economy, Discussion Paper No. 11
Andrews, J. 1995. Rise of the Red Engineers
Kattamuri, S. et al. 2005. Supporting Debates Over Citizen Initiatives
Originally posted as Part II of a four-part introductory series on Bitcoin on May 7, 2013 in the American Daily Herald. See the Bitcoin blog for all four articles.
The emergence of money and its importance in enabling trade between people has been well researched and documented in the literature of the Austrian School of economics – Theory of Money and Credit by Ludwig von Mises and Man, Economy and State by Murray N. Rothbard being prime examples. The contribution of the Austrian greats to the understanding of money and its origin made clear exactly what money is (e.g. the most marketable commodity), the different types of media that are employed in exchange between people (e.g. commodity money, credit money, fiat money and money substitutes) and a theoretical explanation for their origin (the Regression Theorem). The Austrian School has also given arguably the most convincing analysis of the relationship between the money type in use, the manner by which it is controlled and the business cycle – emphasizing the importance of sound money. But except for a few sparse outliers, what the Austrian School has yet to do is fully recognize Bitcoin as a valid scholarly and academic topic. With this article, I hope to contribute to its recognition.
Money’s characteristics
Money enabled people in the early stages of civilization to go from direct exchange, with difficulties such as the double coincidence of wants, to indirect exchange. This improved mechanism paved the way for man’s specialization in his tasks, thereby enabling the division of labor within society, since each specialized laborer was able to trade his goods for others indirectly with the use of a medium of exchange. Money has taken many forms, but there are certain characteristics all forms should have. Aristotle, for instance, provided the following four:
Durable – The item must remain usable and retain its characteristics, for which it is valued, over long periods of time (e.g. shouldn’t fade, corrode, rot, etc).
Portable – One should be able to carry it upon their person. A related point is that it would be desirable to have a high value per unit weight, making large quantities portable too.
Divisible – By having uniformity of quality or homogeneity, the item should retain its characteristics when divided into smaller parts or when recombined to a larger unit. Thus, a similar point is the fungibility of the item, meaning that the units can be substituted for one another.
Intrinsically Valuable – The intended meaning is that it should have value as a commodity regardless of its property as money, although as I argued in a previous article, value is subjective and therefore extrinsic to the item, so it cannot in itself be intrinsically valuable. A related point is that the item, ideally, would be rare and certainly not subject to unlimited reproducibility – meaning it should be scarce.
Though Aristotle did not specifically mention fungibility, scarcity or other points such as recognizability, stability of supply, malleability etc., these points generally cover the qualities of good money. The fact that there are monies out there (e.g. fiat money) that so blatantly lack an important characteristic (e.g. not being subject to infinite reproducibility) makes the Mises Regression Theorem so interesting, in that it explains how such a money came about.
Man’s desire for convenience
Mises defined money, in its narrower sense, as taking three forms: commodity money, credit money, and fiat money. In its broader sense, money substitutes like fiduciary media are also used. Of all these forms of money, the most convenient are fiat money, credit money and money substitutes. These forms can be represented by pieces of paper (e.g. banknotes or contracts) and therefore, as long as there is trust in the issuing entity or in the counterparty, these monetary forms will be accepted ‘as good as’ the money that backs them or the money that is promised in the contract. Banknotes, token money and the like stemmed from the fact that the common man did not want to store large amounts of precious commodities in his home or carry them on his person. Banks stored the commodities and issued redeemable notes instead. Let’s face it, humans choose the path of least resistance, and so convenience is desirable.
The unfortunate situation that arose is that when banks (or their ‘money warehouse’ predecessors) realized that not everyone wants all of their stored gold at once, they started issuing multiple banknotes backed by the same unit of money stored. This fraud became pervasive and was eventually legally licensed by the state. So while ‘hard currencies’ are good, their lack of convenience has led, as a matter of historical fact, to fractional reserve banking. This practice and the resulting expansion of the monetary base introduce anomalies into the economy and bring about the business cycle.
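To make the mechanics concrete, here is a stylized sketch (purely illustrative numbers, not historical data) of how redeemable claims can multiply on a fixed reserve once notes are lent out and redeposited under a given reserve ratio:

```python
# Stylized fractional-reserve expansion: with reserve ratio r, each deposit supports
# new notes/loans of (1 - r), which return as fresh deposits, and so on.
# Total claims approach base / r (a geometric series); numbers are illustrative.
base_money = 100.0
reserve_ratio = 0.10

total_claims, deposit = 0.0, base_money
for _ in range(200):                     # iterate the lend-and-redeposit cycle
    total_claims += deposit
    deposit *= (1 - reserve_ratio)       # the lent-out fraction comes back as a new deposit

print(f"Total claims on {base_money:.0f} units of base money: ~{total_claims:.0f}")  # ~1000
print(f"Theoretical limit: {base_money / reserve_ratio:.0f}")                        # 1000
```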
Another aspect inherent to commodity money (and almost all other money types) is that the payment system has always been separate from the money itself. Whether carrying a bag of coins in one’s pocket or arranging for an armored van, payment requires delivery of money. Banks and clearing houses took on the role of performing this service, charging lucrative transaction fees in the process. Here, too, it became more convenient to use credit money or internet banking, where one just transfers the information about the transaction and where the effort is the same regardless of the sum involved. No physical asset can be transferred instantaneously and without effort.
As desirable as physical commodities such as gold and silver are, the fact that they become increasingly less convenient the more you have of them has turned out to be their Achilles’ heel.
Division of labor and specialization of tools
As we can see from the above arguments, while commodity money has been the soundest of options, it is not without its flaws. But, remember, this is not an unusual phenomenon. Being self-sufficient and growing one’s own food is also prudent, yet most will concur it has its disadvantages. Humans have discovered that division of labor and specialization makes everyone better off. Specialization, though, is most effective when the tools one uses are also custom-made for the task at hand. Imagine using a gardening trowel as a ladle for your soup, or a battle axe as a butcher knife… This is a facetious comment, to be sure, but why then must we ‘make do’ with an ornamental commodity or a block of highly conductive metal as money? Humans once used flint to start fires because that is what nature provided. Surely, we agree a lighter is much better. Why should we not seek to invent a tool to facilitate monetary transactions, call it money, which would cover the characteristics noted above (Aristotelian or others) as ideally as can be? Then just set it free and see if it acquires value through a catallactic process, much like gold and silver did in the past. As Rothbard said in relation to gold: “If gold, after being established as money, were suddenly to lose its value in ornaments or industrial uses, it would not necessarily lose its character as a money”. If one invents money and it establishes itself, who cares if it has no other purpose?
Whether for the reason of making a more perfect money or just to make a digital form of it, an unknown hacker (or group of hackers) brilliantly devised a new money — Bitcoin. We see that it has already acquired some value and a quick search will show an ever increasing number of businesses willing to trade in Bitcoin. It is already a medium of exchange for a growing number of countercultures. Whether it continues to gather momentum is an empirical question, one for which only time has the answer. But let us not forget this is a free market phenomenon. Nothing about its ownership, mining or its use violates private property rights. As with any good on a truly free market – the only test it must withstand is the test of marketability and popularity within the confines of the non-aggression principle and private property rights.
But does it serve customers’ needs?
By far the best and most academically rigorous description of Bitcoin I’ve seen has been given by Peter Šurda in his Master’s thesis. Konrad Graf has also written extensively on the subject with clarity and insight. I cannot do justice here to the arguments they put forward, but I share their opinion that Bitcoin has superior qualities with respect to the characteristics of money.
Durable – Bitcoin can exist in any number of forms, be it physical or intangible (yes, you can actually have a Bitcoin coin or card). It can be printed on paper or committed to memory. But at its core, it is abstract and can be made to be as secure as the network it depends upon. Its peer-to-peer nature makes it all but impossible for governments to shut down.
Portable – If it exists in its intangible form, there is nothing more portable than 1s and 0s. A million bitcoins weigh as much as a millionth of one. It is also the most easily transportable good: no shipping costs, insurance, etc. It is, after all, its own payment system. In fact, it is so portable that you can carry backup copies with you or leave them with trusted parties, hidden on USB keys and on anonymous servers. This is the only form of money that could pose an insurmountable challenge to those wishing to confiscate your money.
Divisible – Each coin is divisible into 100 million smaller units, meaning that even if a bitcoin were to rise to $1 million, we would still have the equivalent of a penny (see the quick arithmetic sketch after this list). Likewise, Bitcoin is perfectly fungible.
Scarce (Intrinsically Valuable) – Bitcoin is rare (total quantity will not exceed 21 million units) and is not subject to unlimited reproducibility. This is by design and due to its complete decentralization, there is no one entity that can override this characteristic.
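The divisibility and scarcity claims above reduce to simple arithmetic; a quick sketch (illustrative only):

```python
# Quick arithmetic on Bitcoin's divisibility and capped supply.
SATOSHIS_PER_BTC = 100_000_000        # each coin divides into 100 million units
MAX_SUPPLY_BTC = 21_000_000           # protocol-enforced supply cap

total_units = SATOSHIS_PER_BTC * MAX_SUPPLY_BTC
print(f"Maximum supply: {total_units:,} smallest units")        # 2,100,000,000,000,000

# Even at $1,000,000 per bitcoin, the smallest unit remains small change:
price_usd = 1_000_000
print(f"Value of one smallest unit at $1M/BTC: ${price_usd / SATOSHIS_PER_BTC:.2f}")  # $0.01
```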
Šurda additionally showed how it is superior in logistics, manipulation, authentication, transaction costs of property rights, counterparty risk, and other respects. Graf noted its superiority also in purchasing power and stability of supply, lending itself to become a catalyst for deflation (in the good way). And since Bitcoin isn’t a raw material in the creation of other products, no premiums are charged for industrial use; therefore no lower-order goods’ costs of production are affected by its potentially ever-increasing value in a deflationary environment.
Best of all, this is a commodity money that does not need money substitutes and it doubles as its own payment system. This leads to two very desirable outcomes. Banks would no longer be needed as ‘money warehouses’. Individuals could store their own bitcoins, much like they store their cat photos on their hard drives or online. Many libertarians wonder how to make fractional reserve banking illegal. You would not need to – it would come about naturally since this ‘service’ would no longer be required. Banks’ role in the business of transferring money would dwindle as well, since paying in Bitcoin is as easy as sending an email. Transaction fees and capital controls would become a thing of the past. Banks would therefore revert to providing useful services, such as pairing up those who want a loan with those who have money to lend. They would finally be forced to innovate, same as businesses across the spectrum have been doing since the dawn of civilization.
Conclusion
Bitcoin has the makings of becoming money in its own right. As of now it is a medium of exchange for a limited group of individuals, but it has already acquired value and is already being purchased for its exchange value. Bitcoin is a free market phenomenon. The value it has was not forced upon anyone and its use is not protected by legal decree.
Professor Hoppe notes the following: “Economic theory has nothing to say as to what commodity will acquire the status of money. Historically, it happened to be gold. But if the physical make-up of our world would have been different or is to become different from what it is now, some other commodity would have become or might become money. The market will decide.”
What becomes money is indeed an empirical question that we can only analyze with hindsight. From the great work done by the economists quoted above, we can uncover another empirical fact – that money has arisen in a spontaneous manner, through the evolution of successive generations of human actors. But let us not conclude that money can only arise spontaneously – it can be purposefully invented and then left to the market to adopt or reject. What we are witnessing is the adoption of a new invented form of money. Money, after all, is a tool to facilitate economic transactions. We must accept – and build into our theories – the possibility of using a tool that is custom-built for this purpose. We must not merely denounce a form of money because its building blocks are not naturally provided or because it does not have other uses. If it meets and exceeds all of the characteristics of money, if it adheres to principles of economic scarcity and decentralization, and if actors on the free market see the value in it and freely exchange goods and services for it – we need to accept this, too, as having potential of being included in our books and scholarly articles alongside the time-honored alternatives. Let us have academic debates about its practical or economic merits and flaws. This is not a winner-take-all situation. Competition in currencies is just as valuable as competition in other areas. Let’s just remember that Bitcoin could well help us achieve a better and freer society with a sounder economy.
My paper “New Evidence, Conditions, Instruments & Experiments for Gravitational Theories” was finally published by the Journal of Modern Physics, Vol. 8A, 2013. That is today Aug 26, 2013.
Over the last several years I have been compiling a list of inconsistencies in contemporary physics. This paper documents 12 of them. If I’m correct there will, sooner or later, be a massive rewrite of modern physical theories, because I do not just criticize contemporary theories but critique them, i.e. provide positive suggestions, based on empirical data, on how our theories need to be modified.
The upshot of all this is that I was able to propose two original, new experiments, never before contemplated in physics journals. Both involve new experimental devices, and one is so radically new that it is unthinkable. This is the gravity wave *telescope*.
The new physics lends itself to new and different forms of weaponization achievable within the next few decades, with technologies *not* predicted in science fiction. How about that?
I have deliberately left this weaponization part vague because I want to focus on the propulsion technologies. Definitely not something string or quantum-gravity theories can even broach.
We will achieve interstellar travel in my lifetime, and my paper points to where to research this new physics and new technologies.
Paper Details:
Title: New Evidence, Conditions, Instruments & Experiments for Gravitational Theories
News this past week on Fukushima has not been exactly reassuring, has it? Meanwhile the pro-nuclear lobby keeps counting bananas. Here I’ve gathered together some of the recent news articles on the unfolding crisis. Interested to hear some comments on this one.
The arXiv blog on MIT Technology Review recently reported a breakthrough, ‘Physicists Discover the Secret of Quantum Remote Control’ [1], which led some to comment on whether this could be used as an FTL communication channel. In order to appreciate the significance of the paper on Quantum Teleportation of Dynamics [2], one should note that experiments have already determined that any influence coordinating the behaviour of a quantum-entangled pair would have to propagate *at least* 10,000 times faster than the speed of light [3]. The next big communications breakthrough?
If this were to pan out, it could be a major breakthrough for long-distance communications in space exploration, as it would resolve several problems: if a civilization were eventually established on a star system many light years away, for example on one of the recently discovered Goldilocks Zone super-Earths in the Gliese 667C star system, then communications back to people on Earth might after all be… instantaneous.
The implications do not stop there, either. As recently reported in The Register [5], researchers at the Hebrew University of Jerusalem in Israel have shown that quantum entanglement can link photons across both TIME AND SPACE [6]. Their recent paper, ‘Entanglement Between Photons that have Never Coexisted’ [7], describes how photon-to-photon entanglement can connect photons with counterparts in their past or future, opening a window onto how one might engineer technology to communicate not just instantaneously across space but across space-time.
Whilst in the past many have questioned what benefits have been gained from quantum physics research, and in particular from large research projects such as the LHC, it would seem that the field of quantum entanglement may be one of the big pay-offs. While it has yet to be categorically proven that quantum entanglement can be used as a communication channel, and the majority opinion dismisses it, one can expect much activity in quantum entanglement research over the next decade. It may yet spearhead the next technological revolution.
In this essay I argue that technologies and techniques used and developed in the fields of Synthetic Ion Channels and Ion Channel Reconstitution, which have emerged from supramolecular chemistry and bio-organic chemistry over the past four decades, can be applied to gradual cellular (and particularly neuronal) replacement. The result would be a new interdisciplinary field that applies such techniques and technologies towards the goal of the indefinite functional restoration of cellular mechanisms and systems, as opposed to their currently proposed uses: aiding in the elucidation of cellular mechanisms and their underlying principles, and serving as biosensors.
In earlier essays (see here and here) I identified approaches to the synthesis of non-biological functional equivalents of neuronal components (i.e. ion channels, ion pumps and membrane sections) and their sectional integration with the existing biological neuron, a sort of “physical” emulation if you will. It has only recently come to my attention that there is an existing field, emerging from supramolecular and bio-organic chemistry, centered around the design, synthesis, and incorporation/integration of both synthetic/artificial ion channels and artificial bilipid membranes (i.e. lipid bilayers). The potential uses for such channels commonly listed in the literature have nothing to do with life extension, however, and to my knowledge the field has yet to envision replacing our existing neuronal components as they degrade (or before they are able to), instead seeing such work as aiding in the elucidation of cellular operations and mechanisms and as biosensors. I argue here that the very technologies and techniques that constitute the field (Synthetic Ion-Channels & Ion-Channel/Membrane Reconstitution) can be used towards indefinite longevity and life extension through the iterative replacement of cellular constituents (particularly the components comprising our neurons: ion channels, ion pumps, sections of bilipid membrane, etc.) so as to negate the molecular degradation they would otherwise eventually undergo.
While I envisioned an electro-mechanical-systems approach in my earlier essays, the field of Synthetic Ion-Channels has, from its start in the early 1970s, applied a molecular approach to the problem: designing molecular systems that produce certain functions according to their chemical composition or structure. Note that this approach corresponds to (or can be categorized under) the passive-physicalist sub-approach of the physicalist-functionalist approach (the broad approach overlying all varieties of physically embodied, “prosthetic” neuronal functional replication) identified in an earlier essay.
The field of synthetic ion channels is also referred to as ion-channel reconstitution, which designates “the solubilization of the membrane, the isolation of the channel protein from the other membrane constituents and the reintroduction of that protein into some form of artificial membrane system that facilitates the measurement of channel function,” and more broadly denotes “the [general] study of ion channel function and can be used to describe the incorporation of intact membrane vesicles, including the protein of interest, into artificial membrane systems that allow the properties of the channel to be investigated” [1]. The field has been active since the 1970s, with experimental successes throughout the 1980s, 1990s and 2000s in incorporating functioning synthetic ion channels both into biological bilipid membranes and into artificial membranes dissimilar in molecular composition and structure to their biological analogues, probing the underlying supramolecular interactions, ion selectivity and permeability. The relevant literature suggests that their proposed use has thus far been limited to the elucidation of ion-channel function and operation, the investigation of their functional and biophysical properties, and to a lesser degree the development of “in-vitro sensing devices to detect the presence of physiologically-active substances including antiseptics, antibiotics, neurotransmitters, and others” through the “… transduction of bioelectrical and biochemical events into measurable electrical signals” [2].
Thus my proposal of gradually integrating artificial ion channels and/or artificial membrane sections for the purpose of indefinite longevity (that is, their use in replacing existing biological neurons towards the aim of gradual substrate replacement, or indeed even the alternative use of constructing artificial neurons which, rather than replacing existing biological neurons, become integrated with existing biological neural networks towards the aim of intelligence amplification and augmentation while assuming functional and experiential continuity with our existing biological nervous system) appears to be novel, while the notion of artificial ion channels and neuronal membrane systems in general has already been conceived (and successfully created and experimentally verified, though presumably not integrated in vivo).
The field of Functionally-Restorative Medicine (and the orphan sub-field of whole-brain gradual substrate replacement, or “physically-embodied” brain emulation if you like) can take advantage of the decades of experimental progress in this field, incorporating the technological and methodological infrastructures used in and underlying Ion-Channel Reconstitution and Synthetic/Artificial Ion Channels & Membrane-Systems (and the technologies and methodologies underlying their corresponding experimental verification and incorporation techniques) for the purpose of indefinite functional restoration via the gradual and iterative replacement of neuronal components (including sections of bilipid membrane, ion channels and ion pumps) by MEMS (micro-electro-mechanical systems) or, more likely, NEMS (nano-electro-mechanical systems).
The technological and methodological infrastructure underlying this field can be utilized both for the creation of artificial neurons and for the artificial synthesis of normative biological neurons. Much work in the field required artificially synthesizing cellular components (e.g. bilipid membranes) with structural and functional properties as similar to normative biological cells as possible, so that alternative designs (i.e. those dissimilar to the normal structural and functional modalities of biological cells or cellular components) could be effectively tested for how they affect and elucidate cellular properties. The iterative replacement of either single neurons, or the sectional replacement of neurons with synthesized cellular components (including sections of the bilipid membrane, voltage-dependent ion channels, ligand-dependent ion channels, ion pumps, etc.), is made possible by the large body of work already done in the field. Consequently the technological, methodological and experimental infrastructures developed for the fields of Synthetic Ion-Channels and Ion-Channel/Artificial-Membrane Reconstitution can be utilized for the purpose of a.) iterative replacement and cellular upkeep via biological analogues (or components not differing significantly in structure or in functional and operational modality from their normal biological counterparts) and/or b.) iterative replacement with non-biological analogues of alternate structural and/or functional modalities.
Rather than sensing when a given component degrades and then replacing it with an artificially-synthesized biological or non-biological analogue, it appears to be much more efficient to determine the projected time it takes for a given component to degrade or otherwise lose functionality, and simply to automate its iterative replacement on that schedule, without providing in-vivo systems for detecting molecular or structural degradation. This would allow us to achieve both experimental and pragmatic success in such cellular prosthesis sooner, because it does not rely on the complex technological and methodological infrastructure underlying in-vivo sensing, especially at the scale of single neuronal components like ion-channels, and it avoids causing operational or functional distortion to the components being sensed.
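To make the scheduled-replacement idea concrete, here is a minimal sketch, in Python, of what such an automated, non-sensing replacement policy might look like. The component classes, their projected lifetimes and the schedule horizon are hypothetical placeholders; in practice those figures would have to come from empirical degradation studies of each component type.

```python
# Minimal sketch of the scheduled (non-sensing) replacement policy described above.
# All component names and lifetimes are hypothetical placeholders, not empirical values.

# Projected functional lifetimes, in days, for each neuronal component class (assumed figures).
PROJECTED_LIFETIME_DAYS = {
    "voltage_gated_ion_channel": 30,
    "ligand_gated_ion_channel": 45,
    "ion_pump": 60,
    "membrane_section": 90,
}

def build_replacement_schedule(components, horizon_days):
    """Return a sorted list of (day, component_id) replacement events over the horizon.

    Each component is replaced at fixed intervals equal to its projected lifetime,
    so no in-vivo degradation sensing is required.
    """
    schedule = []
    for component_id, component_type in components:
        interval = PROJECTED_LIFETIME_DAYS[component_type]
        day = interval
        while day <= horizon_days:
            schedule.append((day, component_id))
            day += interval
    return sorted(schedule)

if __name__ == "__main__":
    components = [
        ("ch-001", "voltage_gated_ion_channel"),
        ("pump-001", "ion_pump"),
        ("mem-001", "membrane_section"),
    ]
    for day, component_id in build_replacement_schedule(components, horizon_days=180):
        print(f"day {day:3d}: replace {component_id}")
```

The point of the sketch is only that the entire decision procedure reduces to a fixed timetable once projected lifetimes are known, which is exactly what removes the need for in-vivo degradation sensing.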
A survey of progress in the field [3] lists several broad design motifs. I will first list the design motifs falling within the scope of the survey, and the examples it provides. Selections from both papers are meant to show the depth and breadth of the field, rather than to elucidate the specific chemical or kinetic operations under the purview of each design-variety.
For a much more comprehensive, interactive bibliography of papers falling within the field of Synthetic Ion-Channels, or constituting the historical foundations of the field, see Jon Chui’s online bibliography, which charts the developments in this field up until 2011.
First Survey
Unimolecular ion channels:
Examples include a.) synthetic ion channels with oligocrown ionophores, [5] b.) α-helical peptide scaffolds and rigid push–pull p-octiphenyl scaffolds for the recognition of polarized membranes, [6] and c.) modified varieties of the β-helical scaffold of gramicidin A [7].
Barrel-stave supramolecules:
Examples of this general class include voltage-gated synthetic ion channels formed by macrocyclic bolaamphiphiles and rigid-rod p-octiphenyl polyols [8].
Macrocyclic, branched and linear non-peptide bolaamphiphiles as staves:
Examples of this sub-class include synthetic ion channels formed by a.) macrocyclic, branched and linear bolaamphiphiles and dimeric steroids, [9] and by b.) non-peptide macrocycles, acyclic analogs and peptide macrocycles [respectively] containing abiotic amino acids [10].
Dimeric steroid staves:
Examples of this sub-class include channels using a polyhydroxylated norcholentriol dimer [11].
p-Oligophenyls as staves in rigid-rod β-barrels:
Examples of this sub-class include “cylindrical self-assembly of rigid-rod β-barrel pores preorganized by the nonplanarity of p-octiphenyl staves in octapeptide-p-octiphenyl monomers” [12].
Synthetic Polymers:
Examples of this sub-class include synthetic ion channels and pores comprised of a.) polyalanine, b.) polyisocyanates, c.) polyacrylates, [13] formed by i.) ionophoric, ii.) ‘smart’ and iii.) cationic polymers [14]; d.) surface-attached poly(vinyl-n-alkylpyridinium) [15]; e.) cationic oligo-polymers [16] and f.) poly(m-phenylene ethylenes) [17].
Helical β-peptides (used as staves in the barrel-stave method):
Examples of this class include cationic β-peptides with antibiotic activity, presumably acting as amphiphilic helices that form micellar pores in anionic bilayer membranes [18].
Monomeric steroids:
Examples of this sub-class include synthetic carriers, channels and pores formed by monomeric steroids [19], synthetic cationic steroid antibiotics [that] may act by forming micellar pores in anionic membranes [20], neutral steroids as anion carriers [21] and supramolecular ion channels [22].
Complex minimalist systems:
Examples of this sub-class falling within the scope of this survey include ‘minimalist’ amphiphiles as synthetic ion channels and pores [23], membrane-active ‘smart’ double-chain amphiphiles, expected to form ‘micellar pores’ or self-assemble into ion channels in response to acid or light [24], and double-chain amphiphiles that may form ‘micellar pores’ at the boundary between photopolymerized and host bilayer domains and representative peptide conjugates that may self-assemble into supramolecular pores or exhibit antibiotic activity [25].
Non-peptide macrocycles as hoops:
Examples of this sub-class falling within the scope of this survey include synthetic ion channels formed by non-peptide macrocycles, acyclic analogs [26] and peptide macrocycles containing abiotic amino acids [27].
Peptide macrocycles as hoops and staves:
Examples of this sub-class include a.) synthetic ion channels formed by self-assembly of macrocyclic peptides into genuine barrel-hoop motifs that mimic the β-helix of gramicidin A with cyclic β-sheets. The macrocycles are designed to bind on top of channels, and cationic antibiotics (and several analogs) are proposed to form micellar pores in anionic membranes [28]; b.) synthetic carriers, antibiotics (and analogs) and pores (and analogs) formed by macrocyclic peptides with non-natural subunits. [Certain] macrocycles may act as β-sheets, possibly as staves of β-barrel-like pores [29]; c.) bioengineered pores as sensors. Covalent capturing and fragmentations [have been] observed on the single-molecule level within an engineered α-hemolysin pore containing an internal reactive thiol [30].
Summary
Thus even without knowledge of supramolecular or organic chemistry, one can see that a variety of alternate approaches to the creation of synthetic ion channels, and several sub-approaches within each larger ‘design motif’ or broad approach, not only exist but have been experimentally verified, diversified and refined.
Second Survey
The following selections [31] illustrate the chemical, structural and functional variety of synthetic ion channels, categorized according to whether they are cation-conducting or anion-conducting, respectively. These examples are used to further emphasize the extent of the field, and the number of alternative approaches to synthetic ion-channel design, implementation, integration and experimental verification already in existence. Permission to use all of the following selections and figures was obtained from the author of the source.
There are six classical design motifs for synthetic ion-channels, categorized by structure, that are identified within the paper:
“The first non-peptidic artificial ion channel was reported by Kobuke et al. in 1992” [33].
“The channel contained an amphiphilic ion pair consisting of oligoether-carboxylates and mono- (or di-) octadecylammonium cations. The carboxylates formed the channel core and the cations formed the hydrophobic outer wall, which was embedded in the bilipid membrane with a channel length of about 24 to 30 Å. The resultant ion channel, formed from molecular self-assembly, is cation-selective and voltage-dependent” [34].
“Later, Kobuke et al. synthesized another channel comprising a resorcinol-based cyclic tetramer as the building block. The resorcin[4]arene monomer consisted of four long alkyl chains which aggregated to form a dimeric supramolecular structure resembling that of Gramicidin A” [35]. “Gokel et al. had studied [a set of] simple yet fully functional ion channels known as ‘hydraphiles’” [39].
“An example (channel 3) is shown in Figure 1.6, consisting of diaza-18-crown-6 crown ether groups with alkyl chains as side arms and spacers. Channel 3 is capable of transporting protons across the bilayer membrane” [40].
“A covalently bonded macrotetracycle 4 (Figure 1.8) was shown to be about three times more active than Gokel’s ‘hydraphile’ channel, and its amide-containing analogue also showed enhanced activity” [44].
“Inorganic derivatives using crown ethers have also been synthesized. Hall et al. synthesized an ion channel consisting of a ferrocene and 4 diaza-18-crown-6 groups linked by 2 dodecyl chains (Figure 1.9). The ion channel was redox-active, as oxidation of the ferrocene caused the compound to switch to an inactive form” [45].
BARREL-STAVE CHANNELS:
“These are more difficult to synthesize [in comparison to unimolecular varieties] because the channel formation usually involves self-assembly via non-covalent interactions” [47]. “A cyclic peptide composed of an even number of alternating D- and L-amino acids (Figure 1.10) was suggested by De Santis to form a barrel-hoop structure through backbone-backbone hydrogen bonds” [49].
“A tubular nanotube synthesized by Ghadiri et al. consists of cyclic D- and L-peptide subunits that form flat, ring-shaped conformations and stack through extensive anti-parallel β-sheet-like hydrogen-bonding interactions (Figure 1.11)” [51].
“Experimental results have shown that the channel can transport sodium and potassium ions. The channel can also be constructed by the use of direct covalent bonding between the sheets so as to increase the thermodynamic and kinetic stability” [52].
“By attaching peptides to the octiphenyl scaffold, a β-barrel can be formed via self-assembly through the formation of β-sheet structures between the peptide chains (Figure 1.13)” [53].
“The same scaffold was used by Matile et al. to mimic the structure of the macrolide antibiotic amphotericin B. The channel synthesized was shown to transport cations across the membrane” [54].
“Attaching electron-poor naphthalenediimides (NDIs) to the same octiphenyl scaffold led to a hoop-stave mismatch during self-assembly that results in a twisted and closed channel conformation (Figure 1.14). Adding the complementary dialkoxynaphthalene (DAN) donor led to cooperative interactions between NDI and DAN that favor the formation of a barrel-stave ion channel” [57].
MICELLAR CHANNELS:
“These aggregate channels are formed by amphotericin involving both sterols and antibiotics arranged in two half-channel sections within the membrane” [58].
“An active form of the compound is the bolaamphiphile (a two-headed amphiphile). Figure 1.15 shows an example that forms an active channel structure through dimerization or trimerization within the bilayer membrane. Electrochemical studies have shown that the monomer is inactive and that the active form involves a dimer or larger aggregates” [60].
ANION CONDUCTING CHANNELS:
“A highly active, anion-selective, monomeric cyclodextrin-based ion channel was designed by Madhavan et al. (Figure 1.16). Oligoether chains were attached to the primary face of the β-cyclodextrin head group via amide bonds. The hydrophobic oligoether chains were chosen because they are long enough to span the entire lipid bilayer. The channel was able to select “anions over cations” and “discriminate among halide anions in the order I⁻ > Br⁻ > Cl⁻ (following the Hofmeister series)” [61].
“The anion selectivity occurred via the ring of ammonium cations being positioned just beside the cyclodextrin head group, which helped to facilitate anion selectivity. Iodide ions were transported the fastest because the activation barrier to enter the hydrophobic channel core is lower for I⁻ compared to either Br⁻ or Cl⁻” [62]. “A more specific artificial anion-selective ion channel was the chloride-selective ion channel synthesized by Gokel. The building block involved a heptapeptide with proline incorporated (Figure 1.17)” [63].
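The activation-barrier explanation quoted above can be put in standard transition-state terms. The relation below is the textbook rate-versus-barrier dependence, not something given in the source, which states only the qualitative ordering:

$$k_{X^-} \propto \exp\!\left(-\frac{\Delta G^{\ddagger}_{X^-}}{RT}\right), \qquad \Delta G^{\ddagger}_{I^-} < \Delta G^{\ddagger}_{Br^-} < \Delta G^{\ddagger}_{Cl^-} \;\Rightarrow\; k_{I^-} > k_{Br^-} > k_{Cl^-}$$

That is, the lower the free-energy barrier for a given halide to enter the hydrophobic channel core, the faster its transport, reproducing the quoted I⁻ > Br⁻ > Cl⁻ ordering.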
Cellular Prosthesis: Inklings of a New Interdisciplinary Approach
The paper cites “nanoreactors for catalysis and chemical or biological sensors” and “interdisciplinary uses as nano-filtration membrane, drug or gene delivery vehicles/transporters as well as channel-based antibiotics that may kill bacterial cells preferentially over mammalian cells” as some of the main applications of synthetic ion-channels [65], other than their normative use in elucidating cellular function and operation.
However, I argue that a whole interdisciplinary field, a heretofore-unrecognized new approach or sub-field of Functionally-Restorative Medicine, is possible: taking the technologies and techniques involved in constructing, integrating, and experimentally verifying either a.) non-biological analogues of ion-channels and ion-pumps (and thus of transmembrane proteins in general, also referred to as transport proteins or integral membrane proteins) and membranes (which include normative bilipid membranes, non-lipid membranes and chemically-augmented bilipid membranes), or b.) artificially-synthesized biological analogues of ion-channels, ion-pumps and membranes, which are structurally and chemically equivalent to naturally-occurring biological components but which are synthesized artificially, and applying such technologies and techniques toward the gradual replacement of the existing biological neurons constituting our nervous systems (or at least those neuron populations that comprise the neo- and prefrontal cortex), thereby achieving indefinite longevity through iterative procedures of gradual replacement. There is still work to be done in determining the comparative advantages and disadvantages of the various structural and functional (i.e. design) motifs, and in the logistics of implementing the iterative replacement or reconstitution of ion-channels, ion-pumps and sections of neuronal membrane in-vivo.
The conceptual schemes outlined in Concepts for Functional Replication of Biological Neurons [66], Gradual Neuron Replacement for the Preservation of Subjective-Continuity [67] and Wireless Synapses, Artificial Plasticity, and Neuromodulation [68] would constitute variations on the basic approach underlying this proposed, embryonic interdisciplinary field. Certain approaches within the field of nanomedicine itself, particularly those that constitute the functional emulation of existing cell types, such as (but not limited to) Robert Freitas’s conceptual design for the functional emulation of the red blood cell (a.k.a. erythrocyte, haematid) [69], i.e. the Respirocyte, should also be seen as falling under the purview of this new approach, although not all approaches to nanomedicine (e.g. diagnostics, drug delivery and neuroelectronic interfacing) constitute the physical (i.e. electromechanical, kinetic and/or molecular, physically-embodied) and functional emulation of biological cells.
The field of functionally-restorative medicine in general (and of nanomedicine in particular) and the field of supramolecular and organic chemistry converge here, where the technological, methodological, and experimental infrastructures developed in the fields of Synthetic Ion-Channels and Ion-Channel Reconstitution can be employed to develop a new interdisciplinary approach that applies the logic of prosthesis to the cellular and cellular-component (i.e. sub-cellular) scale: same tools, new use. These techniques could be used to iteratively replace the components of our neurons as they degrade, or to replace them with more robust systems that are less susceptible to molecular degradation. Instead of repairing the cellular DNA, RNA and protein transcription and synthesis machinery, we bypass it completely by configuring and integrating the neuronal components (ion-channels, ion-pumps and sections of bilipid membrane) directly.
Thus I suggest that theoreticians of nanomedicine look to the large quantity of literature already developed in the emerging fields of synthetic ion-channels and membrane-reconstitution, towards the objective of adapting and applying existing technologies and methodologies to the new purpose of iterative maintenance, upkeep and/or replacement of cellular (and particularly neuronal) constituents with either non-biological analogues or artificially-synthesized-but-chemically/structurally-equivalent biological analogues.
This new sub-field of Synthetic Biology needs a name to differentiate it from the other approaches to Functionally-Restorative Medicine. I suggest the designation ‘cellular prosthesis’.
References:
[1] Williams (1994). An introduction to the methods available for ion channel reconstitution. In D. C. Ogden (Ed.), Microelectrode Techniques: The Plymouth Workshop Edition. Cambridge: Company of Biologists.
[2] Tomich, J., & Montal, M. (1996). U.S. Patent No. 5,16,890. Washington, DC: U.S. Patent and Trademark Office.
[69] Freitas, R. A., Jr. (1998). Exploratory design in medical nanotechnology: A mechanical artificial red cell. Artificial Cells, Blood Substitutes, and Immobilization Biotechnology, 26: 411–430. Available at: http://www.ncbi.nlm.nih.gov/pubmed/9663339
Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial-General-Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g. Kurzweil’s measure is when enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below which may indicate that the imminence of a coming intelligence explosion is more impacted by basic processing speed, or instructions per second (IPS), regardless of cost or resource requirements per unit of computation, than it is by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than it would be to implement AGI alone, or rather that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than attempting to implement AGI directly.
Loaded Uploads:
Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs-Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that, according to Kurzweil’s figures, the average person will have in 2019, when computational processing power equal to that of the human brain, which he estimates at 20 quadrillion calculations per second, becomes available for $1,000. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed, and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.
The rate of signal transmission in electronic computers has been estimated to be roughly one million times as fast as the signal transmission speed between neurons, which is limited by the rate of passive chemical diffusion. Since the rate of signal transmission equates with the subjective perception of time, an upload would presumably experience the passing of time roughly a million times faster than biological humans. If Yudkowsky’s observation [4] that this would be equivalent to experiencing all of history since Socrates every 18 “real-time” hours is correct, then such an emulation would experience roughly 250 subjective years for every real-time hour, or about 4 years a minute; a real-time day would be equal to roughly 6,000 subjective years, a week to roughly 42,000 years, and a month to roughly 180,000 years.
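These conversion factors are simple arithmetic. The sketch below takes the roughly 250-subjective-years-per-real-hour figure used in this essay as its working assumption (that ratio corresponds to a speed-up factor of about 2.2 million; a strict million-fold speed-up would give roughly 114 subjective years per hour instead) and derives the per-minute, per-day, per-week and per-month equivalents from it:

```python
# Back-of-the-envelope conversion between real time and subjective time for an emulation
# running `speedup` times faster than biological real time. The speed-up value below is
# an assumption chosen to reproduce the essay's ~250-subjective-years-per-real-hour figure.

HOURS_PER_YEAR = 365.25 * 24  # ~8,766 hours

def subjective_years(real_hours, speedup):
    """Subjective years experienced over `real_hours` of wall-clock time."""
    return real_hours * speedup / HOURS_PER_YEAR

SPEEDUP = 250 * HOURS_PER_YEAR  # ~2.19 million; a strict 1,000,000x gives ~114 years/hour

for label, hours in [("minute", 1 / 60), ("hour", 1), ("day", 24),
                     ("week", 24 * 7), ("month", 24 * 30)]:
    print(f"1 real-time {label:6s} ~ {subjective_years(hours, SPEEDUP):>9,.0f} subjective years")
```

Changing the assumed speed-up factor rescales every line proportionally, so the argument that follows does not hinge on the precise ratio, only on its rough order of magnitude.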
Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.
The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.
Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100 million MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don’t understand how the operation of the brain’s individual components (e.g. neurons, neural clusters, etc.) converge to create the emergent phenomenon of mind – or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom & Anders Sandberg, in their 2008 Whole Brain Emulation Roadmap [6], for instance, have argued that if we understand the operational dynamics of the brain’s low-level components, we can then computationally emulate such components, and the emergent functional modalities of the brain and the experiential modalities of the mind will emerge therefrom.
Mind Uploading is (Largely) Independent of Software Performance:
Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.
This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales; simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.
If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don’t actually need to know “how to build a mind” – whereas we do in the case of an AGI (which for the purposes of this essay shall denote AGI not based off of the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people’s definitions). This follows naturally from the conjunction of the premises that 1. the system we wish to emulate already exists and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by only understanding the operation of the low-level components’ functional modalities.
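As a toy illustration of what “emulating a low-level component” means in practice, the sketch below simulates a single neuron with a leaky integrate-and-fire model. This is far simpler than the biophysically detailed models a real emulation would require, and every parameter is illustrative rather than physiological; the point is only that a component can be simulated from its own operational dynamics, with no reference to the higher-level functions it participates in.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: a deliberately simplified stand-in for the
# low-level component models a whole-brain emulation would actually require.
# All parameters are illustrative, not physiological fits.

def simulate_lif(input_current, dt=1e-4, tau=0.02, r_m=1e7,
                 v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070):
    """Simulate membrane voltage for an input-current trace; return (voltages, spike_times)."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * (dt / tau)  # leaky integration of input current
        v += dv
        if v >= v_thresh:                               # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset                                 # reset membrane potential after spiking
        voltages.append(v)
    return np.array(voltages), spike_times

if __name__ == "__main__":
    current = np.full(10_000, 2.5e-9)  # constant 2.5 nA input for 1 second of simulated time
    _, spikes = simulate_lif(current)
    print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Nothing in the model refers to cognition, memory or mind; if the low-level dynamics are captured accurately enough, whatever higher-level behaviour depends on them comes along for free, which is precisely the claim at issue.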
Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.
Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.
If we can achieve human whole-brain-emulation even one week before we can achieve AGI (an AGI whose cognitive architecture is not based on the biological human nervous system) and set this upload to work on creating an AGI, then such an upload would have, according to the subjective-speed-up factors given above, roughly 42,000 subjective years in which to succeed in designing and implementing an AGI for every one real-time week that normatively-biological AGI workers have to succeed.
The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease of self-modification and the ability to make as many copies of himself as he has processing power to allocate to them, only increase his potential to accelerate the coming of an intelligence explosion.
This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.
So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?
There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology’s operation in order to implement and maintain it. This is, after all, the case for all other technologies throughout the history of humanity. But the human brain is categorically different in this regard, because it already exists.
If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of its components. But in order to make any changes to it, or any variations on its basic structure or principles of operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components, and this requires being able to predictively model the system. If we don’t understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.
Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in the fact that we don’t need to reverse-engineer its higher-level operations in order to instantiate it.
This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering at the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand the system’s operation at all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach mind-uploading falls under, in which reverse-engineering at a small enough scale is sufficient to recreate a system, provided that we don’t seek to modify its internal operation in any significant way, I will call Blind Replication.
Blind Replication disallows any sort of significant modification, because if one doesn’t understand how processes affect other processes within the system, then one has no way of knowing how modifications will change other processes, and thus the emergent function(s) of the system. We wouldn’t have a way to translate functional or optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would work in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So governments couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits, and indeed would be unable to obtain intellectual property rights over a technology whose inner workings or “operational dynamics” they cannot describe.
However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.
Could Upload+AGI be easier to implement than AGI alone?
This means that the creation of an intelligence with a subjective perception of time significantly greater than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload rather than an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined by and large by processing power alone, and thus remain largely independent of the need for significant improvements in software performance or “methodological implementation”.
If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e. methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.
Virtual Advantage:
The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives him/her a massive advantage. It also would likely allow them to counter-act and negate any attempts made from “real-time” physicality to stop, slow or otherwise deter them.
There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification, with which he could optimize his current functional modalities (e.g. what we coarsely call “intelligence”) or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification, or IA), as well as creating categorically new functional modalities, is much easier from within virtual embodiment than it would be in physicality. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment any changes could be made and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system for implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies) – and if those changes made further unexpected changes that we can’t easily reverse, then we may create an infinite regress of changes, wherein changes made to reverse a given modification in turn create more changes that in turn need to be reversed, ad infinitum.
Thus self-modification (and especially recursive self-modification) towards the purpose of intelligence amplification into Ultraintelligence [7] is easier (i.e. necessitating a smaller technological and methodological infrastructure, that is, a smaller required host of methods and technologies, and thus less cost as well) in virtual embodiment than in physical embodiment.
These recursive modifications not only further maximize the upload’s ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical to I. J. Good’s intelligence explosion hypothesis) – or in other words maximize his ability to maximize his general ability in anything.
But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
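The change-and-check procedure described above is, at bottom, an accept-or-discard loop over copies. The sketch below shows that structure; the emulation object, the propose_modification step and the evaluate benchmark are hypothetical stand-ins, since the essay does not specify how any of them would actually be built.

```python
import copy
import random

def evaluate(emulation):
    # Placeholder benchmark: higher is better. A real benchmark is left unspecified here.
    return sum(emulation["params"])

def propose_modification(emulation, rng):
    # Modify a copy, never the running original.
    candidate = copy.deepcopy(emulation)
    index = rng.randrange(len(candidate["params"]))
    candidate["params"][index] += rng.uniform(-1.0, 1.0)
    return candidate

def change_and_check(emulation, iterations=1000, seed=0):
    """Iteratively propose modifications to copies, keeping only those that score better."""
    rng = random.Random(seed)
    best, best_score = emulation, evaluate(emulation)
    for _ in range(iterations):
        candidate = propose_modification(best, rng)
        score = evaluate(candidate)
        if score > best_score:      # keep the change only if it helps...
            best, best_score = candidate, score
        # ...otherwise the modified copy is simply discarded (the "revert" step)
    return best, best_score

if __name__ == "__main__":
    upload = {"params": [0.0] * 8}
    improved, score = change_and_check(upload)
    print(f"score after change-and-check: {score:.2f}")
```

The subjective speed-up and the ability to run many copies matter here because the loop is embarrassingly parallel and its cost is dominated by how many candidate modifications can be evaluated per unit of real time.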
It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons, without being able to model how this scales up to the operational dynamics of the higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (i.e. increasing synaptic density in neurons, or increasing the range of usable neurotransmitters — thus increasing the potential information density in a given signal or synaptic-transmission) may be significantly easier than creating categorically new functional modalities.
Increasing the Imminence of an Intelligence Explosion:
So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).
He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.
Thus there are numerous methods of indirectly increasing the imminence (or the likelihood of imminence within a certain time-range, which is a measure with less ambiguity) of a coming intelligence explosion – and many new ones no doubt that will be realized only once such an upload acquires such advantages and abilities.
Intimations of Implications:
So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two can agree on the viability of the premises and reasoning of the scenario, while drawing opposite conclusions in terms of whether it is good or bad news.
People who subscribe to the “Friendly AI” camp of AI-related existential risk will be at once hopeful and dismayed. While it might increase their ability to create their AGI (or, more technically, their Coherent-Extrapolated-Volition Engine [8]), thus decreasing the chances of an “unfriendly” AI being created in the interim, they will also be dismayed by the fact that it may involve (though it does not necessitate) a recursively self-modifying intelligence, in this case an upload, being created prior to the creation of their own AGI – which is the very problem they are trying to mitigate in the first place.
Those who, like me, see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving “power” equality, or at least mitigating “power” disparity [where power is defined as the capacity to effect change in the world or society] – and in which any intelligence increasing its capability at a faster rate than all others is disallowed) as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person to essentially have the power to determine the fate of humanity – due to his massively increased “capability” or “power” – which is the very feature (capability disparity/inequality) that the “distributed intelligence explosion” camp of AI-related existential risk seeks to minimize.
On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.
I for one think that it is highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to supersede the disadvantages incurred by the increased existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much “power” or “capability-to-affect-change” in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.
Conclusion:
Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:
How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost. This is because as soon as we have enough processing power to emulate a human nervous system, provided we have sufficient software to emulate the lower level neural components giving rise to the higher-level human mind, then the increase in the rate of thought and subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available for a widely-affordable cost. This conclusion is independent of any specific estimates of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system. I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly-negligible amount of time, like one week, due to the massive increase in speed of thought and the rate of subjective perception of time that would then be available to such an upload.
The creation of an upload may be relatively independent of software performance/capability (which is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation – i.e. how we actually design a mind, rather than the substrate it is instantiated by – which we do need in order to implement an AGI and which we would need for WBE, were the system we seek to emulate not already in existence) and may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on increases in both computational performance and software performance or fundamental progress in methodological implementation.
If this second conclusion is true, it means that an upload may be possible quite soon, considering that we have already passed the basic estimates for processing requirements given by Kurzweil, Moravec and Storrs-Hall, provided we can emulate the low-level neural regions of the brain with high predictive accuracy (and provided the claim that instantiating such low-level components will vicariously instantiate the emergent human mind, without our needing to really understand how such components functionally converge to do so, proves true), whereas AGI may still have to wait for fundamental improvements to methodological implementation or “software performance”.
Thus it may be easier to create an AGI by first creating an upload to accelerate the development of that AGI’s creation, than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!
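The first and fourth conclusions can be illustrated with a toy comparison of the two routes: waiting for computational price performance to make brain-scale processing power widely affordable, versus running a single upload on already-existing hardware and letting it design the AGI. Every number below is an assumption chosen only to exhibit the structure of the argument, not a forecast:

```python
import math

# Toy comparison of the two routes discussed in the conclusions above.
# All figures are assumptions for illustration, not estimates or forecasts.

COST_TODAY = 100_000_000.0      # assumed current cost (USD) of brain-scale processing power
TARGET_COST = 1_000.0           # Kurzweil-style affordability threshold
DOUBLING_TIME_YEARS = 1.5       # assumed price-performance doubling time

SPEEDUP = 1_000_000             # assumed subjective speed-up of the upload
SUBJECTIVE_YEARS_FOR_AGI = 500  # assumed subjective effort needed to design an AGI

# Route A: wait for price performance to bring brain-scale compute down to the target cost.
doublings_needed = math.log2(COST_TODAY / TARGET_COST)
years_route_a = doublings_needed * DOUBLING_TIME_YEARS

# Route B: run one upload on existing hardware and let it design the AGI at subjective speed.
years_route_b = SUBJECTIVE_YEARS_FOR_AGI / SPEEDUP

print(f"Route A (price-performance): ~{years_route_a:.1f} real-time years")
print(f"Route B (upload-mediated):   ~{years_route_b * 365.25:.2f} real-time days")
```

Under these assumed numbers the upload route completes in a fraction of a real-time year while the price-performance route takes decades; it is this qualitative gap, not the particular figures, that the conclusions above rest on.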
References:
[1] Kurzweil, R. (2005). The Singularity is Near. Penguin Books.
[2] Moravec, H. (1997). When will computer hardware match the human brain? Journal of Evolution and Technology, 1(1). Available at: http://www.jetpress.org/volume1/moravec.htm [Accessed 01 March 2013].
[4] Ford, A. (2011). Yudkowsky vs Hanson on the Intelligence Explosion – Jane Street Debate 2011. [Online Video]. August 10, 2011. Available at: https://www.youtube.com/watch?v=m_R5Z4_khNw [Accessed 01 March 2013].
[5] Drexler, K. E. (1989). Molecular manipulation and molecular computation. In NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14–17. Available at: http://www.halcyon.com/nanojbl/NanoConProc/nanocon2.html [Accessed 01 March 2013].