February 2011 – Lifeboat News: The Blog (https://lifeboat.com/blog)

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction
Fri, 25 Feb 2011

Strong AI, or Artificial General Intelligence (AGI), refers to self-improving intelligent systems with the capacity to engage with theoretical and real-world problems with a flexibility similar to that of an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic methods and cognitive science, as well as in traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI techniques, such as machine learning algorithms and expert systems, are already heavily applied to today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies, for example [1] [2], and genetic algorithms are in similarly wide use. With the semantic web, the upcoming technology for organizing knowledge on the net through machine-interpretable understanding of words in the context of natural language, we may be inventing early parts of the technology that will play a role in the future development of AGI. Semantic approaches draw on computer science, sociology and current AI research, and promise to describe and ‘understand’ real-world concepts, enabling our computers to build interfaces to real-world concepts and their interrelations more autonomously. Actually getting from expert systems to AGI will require approaches for bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past we have seen new kinds of security challenges: DoS attacks, email and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were, and are, among the first serious security incidents related to the Internet. Still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). Understanding the implications of strong AI first means realizing that, if AGI takes off hard enough, there probably won’t be any human-predictable hardware, software or interfaces around for any extended period of time.

To grasp the new security implications, it is important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the application of the simplest mathematical equations can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating new cells based on which cells, produced by the same rule, were present in the previous step. Many of these rules can be encoded in as little as four letters (32 bits), yet they generate astounding complexity.

[Figure: Cellular automaton produced by a simple recursive rule]
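To make the cellular-automaton point concrete, here is a minimal sketch in Python (my own illustration, not code from the post): the whole update rule is just a short bit string, in this case Wolfram’s rule 30, yet the printed pattern is famously complex and hard to predict by eye.

    # Elementary cellular automaton: each new cell depends only on its three
    # neighbours in the previous row, looked up in an 8-bit rule table.
    def step(cells, rule=30):
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 31 + [1] + [0] * 31   # start from a single live cell
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)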

The Fibonacci sequence is another popular example of unexpected complexity. Based on a very short recursive equation, the sequence generates a pattern of incremental growth which can be visualized as a spiral resembling a snail shell and many other patterns found in nature. A combination of Fibonacci spirals, for example, can reproduce the motif of the head of a sunflower. An understanding of this ‘simple’ Fibonacci sequence has even been used to model some fundamental dynamics of systems as complex as the stock market and the global economy.

[Figure: Sunflower head showing a Fibonacci spiral pattern]
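As a quick numerical aside (my own sketch, not part of the original post), the ratio of successive Fibonacci numbers converges on the golden ratio, the constant behind the spiral packings seen in sunflower heads:

    # Successive Fibonacci ratios approach the golden ratio (~1.618034).
    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    for n in (5, 10, 20, 30):
        print(n, fib(n + 1) / fib(n))
    print("golden ratio:", (1 + 5 ** 0.5) / 2)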

Traditional software is many orders of magnitude more complex than basic mathematical formulae, and thus many orders of magnitude less predictable. Artificial general intelligence can be expected to work with rules even more complex than low-level computer programs, of a complexity comparable to natural human language, which would place it yet several orders of magnitude above traditional software. The resulting security implications have not yet been researched systematically, but they are likely to be at least as hard as this comparison suggests.

Practical security is not about achieving perfection, but about mitigating risks to a minimum. The current consensus among strong AI researchers is that we can only improve the chances for an AI to be friendly, i.e. an AI acting in a secure manner and having a positive rather than a negative long-term effect on humanity [5], and that this must be a crucial design aspect from the very beginning. Research into Friendly AI started out with a serious consideration of Asimov’s Laws of Robotics [6] and is based on the application of probabilistic models, cognitive science and social philosophy to AI research.

Many researchers who believe in the viability of AGI take it a step further and predict a technological singularity. Like the assumed physical singularity that started our universe (the Big Bang), a technological singularity is expected to increase the rate of technological progress far beyond what we are used to from the history of humanity, i.e. beyond the current ‘laws’ of progress. Another important notion associated with the singularity is that we cannot predict even the most fundamental changes occurring after it, because things would, by definition, progress faster than we are currently able to predict. Therefore, in a similar way in which we believe the development of the universe depended on its initial conditions (in the Big Bang case, the few physical constants from which the others can be derived), many researchers in this field believe that AI security strongly depends on initial conditions as well, i.e. on the design of the bootstrapping software. If we succeed in building a general-purpose decision-making mind, its whole point will be self-modification and self-improvement. Hence, our direct control over it would be limited to its first iteration, and the initial conditions of a strong AI can be influenced mostly by getting the first iteration of its hardware and software design right.

Our approach to optimizing those initial conditions must consist of working as carefully as possible. Space technology is a useful example that points in the general direction such development should take. In rocket science and space technology, all measurements and mathematical equations must be as precise as our current technological standards allow. Multiple redundancies must also be present for every system, since every single aspect of a system can be expected to fail at some point. Despite this, many rocket launches still fail today, although error rates are steadily improving.

Additionally, humans interacting with an AGI may be a major security risk themselves, as they may be persuaded by the AGI to remove its limitations. Since an AGI that exceeds human intellect can be expected to be very persuasive, we should not only focus on physical limitations, but on making the AGI ‘friendly’. Even in designing this ‘friendliness’, however, our minds are largely unprepared to deal with the consequences of an AGI’s complexity, because the way we perceive and deal with potential issues and risks stems from evolution. As a product of natural evolution, our behaviour helps us deal with animal predators, interact in human societies and care for our children, but not anticipate the complexity of man-made machines. These natural traits of human perception and cognition, shaped by evolution, are called cognitive biases.

Sadly, as helpful as they may be in natural (i.e. non-technological) environments, these are the very behaviours that are often counter-productive when dealing with the unforeseeable complexity of our own technology and modern civilization. If you do not yet see the primary importance of cognitive biases to the security of future AI, you are probably in good company. But there are good reasons why this is a crucial issue that researchers, developers and users of future generations of general-purpose AI need to take into account. One of the major reasons for founding the earlier-mentioned Singularity Institute [3] was to get the basics right, including a grasp of the cognitive biases that necessarily influence the technological design of AGI.

What do these considerations practically imply for the design of strong AI? Some of the traditional IT security issues that need to be addressed in computer programs are input validation, access limitations, avoidance of buffer overflows, safe conversion of data types, resource limits and secure error handling. All of these are valid and important issues that must be addressed in any piece of software, including weak and strong AI (a small code sketch of two of them follows further below). However, we must avoid underestimating the design goals for a strong AI and must mitigate the risk on all levels from the beginning. To do this, we have to care about more than the traditional IT security issues. An AGI will interface with the human mind through text and through direct communication and interaction. We must therefore also estimate the errors that we may not see, and do our best to be aware of flaws in human logic and cognitive biases, which may include:

  • Loss aversion: “the dis-utility of giving up an object is greater than the utility associated with acquiring it”.
  • Positive outcome bias: the tendency, in prediction, to overestimate the probability of good things happening to oneself.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same.
  • Irrational escalation: the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
  • Omission bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).

The cognitive biases above are a modest selection from Wikipedia’s list [7], which contains over a hundred more. Struggling with some of the known cognitive biases in complex technological situations, and with the social components involved, may be quite familiar to many of us from situations such as managing modern business processes or investing in the stock market. In fact, we should apply any general lessons learned from dealing with current technological complexity to AGI. For example, some of the most successful long-term investment strategies in the stock market are boring and strict, based mostly on safety, such as Buffett’s margin-of-safety concept. Yet even with all the factors gained from social and technological experience taken into account in an AGI design that strives to optimize both cognitive and IT security, its designers cannot afford to forget that perfect and complete security remains an illusion.
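Returning to the traditional IT security items listed before the cognitive biases, here is a minimal Python sketch (a hypothetical example of my own, not code from the post; the resource module is Unix-only) of two of them, input validation and resource limiting:

    import resource  # Unix-only standard-library module

    def parse_age(raw: str) -> int:
        """Validate untrusted input instead of trusting it blindly."""
        value = int(raw)  # raises ValueError on garbage input
        if not 0 <= value <= 150:
            raise ValueError(f"age out of range: {value}")
        return value

    def cap_cpu_seconds(limit: int) -> None:
        """Refuse to let this process consume unbounded CPU time."""
        resource.setrlimit(resource.RLIMIT_CPU, (limit, limit))

    cap_cpu_seconds(5)
    print(parse_age("42"))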

References

[1] Chen, M., Chiu, A. & Chang, H., 2005. Mining changes in customer behavior in retail marketing. Expert Systems with Applications, 28(4), pp. 773–781.
[2] Oliver, J., 1997. A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9115 [Accessed 25 Feb 2011].
[3] The Singularity Institute for Artificial Intelligence: http://singinst.org/
[4] For the Lifeboat Foundation’s dedicated program, see: https://lifeboat.com/ex/ai.shield
[5] Yudkowsky, E., 2006. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Global Catastrophic Risks, Oxford University Press, 2007.
[6] See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics and http://en.wikipedia.org/wiki/Friendly_AI [Accessed 25 Feb 2011].
[7] For a list of cognitive biases, see http://en.wikipedia.org/wiki/Cognitive_biases [Accessed 25 Feb 2011].

The Global Brain and its role in Human Immortality
Thu, 17 Feb 2011

It would be helpful to discuss these theoretical concepts, because they could have significant practical and existential implications.

The Global Brain (GB) is an emergent world-wide entity of distributed intelligence, facilitated by communication and the meaningful interconnections between millions of humans via technology (such as the internet).

For my purposes I take it to mean the expressive integration of all (or the majority of) human brains through technology and communication, a Metasystem Transition from the human brain to a global (Earth) brain. The GB is truly global not only in geographical terms but also in function.

It has been suggested that the GB has clear analogies with the human brain. For example, the basic unit of the human brain (HB) is the neuron, whereas the basic unit of the GB is the human brain. Whilst the HB is confined within our cranium, the GB is confined within this planet. The HB contains several regions that have specific functions of their own but are also connected to the whole (e.g. the occipital cortex for vision, the temporal cortex for auditory function, the thalamus, etc.). The GB likewise contains several regions that have specific functions of their own but are connected to the whole (e.g. search engines, governments, etc.).

Some specific analogies are:

1. Broca’s area, in the inferior frontal gyrus, is associated with speech. This could be the equivalent of, say, Rupert Murdoch’s communication empire.
2. The motor cortex is the equivalent of the world-wide railway system.
3. The sensory system in the brain is the equivalent of all digital sensors, CCTV networks, internet uploading facilities, etc.

If we accept that the GB will eventually become fully operational (and this may happen within the next 40–50 years), then there could be profound repercussions for human evolution. Apart from the fact that we may be able to change our genetic make-up using technology (through synthetic biology or nanotechnology, for example), there could be new evolutionary pressures that help extend the human lifespan to an indefinite degree.

Empirically, we find that there is a basic underlying law that grants neurons the same lifespan as their human host. If natural laws are universal, then I would expect the same law to operate in similar metasystems, i.e. in my analogy, with humans as the basic operating units of the GB. In that case, I ask:

If there is an axiom positing that individual units (neurons) within a brain must live as long as the brain itself, i.e. 100–120 years, then the individual units (human brains and, therefore, whole humans) within a GB must live as long as the GB itself, i.e. indefinitely.

Humans will become so embedded and integrated into the GB’s virtual and real structures that, from a resource-allocation point of view, it may make more sense to maintain existing humans indefinitely than to eliminate them through ageing and create new ones, who would then need extra resources in order to re-integrate themselves into the GB.

The net result will be that humans start experiencing an unprecedented prolongation of their lifespan, as the GB attempts to evolve to higher levels of complexity at a low thermodynamic cost.

Marios Kyriazis
http://www.elpistheory.info

Mixed Messages: Tantrums of an Angry Sun
Thu, 10 Feb 2011

When examining the delicate balance within which life on Earth hangs, it is impossible not to consider the ongoing love/hate relationship between our parent star, the sun, and our uniquely terraqueous home planet.

On one hand, Earth is situated so perfectly, so ideally, inside the sun’s habitable zone that it is impossible not to regard our parent star with a sense of ongoing gratitude. It is, after all, the onslaught of spectral rain, the sun’s seemingly limitless output of charged particles, which provides the initial spark for all terrestrial life.

Yet on the other hand, during those brief moments of solar upheaval, when highly energetic Earth-directed ejecta threaten to destroy our precariously perched technological infrastructure, one cannot help but eye with caution the potentially calamitous distance of only 93 million miles separating our entire human population from this unpredictable stellar inferno.

On 6 February 2011, the twin solar observation spacecraft of the STEREO mission aligned at opposite ends of the sun along Earth’s orbit and, for the first time in human history, offered scientists a complete 360-degree view of the sun. Since solar observation began hundreds of years ago, humanity has only ever had one side of the sun in view at any given time, as it slowly completes a rotation roughly every 27 days. First launched in 2006, the two STEREO satellites are glittering jewels in a growing crown of heliophysics science missions that aim to better understand solar dynamics, and for the next eight years they will offer this dual-sided view of our parent star.

In addition to providing the source of all energy to our home planet, the sun occasionally spews violent bursts of energy from its active regions, known as coronal mass ejections (CMEs). These fast-traveling clouds of ionized gas are responsible for lovely events like the aurora borealis and aurora australis, but beyond a certain point they have been known to overload orbiting satellites, set fire to ground-based technological infrastructure, and even usher in widespread blackouts.

CMEs are natural occurrences, and they are better understood than ever thanks to the emerging picture of our sun as a dynamic star. Though humanity has known for centuries that the solar cycle follows a more-or-less eleven-year ebb and flow, only recently has the scientific community pieced together a more complete picture of how our sun’s subtle changes affect space weather and, unfortunately, how little we can feasibly do to contend with this legitimate global threat.

The massive solar storm of 1 September 1859 produced aurorae that were visible as far south as Hawai’i and Cuba, with similar effects observed around the South Pole. The Earth-directed CME took only 17 hours to make the 93-million-mile trek from the corona of our sun to the Earth’s atmosphere, because an earlier CME had cleared a path for its journey. The one saving grace of this massive space-weather event was that the North American and European telegraph system was in its delicate infancy, having been in place for only 15 years. Nevertheless, telegraph pylons threw sparks, many of them burning, and telegraph paper worldwide caught fire spontaneously.

Considering the ambitious expansion of communications lines, electrical grids, and broadband networks that has been implemented since, humanity now faces the threat of space weather on far more uneven footing. CME events of that magnitude are thought to occur roughly every 500 years, based on ice-core samples measured for high-energy proton radiation.

The CME event of 13 March 1989 overloaded the Hydro-Québec transmission lines and caused the catastrophic collapse of an entire power grid. The resulting aurorae were visible as far south as Texas and Florida. The estimated cost ran to hundreds of millions of dollars. A later storm, in August 1989, interfered with semiconductor functionality and halted trading on the Toronto Stock Exchange.

Beginning in 1995 with the launch and deployment of the Solar and Heliospheric Observatory (SOHO), continuing in 2010 with the launch of the Solar Dynamics Observatory (SDO), and now this year with the launch of the Glory science mission, NASA is making ambitious, thoughtful strides toward a clearer picture of the sun’s dynamics, better means of predicting space weather, and a clearer evaluation of both the great benefits and the grave stellar threats.

Earth-bound technology infrastructure remains vulnerable to high-energy output from the sun. However, the growing array of orbiting satellites that the best and brightest of modern science use to continually gather data from our dynamic star will offer humanity its best chance of modeling, predicting, and perhaps some day defending against the occasional outburst from our parent star.

Written by Zachary Urbina, Founder, Cozy Dark

GC Lingua Franca(s)
Tue, 08 Feb 2011

This is an email to the Linux kernel mailing list, but it relates to futurism topics, so I post a copy here as well.
———
Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the LKML;

If we were already talking to our computers, etc. as we should be, I wouldn’t feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. This army your kernel enables has millions of people, but they often lose to smaller proprietary armies because they are working inefficiently. My mail one year ago (http://keithcu.com/wordpress/?p=272) listed the biggest work items, but I realize now I should have focused on one. In a sentence, I have discovered that we need garbage-collected (GC) lingua franca(s) (http://www.merriam-webster.com/dictionary/lingua%20franca).

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source code, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM’s Jeopardy-playing Watson is proprietary, like Deep Blue was. This topic is not discussed in any of the news articles, as if the license did not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows, scribbled secrets clutched in their fists, working together, for any of them to succeed. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers. Windows is not the biggest problem; it is the proprietary licensing model that has infected computing, and science.

There is, unsurprisingly, a consensus among kernel programmers that user mode is “a mess” today, which suggests there is a flaw in the Linux desktop programming paradigm. Consider the vast cosmic expanse of XML libraries in a Linux distribution. As in computer vision (http://www.cs.cmu.edu/~cil/v-source.html), there are not yet clear places for knowledge to accumulate. It is a shame that the kernel is so far ahead of most of the rest of user mode.

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++, down to the matrix class. Because C/C++ didn’t define a matrix, nor provide code for one, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O and graphics than to integrate OpenCV. Getting productive in that codebase takes months of work, and people want to see results before then. Building it is a chore, and the project has lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but it would be better off delegating that to others. It is now doing CUDA optimizations it could get from elsewhere. It also maintains three Python wrappers and several wrappers for other languages; many groups spend more time working on wrappers than on the underlying code. Using the wrappers is fine if you only want to call the software, but if you want to improve OpenCV, the programming environment instantly becomes radically different and more complicated.
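To illustrate the point about standard math classes (my own sketch, assuming only NumPy rather than OpenCV), a basic vision primitive such as a box blur is only a few lines once a shared matrix type exists:

    import numpy as np

    def box_blur(image, size=3):
        """Average each pixel over its size x size neighbourhood."""
        h, w = image.shape
        k = size // 2
        padded = np.pad(image, k, mode="edge")  # replicate the border pixels
        out = np.empty_like(image, dtype=float)
        # A real library would vectorize this; the point here is that a
        # shared ndarray type already does the heavy lifting.
        for y in range(h):
            for x in range(w):
                out[y, x] = padded[y:y + size, x:x + size].mean()
        return out

    noisy = np.random.rand(64, 64)
    print(box_blur(noisy).shape)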

There is a team working on strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly, as they do not have a constant stream of demos. They don’t recognize that their codebase is a small amount of world-changing ideas buried in engineering baggage like the STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community. Some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV’s core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot Operating System), according to its documentation, is a re-implementation of RPC in C++, which is not what robotics was missing. I’ve emailed various groups, and all know of GC, but they are afraid of any decrease in performance and do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive too, but we managed.

C/C++ makes it harder to share code among disparate scientists than a GC language does. It doesn’t matter if there are lots of XML parsers or RSS readers, but it does matter if we don’t have an official computer vision codebase. This is not an argument against any particular codebase or language, only for free software lingua franca(s) in certain places, to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still leaves us an uncountably infinite number of things to disagree over.

Because the kernel is written in C, you have strongly influenced the rest of the community. C is fully acceptable for a mature kernel like Linux, but many concepts aren’t so clear in user mode. What is the UI of OpenOffice when speech input is the primary means of control? Many scientists don’t understand the difference between the stack and the heap. Software isn’t buildable if those with the necessary domain expertise can’t use the tools they are given.

C is a flawed language for user mode because it is missing GC, which was invented a decade earlier, and C++ added as much as it took away, since each feature came with an added cost of complexity. C++ compilers converting to C was a good idea, but being a superset was not. C/C++ never died in user mode because there are now so many GC replacements that the abundance of choices has paralyzed many into inaction, as there seems to be no clear place to go. Microsoft doesn’t have this confusion, as their language, as of 2001, is C#. Microsoft is steadily moving to C#, but it is 10x easier to port a codebase like MySQL than SQL Server, which has an operating system inside. C# is taking over at the edges first, where innovation happens anyway. There is a competitive aspect to this.

Lots of free software technologies have multiple C/C++ implementations, because it is often easier to re-write than to share, plus an implementation in each GC language. We might not all agree on the solution, so let’s start by agreeing on the problem. A good example of what GC buys you is how a Mac port can go from weeks of work to hours. GC also prevents code from using memory after freeing it, freeing it twice, and so on, which makes user code far less likely to corrupt memory. If everyone in user mode were still writing in assembly language, you would obviously be concerned. If Git had been built in 98% Python and 2% C, it would have become easier to use faster, found ways to speed up Python, and set a good example. It doesn’t matter now, but it was an opportunity in 2005.

You can still “leak” memory in a GC language, but that just means you are holding on to a reference. GC requires the runtime to have a fuller understanding of the code, which enables features like reflection. It is helpful to think of GC as a step up for programming in the way C was a step up from assembly language. In Lisp, the binary was the source code; Lisp is free by default. The Baby Boomer generation didn’t bring the tradition of science to computers, and the biggest legacy of that generation is whether we remember it. Boomers gave us proprietary software, C, C++, Java, and the bankrupt welfare state. Lisp and GC were created / discovered by John McCarthy, a mathematician of the WWII greatest generation. He wrote that the computers of 1974 were fast enough to do strong AI. There were plenty of people working on it back then, but not in a group big enough to achieve critical mass. If they had, we’d know their names. If our scientists had been working together in free software and Lisp since 1959, the technology we would have today would seem magical to us. The good news is that we have more scientists than we need.
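A tiny illustration of the “leak by lingering reference” point (my own example, not the author’s): a garbage collector only reclaims an object once nothing reachable still refers to it.

    import gc

    cache = []

    def process(blob: bytes) -> int:
        cache.append(blob)   # accidental retention: the blob stays reachable
        return len(blob)

    process(b"x" * 10_000_000)
    print(len(cache[0]))     # still referenced, so still in memory
    cache.clear()            # drop the last reference...
    gc.collect()             # ...and the collector is free to reclaim it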

There are a number of good languages, and it doesn’t matter too much which one is chosen, but it seems the Python family (Cython / PyPy) requires the least amount of work to get what we need, as it has the most extensive libraries: http://scipy.org/Topical_Software. I don’t argue that the Python language and implementation are perfect, only good enough, just as the shapes of the letters of the English alphabet are good enough. Choosing and agreeing on a lingua franca will increase the results for the same amount of effort. No one has to understand the big picture; they just have to do their work in a place where knowledge can easily accumulate. A GC lingua franca isn’t a silver bullet, but it is the bottom piece of a solid scientific foundation and a powerful form of social engineering.

The most important thing is to get lingua franca(s) in key fields like computer vision and strong AI. However, we should also consider a lingua franca for the Linux desktop. This would help, but not solve, the situation of the mass of Linux apps feeling dis-integrated. The Linux desktop is much harder because the code there is 100x bigger than computer vision, and there is a lot of C/C++ in FOSS user mode today. In fact it seems hopeless to me, and I’m an optimist. It doesn’t matter; every team can move at a different pace. Many groups might not be able to finish a port for five years, but agreeing on a goal is more than half of the battle. The little groups can adopt it most quickly.

There are a lot of lurkers around codebases who want to contribute but don’t want to spend months getting up to speed on countless tedious things, like learning a new error-handling scheme. They would be happy to jump into a port as a way to get into a codebase. Unfortunately, many groups don’t encourage these efforts because they feel so busy. Many think today’s hardware is too slow and that running any slower would doom the effort; they are impervious to the hardware doublings and forget that algorithmic performance matters most. A GC system may add a one-time cost of 5–20%, but it has the potential to be faster, and it gives people more time to work on performance. There are also real-time, incremental, and NUMA-aware collectors. The ultimate in performance is taking advantage of parallelism in specialized hardware like GPUs, and a GC language can handle that because it supports arbitrary bitfields.

Science moves at demographic speed when knowledge is not being reused among the existing scientists. A lingua franca makes more sense as more people adopt it. That is why I send this message to the main address of the free software mothership. The kernel provides code and leadership; you have influence and the responsibility to lead the rest, who are like wandering ants. If I were Linus, I would threaten to quit Linux and get people going on AI ;-) There are many things you could do. I mostly want to bring this to your attention. Thank you for reading this.

I am posting a copy of this open letter on my blog as well (http://keithcu.com/wordpress/?p=1691). Reading the LKML for more than one week could be classified as torture under the Geneva Conventions.

Human Biological Immortality in 50 years
Tue, 01 Feb 2011

I believe that death due to ageing is not an absolute necessity of human nature. From the evolutionary point of view, we age because nature withholds energy from somatic (bodily) repair and diverts it to the germ cells (in order to assure the survival and evolution of the DNA). This is necessary so that the DNA can develop and achieve higher complexity.

Although this was a valid scenario until recently, we have now evolved to such a degree that we can use our intellect to achieve further cognitive complexity by manipulating our environment. This makes it unnecessary for the DNA to evolve along the path of natural selection (a slow and cumbersome ‘hit-and-miss’ process), and allows us to develop more quickly and efficiently by using our brains as a means of achieving higher complexity. As a consequence, death through ageing becomes an illogical and unnecessary process. Humans must live much longer than the current lifespan of 80–120 years in order for a more efficient global evolutionary development to take place.

It is possible to estimate how long the above process will take to mature (see the steps listed below). Consider that the creation of DNA happened approximately 2 billion years ago, the formation of the neuron (an effective cell) several million years ago, that of an effective brain (Homo sapiens sapiens) around 200,000 years ago, and the establishment of complex societies (Ancient Greece, Rome, China etc.) thousands of years ago. There is a logarithmic reduction of the time necessary to proceed to the next, more complex step (roughly a reduction by a factor of 100; a rough calculation follows the list below). This means that global integration (and thus indefinite lifespans) will be achieved in a matter of decades (and certainly in less than a century), starting from the 1960s–1970s, when globalisation in communications, travel and science/technology started to become established. This leaves a maximum of another 50 years before full global integration becomes established.

Each step is associated with a higher level of complexity, and takes only a fraction of the time to mature compared to the previous one.

1. DNA (organic life — molecules: billions of years)
2. Neuron (effective cells: millions of years)
3. Brain (complex organisms — Homo sapiens: thousands of years)
4. Society (formation of effective societies: several centuries)
5. Global Integration (formation of a ‘super-thinking entity’: several decades)
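As a rough back-of-the-envelope sketch of that factor-of-100 rule (my own illustration, using round numbers in the spirit of the figures above, not the author’s calculation):

    # Sketch: each stage is assumed to mature roughly 100x faster than the
    # previous one, starting from ~2 billion years for DNA.
    stages = ["DNA", "Neuron", "Brain", "Society", "Global Integration"]

    duration = 2_000_000_000  # rough starting point, in years
    for stage in stages:
        print(f"{stage:20s} ~{duration:>13,} years to mature")
        duration //= 100      # ends at ~20 years for Global Integration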

Step number 5 implies that humans who have already developed an advanced state of cognitive complexity and sophistication will transcend the limits of evolution by natural selection and therefore, by default, must not die through ageing. Their continued life is a necessary requirement of this new type of evolution.

For full details see:

https://acrobat.com/#d=MAgyT1rkdwono-lQL6thBQ
