Kemal Akman – Lifeboat News: The Blog (https://lifeboat.com/blog)

The Existential Importance of Life Extension
https://lifeboat.com/blog/2011/03/the-existential-importance-of-life-extension
Thu, 24 Mar 2011

The field of life extension is broad, ranging from regenerative medicine to disease prevention through nutritional supplements and phytomedicine. Although the relevance of longevity and disease prevention to existential risks is less apparent than that of preventing large-scale catastrophic scenarios, it is highly relevant to the future of our society. The development of healthy longevity, the efficiency of modern medicine in treating age-related diseases, and the question of how well we can handle upcoming public-health issues will have a major impact on our short-term future over the next few decades. The prospect of healthy life extension therefore plays an important role at both the personal and the societal level.
From a personal perspective, a longevity-compatible lifestyle, nutrition and supplement regimen may not only help us stay active and live longer; optimizing our health and fitness also increases our energy, mental performance and capacity for social interaction. This aids our ability to work on the increasingly complex tasks of a 21st-century world that can make a positive impact on society, such as work on existential-risk awareness and problem-solving. Recently, I wrote a basic personal orientation on the dietary-supplement aspect of life extension, with an audience of transhumanists, technology advocates with a high future-shock tolerance and open-minded scientists in mind, which is available here.
On a societal level, however, the combination of an aging population and mounting public-health problems is serious. Several diseases of civilization, whose prevalence also climbs rapidly with advanced age, are increasing quickly. For example, type 2 diabetes is well on its way to becoming an insurmountable problem for China, and the WHO projects COPD, the chronic lung disease caused by smoking and pollution, to be the third leading cause of death by 2030.
While the currently accelerating increase in diseases of civilization may not collapse society itself, the costs associated with an over-aging population could significantly damage societal order, collapse health systems and harm economies, given the presently insufficient state of medicine and prevention. The magnitude, urgency and broad spectrum of consequences of these age-related diseases are captured very well in a fact-filled five-minute presentation by the LifeStar Foundation on serious upcoming issues of aging in our society; viewing it is highly recommended. In short, a full-blown health crisis appears to be looming over many Western countries, including the US, due to the high prevalence of diseases of aging in a growing population. This may require more resources than are available if disease-prevention efforts are not stepped up as early as possible. In that case, the urgent action required to deal with such a crisis may deprive other technological sectors of time and resources, affecting organizations and governments, including their capacity to manage vital infrastructure, existential risks and the planning of safe and sufficient technological progress. Hence, neglecting this major upcoming health issue by failing to step up disease-prevention efforts in line with the latest biomedical knowledge may indirectly impair our capability to handle existential risks.
It should be pointed out that not all measures aimed at improving public health and medicine need be complex or expensive, as even existing biomedical knowledge is not sufficiently applied. A major example of this is the epidemic of Vitamin D deficiency in Western populations, uncovered several years ago. In the last few years, the range of diseases that Vitamin D deficiency and therapy can influence has grown to include most cancers, diabetes, cardiovascular diseases, brain aging including Alzheimer's disease, and many infectious diseases. Ironically, Vitamin D is one of the cheapest supplements available. Moreover, correcting an existing Vitamin D deficiency, which may affect as much as 80% of the Western population, may cut mortality risk in half. The related decrease in mortality would likely coincide with reduced morbidity among the elderly, resulting in large savings of public healthcare and hospital funds, since Vitamin D effectively prevents and treats some of the most costly age-related diseases. The Life Extension Foundation, for example, has already offered a free initial supply to the U.S. population and shown that massive healthcare costs (and many lives) could be saved if every hospitalized patient were tested for Vitamin D and/or given the supplement; however, this offer was ignored by the US government. This is detailed in an article on the effects of widespread Vitamin D deficiency from the Life Extension Foundation, which also lists many references for the above health effects of Vitamin D.
To recapitulate, there are plenty of important reasons why a focus on disease prevention and regenerative medicine, applying existing state-of-the-art biomedical knowledge as well as advancing key areas such as stem-cell research, rejuvenation technologies and nanomedicine, should be an urgent priority for advocates of existential-risk management today and over the next few decades.
Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction
https://lifeboat.com/blog/2011/02/security-and-complexity-issues-implicated-in-strong-artificial-intelligence-an-introduction
Fri, 25 Feb 2011

Strong AI, or Artificial General Intelligence (AGI), refers to self-improving intelligent systems with the capacity to engage theoretical and real-world problems with a flexibility similar to that of an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastics and cognitive science as well as in traditional artificial intelligence. My aim in this post is to give a general readership a basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily applied to today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies, for example [1] [2], and genetic algorithms serve similar roles. With the semantic web, the upcoming technology for organizing knowledge on the net through machine-interpretable understanding of words in the context of natural language, we may be inventing early pieces of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and promise to describe and ‘understand’ real-world concepts, enabling our computers to build interfaces to real-world concepts and their coherences more autonomously. Actually getting from expert systems to AGI will require approaches for bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past, we have faced new kinds of security challenges: DoS attacks, e-mail and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit-card numbers and private data en masse. These were, and are, among the first serious incidents related to the Internet. Still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications and, of course, human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). To understand the security implications of strong AI, one must first realize that, if AGI takes off hard enough, there will probably no longer be any human-predictable hardware, software or interfaces around for long periods of time.

To grasp the new security implications, it is important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the simplest mathematical rules can produce results hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on the cells, generated by the same rule, observed in the row before. Such a rule can be encoded in as little as a single byte, yet generate astounding complexity.

Cellular automaton, produced by a simple recursive formula
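As a concrete sketch, an elementary cellular automaton can be simulated in a few lines of Python. Rule 30, used here, is one standard example of such a rule; the grid width and number of steps are arbitrary choices for illustration.

```python
# Rule 30 elementary cellular automaton: each new cell depends only on the
# three cells above it, yet the resulting pattern looks chaotic.
RULE = 30  # the entire rule fits in a single byte

def step(cells):
    """Compute the next row from the current one (cells beyond the edge are 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [0] * 15 + [1] + [0] * 15  # a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Each of the 256 possible rules is just a lookup table from three-cell neighbourhoods to a new cell, yet some of them, like Rule 30, produce patterns complex enough to have been used as pseudo-random generators.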

The Fibonacci sequence is another popular example of unexpected complexity. Based on a very short recursive equation, the sequence generates a pattern of incremental growth that can be visualized as a complex spiral, resembling the design of a snail’s shell and many other patterns in nature. A combination of Fibonacci spirals, for example, can resemble the motif of the head of a sunflower. A thorough understanding of this ‘simple’ Fibonacci sequence is also enough to model some fundamental but important dynamics of systems as complex as the stock market and the global economy.

Sunflower head showing a Fibonacci sequence pattern
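The short recursive equation mentioned above can be sketched directly; the ratio of successive terms converges on the golden ratio that underlies those spiral patterns:

```python
# Fibonacci: each term is the sum of the two before it.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib(20) / fib(19))            # approaches the golden ratio, ~1.618
```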

Traditional software is many orders of magnitude more complex than basic mathematical formulae, and thus many orders of magnitude less predictable. Artificial general intelligence can be expected to work with rules even more complex than low-level computer programs, of a complexity comparable to natural human language, which would place it yet several orders of magnitude above traditional software. The resulting security implications have not yet been researched systematically, but they are likely to be at least as hard as this comparison suggests.

Practical security is not about achieving perfection, but about mitigating risks to a minimum. A current consensus among strong-AI researchers is that we can only improve the chances of an AI being friendly, i.e. an AI that acts in a secure manner and has a positive rather than a negative long-term effect on humanity [5], and that this must be a crucial design aspect from the beginning. Research into Friendly AI started out with a serious consideration of Asimov’s laws of robotics [6] and is based on the application of probabilistic models, cognitive science and social philosophy to AI research.

Many researchers who believe in the viability of AGI take this a step further and predict a technological singularity. Just like the assumed physical singularity that started our universe (the Big Bang), a technological singularity is expected to increase the rate of technological progress far beyond what we are used to from the history of humanity, i.e. beyond the current ‘laws’ of progress. Another important notion associated with the singularity is that we cannot predict even the most fundamental changes occurring after it, because things would, by definition, progress faster than we are currently able to predict. Therefore, just as we believe the evolution of the universe depended on its initial conditions (in the Big Bang case, the few physical constants from which the others can be derived), many researchers in this field believe that AI security strongly depends on initial conditions as well, i.e. on the design of the bootstrapping software. If we succeed in building a general-purpose decision-making mind, its whole point will be self-modification and self-improvement. Hence, our direct control over it would be limited to its first iteration and the initial conditions of the strong AI, which we can influence mostly by getting the initial iteration of its hardware and software design right.

Our approach to optimizing those initial conditions must consist of working as carefully as possible. Space technology is a useful example here, pointing in the general direction such development should take. In rocket science, all measurements and mathematical models must be as precise as our current technological standards allow. Multiple redundancies must also be present for every system, since every single component can be expected to fail. Despite all this, many rocket launches still fail today, although error rates are steadily improving.

Additionally, humans interacting with an AGI may be a major security risk themselves, as they may be convinced by the AGI to remove its limitations. Since an AGI that exceeds human intellect can be expected to be very convincing, we should not only focus on physical limitations but also on making the AGI ‘friendly’. Even in designing this ‘friendliness’, however, our minds are largely unprepared to deal with the consequences of an AGI’s complexity, because the way we perceive and handle potential issues and risks stems from evolution. As products of natural evolution, our behaviours help us deal with animal predators, interact in human societies and care for our children, but not anticipate the complexity of man-made machines. These natural traits of human perception and cognition are known as cognitive biases.

Sadly, as helpful as they may be in natural (i.e., non-technological) environments, these are the very behaviours that are often counterproductive when dealing with the unforeseeable complexity of our own technology and modern civilization. If you do not yet see the primary importance of cognitive biases to the security of future AI, you are probably in good company. But there are good reasons why this is a crucial issue that researchers, developers and users of future generations of general-purpose AI need to take into account. One of the major reasons for founding the earlier-mentioned Singularity Institute [3] was to get the basics right, including a grasp of the cognitive biases that necessarily influence the technological design of AGI.

What do these considerations practically imply for the design of strong AI? Some of the traditional IT-security issues that need to be addressed in computer programs are input validation, access limitations, avoiding buffer overflows, safe conversion of data types, setting resource limits and secure error handling. All of these are valid and important issues that must be addressed in any piece of software, including weak and strong AI. However, we must avoid underestimating the design goals of a strong AI and mitigate risk on all levels from the beginning. To do this, we must care about more than the traditional IT-security issues. An AGI will interface with the human mind, through text, direct communication and interaction. Thus, we must also estimate the errors that we may not see, and do our best to be aware of the flaws in human logic known as cognitive biases, which include:

  • Loss aversion: the disutility of giving up an object is greater than the utility associated with acquiring it.
  • Positive outcome bias: the tendency, in prediction, to overestimate the probability of good things happening to oneself.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same.
  • Irrational escalation: the tendency to make irrational decisions based upon rational decisions in the past, or to justify actions already taken.
  • Omission bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).
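The first bias in the list, loss aversion, can even be made quantitative. A minimal sketch using the value function from Tversky and Kahneman’s prospect theory (the parameter values 0.88 and 2.25 are their published median estimates):

```python
# Prospect-theory value function: losses loom larger than equal gains.
ALPHA, LAMBDA = 0.88, 2.25  # median estimates from Tversky & Kahneman (1992)

def value(x):
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

print(value(100))   # subjective value of gaining 100
print(value(-100))  # losing 100 feels more than twice as bad
```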

The above cognitive biases are a modest selection from Wikipedia’s list [7], which contains over a hundred more. Struggling with some of the known cognitive biases, and the social components involved, in complex technological situations may be quite familiar to many of us, from managing modern business processes to investing in the stock market. In fact, we should apply any general lessons learned from dealing with current technological complexity to AGI. For example, some of the most successful long-term investment strategies in the stock market are boring and strict, built mostly on safety, such as Buffett’s margin-of-safety concept. Even with all the lessons of social and technological experience incorporated into an AGI design that strives to optimize both cognitive and IT security, its designers cannot afford to forget that perfect and complete security remains an illusion.
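On the conventional side, a few of the IT-security items listed earlier (input validation, safe type conversion, resource limits) are straightforward to illustrate. The function and parameter names below are purely illustrative, not from any particular AI framework:

```python
# Sketch of three items from the traditional IT-security checklist.
MAX_QUERY_LEN = 1024  # resource limit: cap the size of accepted input

def parse_confidence(raw: str) -> float:
    """Safe type conversion: reject anything that is not a probability."""
    try:
        value = float(raw)
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if not 0.0 <= value <= 1.0:
        raise ValueError(f"out of range [0, 1]: {value}")
    return value

def validate_query(query: str) -> str:
    """Input validation: enforce length and character constraints before use."""
    if len(query) > MAX_QUERY_LEN:
        raise ValueError("query too long")
    if not query.isprintable():
        raise ValueError("query contains non-printable characters")
    return query

print(parse_confidence("0.75"))   # 0.75
print(validate_query("status?"))  # status?
```

These checks are necessary but, as argued above, far from sufficient for an AGI, whose failure modes extend well beyond malformed input.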

References

[1] Chen, M., Chiu, A. & Chang, H., 2005. Mining changes in customer behavior in retail marketing. Expert Systems with Applications, 28(4), 773–781.
[2] Oliver, J., 1997. A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9115 [Accessed Feb 25, 2011].
[3] The Singularity Institute for Artificial intelligence: http://singinst.org/
[4] For the Lifeboat Foundation’s dedicated program, see: https://lifeboat.com/ex/ai.shield
[5] Yudkowsky, E., 2006. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Global Catastrophic Risks, Oxford University Press, 2007.
[6] See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics and http://en.wikipedia.org/wiki/Friendly_AI, Accessed Feb 25, 2011
[7] For a list of cognitive biases, see http://en.wikipedia.org/wiki/Cognitive_biases, Accessed Feb 25, 2011