
This is an email to the Linux kernel mailing list, but it relates to futurism topics so I post a copy here as well.
———
Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the LKML:

If we were already talking to our computers, etc., as we should be, I wouldn’t feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. This army your kernel enables has millions of people, but they often lose to smaller proprietary armies because they are working inefficiently. My mail one year ago (http://keithcu.com/wordpress/?p=272) listed the biggest work items, but I realize now I should have focused on one. In a sentence, I have discovered that we need GC lingua franca(s). (http://www.merriam-webster.com/dictionary/lingua%20franca)

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM’s Jeopardy-playing Watson is proprietary, like Deep Blue was. This topic is not discussed in any of the news articles, as if the license does not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows, scribbled secrets clutched in their fists, working together, for any of them to succeed. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers. Windows is not the biggest problem; the bigger problem is the proprietary licensing model that has infected computing, and science.

There is, unsurprisingly, a consensus among kernel programmers that usermode is “a mess” today, which suggests there is a flaw in the Linux desktop programming paradigm. Consider the vast cosmic expanse of XML libraries in a Linux distribution. Like computer vision (http://www.cs.cmu.edu/~cil/v-source.html), there are not yet clear places for knowledge to accumulate. It is a shame that the kernel is so far ahead of most of the rest of user mode.

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++ down to the matrix class. Because C/C++ didn’t define a matrix, nor provide code, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O, and graphics, than to integrate OpenCV. Getting productive in that codebase is months of work and people want to see results before then. Building it is a chore, and they have lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but they would be better delegating that out to others. They are now doing CUDA optimizations they could get from elsewhere. They also have 3 Python wrappers and several other wrappers as well; many groups spend more time working on wrappers than the underlying code. Using the wrappers is fine if you only want to call the software, but if you want to improve OpenCV then the programming environment instantly becomes radically different and more complicated.
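To make the wrapper point concrete, here is a minimal sketch, assuming the cv2 Python binding and a hypothetical local image file; calling OpenCV from a GC language takes a few lines, while improving the C++ core underneath is a completely different environment:

```python
# Minimal sketch of *calling* OpenCV through a Python wrapper.
# Assumes the cv2 binding is installed; "street.png" is a hypothetical local image.
import cv2

image = cv2.imread("street.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("street.png not found")

blurred = cv2.GaussianBlur(image, (5, 5), 0)   # smooth to reduce noise
edges = cv2.Canny(blurred, 50, 150)            # Canny edge detection
cv2.imwrite("street_edges.png", edges)         # save the edge map
```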

There is a team working on Strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly because they do not have a constant stream of demos. They don’t see that their codebase is a small amount of world-changing ideas buried in engineering baggage like the STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community. Some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV’s core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot Operating System), according to its documentation, is a re-implementation of RPC in C++, which is not what robotics was missing. I’ve emailed various groups, and all know of GC, but they are afraid of any decrease in performance, and they do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive, but we managed.

C/C++ makes it harder to share code amongst disparate scientists than a GC language. It doesn’t matter if there are lots of XML parsers or RSS readers, but it does matter if we don’t have an official computer vision codebase. This is not against any codebase or language, only for free software lingua franca(s) in certain places to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still gives us an uncountably infinite number of things to disagree over.

Because the kernel is written in C, you’ve strongly influenced the rest of the community. C is fully acceptable for a mature kernel like Linux, but many concepts aren’t so clear in user mode. What is the UI of OpenOffice when speech input is the primary means of control? Many scientists don’t understand the difference between the stack and the heap. Software isn’t buildable if those with the necessary expertise can’t use the tools they are given.

C is a flawed language for user mode because it is missing GC, which had been invented a decade earlier, and C++ added as much as it took away, since each feature came with an added cost of complexity. C++ compilers converting to C was a good idea, but being a superset was not. C/C++ never died in user mode because there are now so many GC replacements that the choice paralyzes many into inaction; there seems to be no clear place to go. Microsoft doesn’t have this confusion: their language, as of 2001, is C#. Microsoft is steadily moving to C#, but it is 10x easier to port a codebase like MySQL than SQL Server, which has an operating system inside. C# is taking over at the edges first, where innovation happens anyway. There is a competitive aspect to this.

Lots of free software technologies have multiple C/C++ implementations, because it is often easier to re-write than to share, plus an implementation in each GC language. We all might not agree on the solution, so let’s start by agreeing on the problem. A good example of what GC buys is how a Mac port can go from weeks to hours. GC also prevents code from using memory after freeing it, freeing it twice, and so on, and therefore user code is less likely to corrupt memory. If everyone in user mode were still writing in assembly language, you would obviously be concerned. If Git had been built as 98% Python and 2% C, it would have become easier to use more quickly, found ways to speed up Python, and set a good example. It doesn’t matter now, but it was an opportunity in 2005.

You can “leak” memory in GC, but that just means you are still holding a reference. GC requires the system to have a fuller understanding of the code, which enables features like reflection. It is helpful to consider that GC is a step up for programming like C was over assembly language. In Lisp the binary was the source code; Lisp is free by default. The Baby Boomer generation didn’t bring the tradition of science to computers, and whether we remember that tradition will be this generation’s biggest legacy. Boomers gave us proprietary software, C, C++, Java, and the bankrupt welfare state. Lisp and GC were created / discovered by John McCarthy, a mathematician of the WWII-era greatest generation. He wrote that the computers of 1974 were fast enough to do Strong AI. There were plenty of people working on it back then, but not in a group big enough to achieve critical mass. If they had, we’d know their names. If our scientists had been working together in free software and Lisp since 1959, the technology we would have developed by today would seem magical to us. The good news is that we have more scientists than we need.
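As a minimal sketch of the “leak by holding a reference” point above, and of the reflection a managed runtime makes cheap, assuming CPython’s standard gc and weakref modules:

```python
# Minimal sketch: in a GC language, a "leak" just means something still holds a reference,
# and the runtime knows enough about objects to support reflection.
import gc
import weakref

class Sensor:
    """Hypothetical stand-in for any resource-heavy object."""
    def read(self):
        return 42

cache = {}                    # a long-lived container that quietly keeps objects alive
obj = Sensor()
cache["latest"] = obj         # the "leak": the cache still references the object
probe = weakref.ref(obj)      # a weak reference does not keep the object alive

del obj
gc.collect()
print(probe() is not None)    # True: still reachable through the cache, so not collected

del cache["latest"]
gc.collect()
print(probe() is None)        # True: last strong reference gone, memory reclaimed

# Reflection: inspect the object's interface at run time.
print([name for name in dir(Sensor) if not name.startswith("_")])  # ['read']
```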

There are a number of good languages, and it doesn’t matter too much which one is chosen, but it seems the Python family (Cython / PyPy) requires the least amount of work to get what we need, as it has the most extensive libraries: http://scipy.org/Topical_Software. I don’t argue that the Python language and implementation are perfect, only that they are good enough, much as the shapes of the letters of the English alphabet are good enough. Choosing and agreeing on a lingua franca will increase the results for the same amount of effort. No one has to understand the big picture; they just have to do their work in a place where knowledge can easily accumulate. A GC lingua franca isn’t a silver bullet, but it is the bottom piece of a solid science foundation and a powerful form of social engineering.
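As one small illustration of why those existing libraries matter, here is a minimal sketch assuming NumPy from the scipy.org stack linked above; the matrix and linear-algebra pieces that the OpenCV discussion says C/C++ never provided are simply there:

```python
# Minimal sketch: standard classes that do math, no hand-rolled matrix class required.
# Assumes NumPy is installed.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # a small symmetric matrix
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)           # solve A x = b
print(x)                            # -> [2. 3.]

eigenvalues, _ = np.linalg.eigh(A)  # eigenvalues of the symmetric matrix
print(eigenvalues)
```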

The most important thing is to get lingua franca(s) in key fields like computer vision and Strong AI. However, we should also consider a lingua franca for the Linux desktop. This will help, but not solve, the situation of the mass of Linux apps feeling dis-integrated. The Linux desktop is a lot harder because the code here is 100x bigger than in computer vision, and there is a lot of C/C++ in FOSS user mode today. In fact it seems hopeless to me, and I’m an optimist. It doesn’t matter; every team can move at a different pace. Many groups might not be able to finish a port for 5 years, but agreeing on a goal is more than half of the battle. The little groups can adopt it most quickly.

There are a lot of lurkers around codebases who want to contribute but don’t want to spend months getting up to speed on countless tedious things like learning a new error-handling scheme. They would be happy to jump into a port as a way to get into a codebase. Unfortunately, many groups don’t encourage these efforts because they feel so busy. Many think today’s hardware is too slow and that running any slower would doom the effort; they are impervious to the steady doublings of hardware and forget that algorithmic performance matters most. A GC system may add a one-time cost of 5–20%, but it has the potential to be faster, and it gives people more time to work on performance. There are also real-time, incremental, and NUMA-aware collectors. The ultimate in performance is taking advantage of parallelism in specialized hardware like GPUs, and a GC language can handle that because it supports arbitrary bitfields.

Science moves at demographic speed when knowledge is not being reused among the existing scientists. A lingua franca makes more sense as more people adopt it. That is why I send this message to the main address of the free software mothership. The kernel provides code and leadership; you have influence and the responsibility to lead the rest, who are like wandering ants. If I were Linus, I would threaten to quit Linux and get people going on AI (wink). There are many things you could do. I mostly want to bring this to your attention. Thank you for reading this.

I am posting a copy of this open letter on my blog as well (http://keithcu.com/wordpress/?p=1691). Reading the LKML for more than one week could be classified as torture under the Geneva conventions.

I believe that death due to ageing is not an absolute necessity of human nature. From the evolutionary point of view, we age because nature withholds energy for somatic (bodily) repairs and diverts it to the germ-cells (in order to assure the survival and evolution of the DNA). This is necessary so that the DNA is able to develop and achieve higher complexity.

Although this was a valid scenario until recently, we have now evolved to such a degree that we can use our intellect to achieve further cognitive complexity by manipulating our environment. This makes it unnecessary for the DNA to evolve along the path of natural selection (which is a slow and cumbersome, ‘hit-and-miss’ process), and allows us to develop quickly and more efficiently by using our brain as a means for achieving higher complexity. As a consequence, death through ageing becomes an illogical and unnecessary process. Humans must live much longer than the current lifespan of 80–120 years, in order for a more efficient global evolutionary development to take place.

It is possible to estimate how long the above process will take to mature (see the list below). Consider that the creation of DNA was approximately 2 billion years ago, the formation of a neuron (cell) several million years ago, that of an effective brain (Homo sapiens sapiens) 200,000 years ago, and the establishment of complex societies (Ancient Greece, Rome, China etc.) thousands of years ago. There is a logarithmic reduction of the time necessary to proceed to the next, more complex step (a reduction by a factor of roughly 100). This means that global integration (and thus indefinite lifespans) will be achieved in a matter of decades (and certainly in less than a century), starting from the 1960s-1970s (when globalisation in communications, travel and science/technology started to become established). This leaves a maximum of another 50 years before full global integration becomes established.

Each step is associated with a higher level of complexity, and takes a fraction of the time to mature compared to the previous one.

1. DNA (organic life — molecules: billions of years)

2. Neuron (effective cells: millions of years)

3. Brain (complex organisms — Homo sapiens: thousands of years)

4. Society (formation of effective societies: several centuries)

5. Global Integration (formation of a ‘super-thinking entity’: several decades)
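As a back-of-the-envelope check of the factor-of-100 reduction described above, here is a minimal sketch; the 2-billion-year starting point and the step labels are taken from the text, and the final step comes out on the order of decades, consistent with the argument (the intermediate values only roughly match the figures quoted above):

```python
# Minimal sketch: each step is claimed to mature roughly 100x faster than the previous one.
steps = ["DNA", "Neuron", "Brain", "Society", "Global Integration"]
years = 2_000_000_000  # ~2 billion years for the first step

for step in steps:
    print(f"{step:>18}: ~{years:,.0f} years")
    years /= 100  # logarithmic reduction by a factor of ~100 per step
# The last line printed is Global Integration at ~20 years, i.e. a matter of decades.
```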

Step number 5 implies that humans who have already developed an advanced state of cognitive complexity and sophistication will transcend the limits of evolution by natural selection, and therefore, by default, must not die through ageing. Their continued life is a necessary requirement of this new type of evolution.

For full details see:

https://acrobat.com/#d=MAgyT1rkdwono-lQL6thBQ

- submitted to the District Attorney of Tubingen, to the Administrative Court of Cologne, to the Federal Constitutional Court (BVerfG) of Germany, to the International Court for Crimes Against Humanity, and to the Security Council of the United Nations -

by Otto E. Rössler, Institute for Physical and Theoretical Chemistry, University of Tubingen, Auf der Morgenstelle A, 72076 Tubingen, Germany

The results of my group represent fundamental research in the fields of general relativity, quantum mechanics and chaos theory. Several independent findings obtained in these disciplines jointly point to a danger, almost as if Nature had set a trap for humankind if we are not watching out.

MAIN RESULT. It concerns BLACK HOLES and consists of 10 sub-results

Black holes are different from what was previously thought and from what is still presupposed by experimentalists. It is much as it was with the Eniwetok hydrogen bomb test, where incorrect physical calculations caused a catastrophe, fortunately a localized one at the time. Four Tubingen theorems (gothic-R theorem, TeLeMaCh theorem, miniquasar theorem, superfluidity theorem) entail 10 new consequences:

1) Black holes DO NOT EVAPORATE — hence they can only grow.

2) Artificial black holes generated at the LHC thus are undetectable at first.

3) Black holes are uncharged, so the faster majority pass right through the earth’s and the sun’s matter.

4) Only the slowest artificial ones — below 11 km/sec — will stay inside earth.

5) Inside matter, a resident black hole will not grow linearly but rather, via self-organization, form a so-called “miniquasar”: an electro-gravitational engine that grows exponentially, hence shrinking the earth to 2 cm in a few years’ time.

6) Since black holes are uncharged, charged elementary particles conversely can no longer be maximally small (“point-shaped”). Hence space is “bored open” in the small as predicted by the string and loop theories.

7) Therefore, the probability of black holes being generated by the LHC experiment is heavily boosted up to about 10 percent at the energy of 7 and (planned soon) 8 TeV.

8) This high probability was apparently not yet reached in 2010, since the originally planned cumulative luminosity was not achieved. But the higher-energetic second phase of proton collisions, scheduled to start in February 2011, is bound to reach that level.

9) Black holes produced in natural particle collisions (cosmic ray protons colliding with surface protons of celestial bodies including earth) are much too fast to get stuck inside matter and hence are innocuous.

10) The only exception is ultra-dense neutron stars. However, their super-fluid “core” is frictionless by virtue of quantum mechanics. Ultra-fast mini black holes that get stuck in the “crust” can grow there only to a limited weight before sinking into the core — where they stop growing. Hence the empirical persistence of neutron stars is NOT a safety guarantee as CERN claims.

MAIN QUESTION: Why do the CERN representatives disregard the above results? (Ten possible reasons)

1, The novelty of those results.

2, The limited dissemination of the above results. So far, only three pertinent papers have appeared in print, two in conference proceedings in July 2008 and one in an online science journal in 2010. CERN never quoted these results, which were sent to it first as preprints, in its “safety reports” (not updated for two and a half years). The more recent relevant results are still confined to the Internet.

3, The a priori improbability that several results stemming from independent areas of science would “conspire” to form a threat rather than cancel out in this respect. There seems to be no historical precedent for this.

4, The decades-long intervals between new results in general relativity make sure that new findings meet with maximum skepticism at first.

5, One finding, the unchargedness result (Ch in TeLeMaCh), dethrones a two-centuries-old physical law, that of charge conservation.

6, The fact that the large planetary community of string theorists suddenly holds an “almost too good” result in their hands paradoxically causes them to keep a low profile rather than triumph.

7, The waned spirit of progress in fundamental physics after its results too often proved to be “Greek gifts.”

8, The LHC experiment is the largest and most tightly knit collective scientific effort of history.

9, A fear of losing sponsors and political support for subsequent mega-projects if a potential safety gap is admitted.

10, The world-wide adoption of high-school type undergraduate curricula in place of the previous self-responsible style of studying, which has the side effect that collective authority acquires an undue weight.

SOCIETY’S FAILURE

Why has the “scientific safety conference,” publicly demanded on April 18, 2008, not been taken up by any grouping on the planet? Nothing but FALSIFICATION of the presented scientific results was and is being asked for. Falsification of a single one will wipe out the danger. A week of discussion might suffice to reach a consensus.

Neither politics nor the media have realized up until now that not a single visible scientist on the planet assumes responsibility for the alleged falsity of the results presented. In particular, no individual stands up to defend his disproved counterclaims (the number of specialists who entered the ring in the first place can be counted on one hand). This simple fact, that there is not a single open adversary, has escaped the attention of every media person and politician up until now.

Neither group dares confront a worldwide interest lobby, even though for once it is not money that is at stake but only borrowed authority. It is almost as if the grand old men of 20th-century science had left no successors, nor had the gifted philosophers and writers (I exempt Paul Virilio). Bringing oneself up to date on a given topic paradoxically seems impaired in the age of the Internet.

Thus there are no culprits? None except for myself, who wrongly thought that painful words (like “risk of planetocaust”) could have a wake-up effect at the last moment. The real reason for the delayed global awakening to the danger may lie with this communication error, made by someone who knows what it is to lose a child. In the second place, my personal friends Lorenz, von Weizsacker, Wheeler and DeWitt are no longer among us.

CONCLUSIONS

I therefore appeal to the above called-upon high legal and political bodies to rapidly rule that the long overdue scientific safety conference take place before the LHC experiment is allowed to resume in mid-February 2011, or, in the case of a delay of the conference beyond that date, to prohibit resumption of the experiment before the conference has taken place.

I reckon with the fact that I will make a terrible fool of myself if at long last a scientist succeeds in falsifying a single one of the above 10 scientific findings (or 4 theorems). This is my risk and my hope at the same time. I ask the world’s forgiveness for my insisting that my possibly deficient state of knowledge be set straight before the largest experiment of history can continue.

However, the youngest ship’s boy in the crow’s nest who believes he recognizes something on the horizon has the acknowledged duty to insist on his getting a hearing. I humbly ask the high bodies mentioned not to hold this fact against me and to rule in accordance with my proposition: First clarification, then continuation. Otherwise, it would be madness even if in retrospect it proved innocuous. Would it not?

Sincerely yours,

Otto E. Rössler, Chaos Researcher
2011/01/14
(For J.O.R.)

The UK’s Observer just put out a set of predictions for the next 25 years (20 predictions for the next 25 years). I will react to each of them individually. More generally, however, these are the kinds of ideas that get headlines, but they don’t constitute good journalism. Scenario planning should be used in all predictive coverage. It is, to me, the most honest way to admit not knowing, to document the uncertainties of the future, and the best way to examine big issues through different lenses. Some of these predictions may well come to pass, but many will not. What this article fails to do is inform the reader about the ways the predictions may vary from the best guess, what the possible alternatives may be, and where they simply don’t know.

1. Geopolitics: ‘Rivals will take greater risks against the US’

This is a pretty non-predictive prediction. America’s rivals are already challenging its monetary policy, human rights stances, shipping channels and trade policies. The article states that the US will remain the world’s major power. It does not suggest that globalization could fracture the world so much that regional powers huddle against the US in various places, essentially creating stagnation and a new localism that causes us to reinvent all economies. It also does not foresee anyone acting on water rights, food, energy or nuclear proliferation. Any of those could set off major conflicts that completely disrupt our economic and political models, leading to major resets in assumptions about the future.

2. The UK economy: ‘The popular revolt against bankers will become impossible to resist’

British banks will not fall without taking much of the world financial systems with them. I like the idea of the reinvention of financial systems, though I think it is far too early to predict their shape. Banking is a major force that will evolve in emergent ways. For scenario planners, the uncertainty is about the fate of existing financial systems. Planners would do well to imagine multiple ways the institution of banking will reshape itself, not prematurely bet on any one outcome.

3. Global development: ‘A vaccine will rid the world of AIDS’

We can only hope so. Investment is high, but AIDS is not the major cause of death in the world. Other infectious and parasitic diseases still outstrip HIV/AIDS by a large margin, while cardiovascular diseases and cancer even eclipse those. So it is great to predict the end of one disease, but the prediction seems rather arbitrary. I think it would be more advantageous to rate various research programs against potential outcomes over the next 25 years and look at the impact of curing those diseases on different parts of the world. If we tackle, for instance, HIV/AIDS and malaria and diarrheal diseases, what would that do to increase the safety of people in Africa and Asia? What would the economic and political ramifications be? We also have to consider the cost of the cure and the cost of its distribution. Low-cost solutions that can easily be distributed will have higher impact than higher-cost solutions that limit access (as we have with current HIV/AIDS treatments). I think we will see multiple breakthroughs over the next 25 years, and we would do well to imagine the implications of sets of those, not focus on just one.

4. Energy: ‘Returning to a world that relies on muscle power is not an option’

For futurists, any suggestion that the world moves in reverse is anathema. For scenario planners, we know that great powers have devolved over the last 2,000 years, and there is no reason that some political, technological or environmental issue might not arise that would cause our global reality to reset itself in significant ways. I think it is naïve to say we won’t return to muscle power. In fact, the failure to meet global demand for energy and food may actually move us toward a more local view of energy and food production, one that is less automated and scalable. One of the reasons we have predictions like this is that we haven’t yet envisioned a language for sustainable economics that allows people to talk about the world outside the bounds of industrial-age, scale-level terms. It may well be our penchant for holding on to industrial-age models that drives us to the brink. Rather than continuing to figure out how to scale up the world, perhaps we should be thinking about ways to slow it down, restructure it and create models that are sustainable over long periods of time. The green movement is just political window dressing for what is really a more fundamental need to seek sustainability in all aspects of life, and that starts with how we measure success.

5. Advertising: ‘All sorts of things will just be sold in plain packages’

This is just a sort of random prediction that doesn’t seem to mean anything if it happens. I’m not sure the state will control what is advertised, or that people will care how their stuff is packaged. In 4, above, I outline more important issues that would cause us to rethink our overall consumer mentality. If that happens, we may well see a world where advertising is irrelevant, completely irrelevant. Let’s see how Madison Avenue plans for its demise (or its new role…) in a sustainable knowledge economy.

6. Neuroscience: ‘We’ll be able to plug information streams directly into the cortex’

This is already possible on a small scale. We have seen hardware interfaces with bugs and birds. The question is: will it be a novelty, will it be a major medical tool, will it be commonplace and accessible, or will it be seen as dangerous and be shunned by citizen regulators worried about giving up their humanity and banned by governments who can’t imagine governing the overly connected? Just because we can doesn’t mean we will or we should. I certainly think we may see a singularity over the next 25 years in hardware, where machines can match human computational power, but I think software will greatly lag hardware. We may be able to connect, but we will do so only at rudimentary levels. On the other hand, a new paradigm for software could evolve that would let machines match us thought for thought. I put that in the black swan category. I am on constant watch for a software genius who will make Gates and Zuckerberg look like quaint 18th-century industrialists. The next revolution in software will come from a few potential paths; here are two: removal of the barriers to entry that the software industry has created and a return to more accessible computing for the masses (where they develop applications, not just consume content), or a breakthrough in distributed, parallel processing that evolves the ability to match human pattern recognition capabilities, even if the approach appears alien to its inventors. We will have a true artificial intelligence only when we no longer understand the engineering behind its abilities.

7. Physics: ‘Within a decade, we’ll know what dark matter is’

Maybe, but we may also find that dark matter, like the “ether”, is just a conceptual plug-in for an incomplete model of the universe. I guess saying that it is a conceptual plug-in for an incomplete model would itself be an explanation of what it is, so this is one of those predictions that can’t lose. Another perspective: dark matter matters, and not only do we come to understand what it is, but also what it means, and it changes our fundamental view of physics in a way that helps us look at matter and energy through a new lens, one that may help fuel a revolution in energy production and consumption.

8. Food: ‘Russia will become a global food superpower’

Really? Well, this presumes some commercial normality for Russia along with maintaining its risk-taking propensity to remove the safeties from technology. If Russia becomes politically stable and economically safe (you can go there without fear for your personal or economic life), then perhaps. I think, however, that this prediction is too finite and pointed. We could well see the US, China (or other parts of Asia) or even a terraformed Africa become the major food supplier; biotechnology, perhaps, or new forms of distributed farming, are also possible. The answer may not be hub-and-spoke, but distributed. We may find our own center locally as the costs of moving food around the world outweigh the industrialization efficiency of its production. It may prove healthier and more efficient to forgo the abundant variety we have become accustomed to (in some parts of the world) and see food again as nutrition, and share the lessons of efficient local production with the increasingly water-starved world.

9. Nanotechnology: ‘Privacy will be a quaint obsession’

I don’t get the link between nanotechnology and privacy. It is mentioned once in the narrative, but not in an explanatory way. As a purely hardware technology, it will threaten health (nano-pollutants) and improve health (cellular-level, molecular-level repairs). The bigger issue with nanotechnology is its computational model. If nanotechnology includes the procreation and evolution of artificial things, then we are faced with the difficult challenge of trying to imagine how something will evolve that we have never seen before, and that has never existed in nature. The interplay between nature and nanotechnology will be fascinating and perhaps frightening. Our privacy may be challenged by culture and by software, but I seriously doubt that nanotechnology will be the key to decrypting our banking system (though it could play a role). Nanotechnology is more likely to be a black swan full of surprises that we can’t even begin to imagine today.

10. Gaming: ‘We’ll play games to solve problems’

This one is easy. Of course. We always have and we always will. Problem solutions are games to those who find passion in different problem sets. The difference between a game and a chore is perspective, not the task itself. For a mathematician, solving a quadratic equation is a game. For a literature major, that same equation may be seen as a chore. Taken to the next level, gaming may become a new way to engage with work. We often engineer fun out of work, and that is a shame. We should engineer work experiences to include fun as part of the experience (see my new book, Management by Design), and I don’t mean morale events. If you don’t enjoy your “work” then you will be dissatisfied no matter how much you are paid. Thinking about work as a game, as Ender (Ender’s Game, Orson Scott Card) did, changes the relationship between work and life. Ender, however, found out that when you get too far removed from reality, you may find moral compasses misaligned.

11. Web/internet: ‘Quantum computing is the future’

Quantum computing, like nanotechnology, will change fundamental rules, so it is hard to predict its outcomes. We will do better to closely monitor developments than to spend time overspeculating on outcomes that are probably unimaginable. It is better to accept that there are things in the future that are unimaginable now, and to practice how to deal with the unimaginable as an idea, than to frustrate ourselves by trying to predict those outcomes. Imagine wicked fast computers; it doesn’t really matter if they are quantum or not. Imagine machines that can decrypt anything really quickly using traditional methods, and that create new encryptions that they can’t solve themselves.

On a more mundane note in this article, the issues of net neutrality may play out so that those who pay more get more, though I suspect that will be uneven and change at the whim of politics. What I find curious is that this prediction says nothing about the alternative Internet (see my post Pirates Pine for Alternative Internet on Internet Evolution). I think we should also plan for very different information models and more data-centric interaction; in other words, we may find ourselves talking to data rather than servers in the future.

I’m not sure the next Internet will come from Waterloo, Ontario and its physicists, but from acts of random assertions by smart, tech-savvy idealists who want to take back our intellectual backbone from advertisers and cable companies.

One black swan this prediction fails to account for is the possibility of a loss of trust in the Internet altogether if it is hacked or otherwise challenged (by a virus, or made unstable by an attack on power grids or network routers). Cloud computing is based on trust. Microsoft and Google recently touted the uptime of their business offerings (Microsoft: BPOS Components Average 99.9-plus Percent Uptime). If some nefarious group takes that as a challenge (or sees the integrity of banking transactions as a challenge), we could see widespread distrust of the Net and the Cloud and a rapid return to closed, proprietary, non-homogeneous systems that confound hackers by their variety as much as they confound those who operate them.

12. Fashion: ‘Technology creates smarter clothes’

A model on the catwalk during the Gareth Pugh show at London Fashion Week in 2008. Photograph: Leon Neal/AFP/Getty Images

Smarter perhaps, but judging from the picture above, not necessarily fashion forward. I think we will see technology integrated with what we wear, and I think smart materials will also redefine other aspects of our lives and create a new manufacturing industry, even in places where manufacturing has been displaced. In the US, for instance, smart materials will not require retrofitting legacy manufacturing facilities, but will require the creation of entirely new facilities that can be created with design and sustainability in mind from the onset. However, smart clothes, other uses of smart materials and personal technology integration all require a continued positive connection between people and technology. That connection looks positive, but we may be blind to technology push-backs, even rebellions, fostered in current events like the jobless recovery.

13. Nature: ‘We’ll redefine the wild’

I like this one and think it is inevitable, but I also think it is a rather easy prediction to make. It is less easy to see all the ways nature could be redefined. Professor Mace predicts managed protected areas and a continued loss of biodiversity. I think we are at a transition point, and 25 years isn’t enough time to see its conclusion. The rapid influx of “invasive” species among indigenous species creates not just displacement, but offers an opportunity for the recreation of environments (read: evolution). We have to remember that historically the areas we are trying to protect were very different in the past than they are in our rather short collective memories. We are trying to protect a moment in history for human nostalgia. The changes in the environment presage other changes that may well take place after we have gone. Come to Earth 1,000 years from now and we may be hard pressed to find anything that is as we experience it today. The general landscape may appear the same at the highest level of fractal magnification, but zoom in and you will find the details have shifted as much as the forests of Europe or the nesting grounds of the Dodo bird have changed over the last 1,000 years.

14. Architecture: What constitutes a ‘city’ will change

I like this prediction because it runs the gamut from distribution of power to returning to caves. It actually represents the idea using scenario thinking. I will keep this brief because Rowan Moore gets it when he writes: “To be optimistic, the human genius for inventing social structures will mean that new forms of settlement we can’t quite imagine will begin to emerge.”

15. Sport: ‘Broadcasts will use holograms’

I guess in a sustainable knowledge economy we will still have sport. I hope we figure out how to monitor the progress of our favorite teams without the creation and collection of non-biodegradable artifacts like Styrofoam number one hands and collectable beverage containers.

As for sport itself, it will be an early adopter of any new broadcast technology. I’m not sure holograms in their traditional sense will be one, however. I’m guessing we figure out 3-D with a lot less technology than holograms require.

I challenge Mr. Lee’s statements on the acceptance of performance-enhancing drugs: “I don’t think we’ll see acceptance as the trend has been towards zero tolerance and long may it remain so.” I think it is just as likely that we start seeing performance enhancement as OK, given the wide proliferation of AD/HD drugs being prescribed, as well as those being used off label for mental enhancement—not to mention the accepted use of drugs by the military (see Troops need to remember, New Scientist, 09 December 2010). I think we may well see an asterisk in the record books a decade or so from now that says, “at this point we realized sport was entertainment, and allowed the use of drugs, prosthetics and other enhancements that increased performance and entertainment value.”

16. Transport: ‘There will be more automated cars’

Yes, if we still have cars, they will likely be more automated. And in a decade, we will likely still see cars, but we may be at the transition point for the adoption of a sustainable knowledge economy where cars start to look arcane. We will see continued tension between the old industrial sectors typified by automobile manufacturers and oil exploration and refining companies, and the technology and healthcare firms that see value and profits in more local ways of staying connected and ways to move that don’t involve internal combustion engines (or electric ones for that matter).

17. Health: ‘We’ll feel less healthy’

Maybe, as Mulgan points out, healthcare isn’t radical, but people can be radical. These uncertainties around health could come down to personal choice. We may find millions of excuses for not taking care of ourselves and then place the burden of our unhealthy lifestyles at the feet of the public sector, or we may figure out that we are part of the sustainable equation as well. The latter would transform healthcare. Some of the arguments above, about distribution and localism, may also challenge the monolithic hospitals to become more distributed, as we are seeing with the rise of community-based clinics in the US and Europe. Management of healthcare may remain centralized, but delivery may be more decentralized. Of course, if economies continue to teeter, the state will assert itself and keep everything close and in as few buildings as possible.

As for electronic records, it will be the value to the end user that drives adoption. As soon as patients believe they need an electronic healthcare record as much as they need a little blue pill, we will see the adoption of the healthcare record. Until then, let the professionals do whatever they need to do to service me—the less I know the better. In a sustainable knowledge economy though, I will run my own analytics and use the results to inform my choices and actions. Perhaps we need healthcare analytics companies to start advertising to consumers as much as pharmaceutical companies currently do.

18. Religion: ‘Secularists will flatter to deceive’

I think religion may well see traditions fall, new forms emerge and fundamentalists dig in their heels. Religion offers social benefits that will be augmented by social media; religion acts as a pervasive and public filter for certain beliefs and cultural norms in a way that other associations do not. Over the next 25 years many of the more progressive religious movements may tap into their social side and reinvent themselves around association of people rather than affiliation with tenets of faith. If, however, any of the dire scenarios come to pass, look for state-asserted use of religion to increase, and for a rising tide of fundamentalism as people try to hold on to what they can of the old way of doing things.

19. Theatre: ‘Cuts could force a new political fringe’

Theatre has always had an edge, and any new fringe movement is likely to find its manifestation in art, be it theatre, song, poetry or painting. I would have preferred that the idea of art be taken up as a prediction rather than theatre in isolation. If we continue to automate and displace workers, we will need to reassess our general abandonment of the arts as a way of making a living, because creation will be the one thing that can’t be automated. We will need to find ways to pay people for human endeavors, everything from teaching to writing poetry. The fringe may turn out to be the way people stay engaged.

20 Storytelling: ‘Eventually there’ll be a Twitter classic’

Stories are already ubiquitous. We live in stories. Technology has changed our narrative form, not our longing for a narrative. The Twitter stream is a narrative channel. I would not, however, anticipate a “Twitter classic” because a classic suggests the idea of something lasting. For a “Twitter classic” to occur, the 140-character phrases would need to be extracted from their medium and held someplace beyond the context in which they were created, which would make Twitter just another version of the typewriter or word processor. Either that, or Twitter figures out a better mode for persistent retrieval of tweets with associated metadata; in other words, you could query the story out of the Twitter-verse, which is very technically possible (and may make for some collaborative branching as well). But in the end, Twitter is just a repository for writing, just one of many, which doesn’t make this prediction all that concept-shattering.

This post is long enough, so I won’t start listing all of the areas the Guardian failed to tackle, or its internal lack of categorical consistency (e.g., Theatre and storytelling are two sides of the same idea). I hope these observations help you engage more deeply with these ideas and with the future more generally, but most importantly, I hope they help you think about navigating the next 25 years, not relying on prescience from people with no more insight than you and I. The trick with the future is to be nimble, not to be right.


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second-coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

It’s difficult to parse either eventuality with observant members of the other’s belief system. If you ask a group of technophiles what they think of the idea of the rapture, you will likely be laughed at or drowned in a tidal wave of atheist drool. The very thought of some magical force eviscerating an entire religious population in one eschatological fell swoop might be too much for some science and tech geeks, and medical attention, or at the very least a warehouse-quantity dose of smelling salts, might be in order.

Conversely, to the religiously observant, the notion of the singularity might exist in terms too technical to even theoretically digest, or it might represent something entirely dark or sinister that seems to fulfill their own belief system’s end game, a kind of techno-holocaust that reifies their purported faith.

The objective reality of both scenarios will be very different from either envisioned teleology. Reality’s shades of gray have a way of making foolish even the wisest individual’s predictions.

In my personal life, I too believed that the publication of my latest and most ambitious work, explaining the decidedly broad-scope Parent Star Theory, would constitute an end result of significant consequence, much like the popular narrative surrounding the moment of the singularity: that some great finish line had been reached. The truth, however, is that just like the singularity, my own narrative-ized moment was not a precisely secured end, but a distinct moment of beginning, of conception and commitment. Not an arrival but a departure; a bold embarkation without a clear end in sight.

Rather than answers, the coming singularity should provoke additional questions. How do we proceed? Where do we go from here? If the fundamental rules in the calculus of the human equation are changing, then how must we adapt? If the next stage of humanity exists on a post-scarcity planet, what then will be our larger goals, our new quest as a global human force?

Humanity must recognize that the idea of a narrative is indeed useful, so long as that narrative maintains some aspect of open-endedness. We might well need that consequential beginning-middle-end, if only to be reminded that each end most often leads to a new beginning.

Written by Zachary Urbina, Founder, Cozy Dark

Transhumanists are into improvements, and many talk about specific problems, for instance Nick Bostrom. However, Bostrom’s problem statements have been criticized for not necessarily being problems, and I think largely this is why one must consider the problem definition (see step #2 below).

Sometimes people talk about their “solutions” for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but a lot of what you find will be undefined (or not very well defined) problem ideas and solutions.

These proposed solutions often do not attempt to find root causes, or they assume the wrong root cause. And finding a realistic, complete plan for solving a problem is rare.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically, but also socioculturally. Some problems are more than just “a” problem; they are a dynamic system of problems, and a problem-solving process by itself is not enough. There has to be management, goals, etc., most of which is outside the scope of this article. But one should first know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team, however even a group of transhumanists out in the wilds of cyberspace will have to defend themselves when people ask, “Why should we care?”

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s (a minimal sketch of capturing these follows the list):

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?
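As a minimal sketch of capturing the 5 W’s and 2 H’s in one place, with hypothetical answers echoing the dropped-robot story above:

```python
# Minimal sketch: a problem definition recorded as a 5W2H checklist.
# The field values are hypothetical, based on the dropped-robot example above.
problem_definition = {
    "who":      "One customer (a hypothetical aerial survey group)",
    "what":     "Ground robot destroyed after being dropped from a plane",
    "when":     "Single incident, reported last week",
    "where":    "Customer field site; not reproduced in-house",
    "why":      "Initial understanding: product used outside its intended purpose",
    "how":      "Robot was carried aloft and released; it was never designed to fly",
    "how_many": "1 unit affected; no similar reports from other customers",
}

for field, answer in problem_definition.items():
    print(f"{field:>8}: {answer}")
```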

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally in the real world events are causal. The point of root cause analysis is to trace the causes all the way back for your problem. If you don’t find the origin of the causes, then the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet most important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must always keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Why’s, in which you move down the “ladder of abstraction” by continually asking: “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again until you’ve bottomed out with something that may be a true root cause.

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa: Use an Ishikawa (aka cause-and-effect, or “fishbone”) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first (a sketch follows below).
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.
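To make the Pareto analysis item above concrete, here is a minimal sketch, assuming matplotlib and NumPy; the cause names and counts are made up for illustration:

```python
# Minimal sketch: a Pareto chart ranks causes by frequency and overlays the
# cumulative percentage, suggesting which cause to attack first.
# Assumes matplotlib and NumPy; the data below is invented.
import numpy as np
import matplotlib.pyplot as plt

causes = ["Misuse", "Firmware bug", "Loose connector", "Bad battery", "Other"]
counts = np.array([42, 23, 11, 6, 3])

order = np.argsort(counts)[::-1]                      # sort causes by descending count
causes = [causes[i] for i in order]
counts = counts[order]
cumulative = np.cumsum(counts) / counts.sum() * 100   # cumulative percentage

fig, ax = plt.subplots()
ax.bar(causes, counts)                                # frequency bars
ax.set_ylabel("Count")

ax2 = ax.twinx()                                      # second axis for the cumulative line
ax2.plot(causes, cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)

ax.set_title("Pareto chart of hypothetical defect causes")
fig.tight_layout()
fig.savefig("pareto.png")
```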

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that address the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate/test the solution, if possible. There might be unaccounted for side effects either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing it isn’t enough; the team also has to check whether the solution is really working.

For some issues the verification is clear-cut. Other corrective actions have to be evaluated for effectiveness, for instance against some benchmark. Depending on the time scale of the corrective action, the team might need to add various monitors and/or controls to continually make sure the root cause stays squashed.

7. Prevent Recurrence

It’s possible that a process will revert to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should provide the organization or environment with improvements to processes, procedures, practices, etc. so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.

Image credits:
1. Inception (2010), Warner Bros.
2. Peter Galvin
3. Tom Parnell
4. shalawesome

I want self knowledge. It’s part of what I do in life. For me it isn’t work, it’s love, but by the same token, it isn’t for everybody, nor should it be. There’s no money in it, not everyone feels passionate about it, not everyone has the aptitude, many are turned off by introspection, considering it a waste of time and many don’t believe in ‘that sort of thing.’ Well, I enjoy educating myself, and I get part of my ongoing education and a sense of satisfaction from ‘that sort of thing’ that also harmonizes with my supporting the work of the Lifeboat Foundation.

At the same time I’m aware of a certain ‘unconscious’ role that I forged in my early life crucible so as to get me what I wanted at a time when my thinking and my ‘worldview’ were primitive to say the least. What might anyone ‘want’ in such a situation? Imagine. Using whatever genetic and epigenetic equipment entered this life with me, I interacted in complexity with the other participants in the crucible, emerging as … what? Here lie the origins of becoming liberated or not, according to psychodynamic thinking.

Notice how hard it is to get rid of that ‘I.’ I wish I knew more about my ‘I.’

Well, enough of that, so for now, in one way or another I resolved my early life core dilemma in a way that left a pattern. A role in a drama learned early on in life endures. It endures, firstly because certain psycho-biological infrastructure is embedded in various functions of ‘me’ and secondly because my drama serves a purpose for me. If I didn’t use it, it would fade away in disuse. I value it. Simplistically said, if I ‘succeed’ it’s because I’m superior, if I ‘fail’ it’s because I’m misunderstood. A hero in a world of fools. My drama is my treasure, I’ll resist if someone tries to persuade or coerce me to let go of my treasure, and if I imagine it’s the only tool I have, I can’t imagine life without it. Who said that life was rational?

Then suddenly one day I’m an adult (nothing to do with chronological age) and, yes, it can happen suddenly, and I see my treasure as a load on my back, a burden, a fantasy born in fantasy. So why not dump it? But what about…? What if…? It’s still hard to imagine life without it. Well, maybe I can bargain. Maybe I’ll undertake a program of ‘reeducation’ or therapy in which I’ll hear what I want to hear, then I can have my cake and eat it.

And I’ll continue blaming others or circumstances for what I don’t like about life. Right back in crucible mode.

Hunched long term in my crucible while knowing better, I’ll not only dislike myself but I’ll also feel guilty, and feeling guilty I’ll escalate until eventually I’ll hate myself. And you know what? Out of self-hate come the ‘isms.’ What is racism but self-loathing projected!

Why is all this important? Why should anyone care? To some it may sound like a lot of navel gazing anyway. Well, folks, listen to this: What if secret desires conflict significantly with a socially adopted role? There lie the ingredients of a psychological ‘double life.’ And you know what, it manifests. I might just sabotage my own efforts to get what I want in my life drama. Or what if I find myself in an impossible situation that I hate and I want out? I can always create a scene. And what about espionage, industrial spying, political spying? How about hacking? How about the destructive use of technology yet to be developed? Well, is the life drama important, or isn’t it?

There’s more: It’s just possible that wars in the air, on land, and at sea originate in battles first fought in early life crucibles. The war within becoming the war without.

Without fear there can be no courage, to paraphrase Eddie Rickenbacker, the great American flying hero, a man who happened to know something about fear and also about courage.

To be sure, in the never-ending search for truth there is and there probably cannot be any rigid ritualized method. We don’t have a unified theory of the human condition, and bottom line when I examine mind with mind, I find plenty of mystery to tickle my sense of wonderment.

Not to forget the ‘I.’ I wonder where the ‘I’ comes from. What can it be?

In conclusion, the only advice I can give is to myself, the only life role I can identify is my own, and only I can come to grips with the egocentricity that is my own life drama …

It’s a beginning.

I’ve been an entrepreneur most of my adult life. Recently, on a long business flight, I began thinking about what it takes to become successful as an entrepreneur — and how I would even define the meaning of “success” itself. The two ideas became more intertwined in my thinking: success as an entrepreneur, entrepreneurial success. I’ve given a lot of talks over the years on the subject of entrepreneurship. The first thing I find I have to do is to dispel the persistent myth that entrepreneurial success is all about innovative thinking and breakthrough ideas. I’ve found that entrepreneurial success usually comes through great execution, simply by doing a superior job of the blocking and tackling.

But what else does it take to succeed as an entrepreneur — and how should an entrepreneur define success?

Bored with the long flight, sinking deeper into my own thoughts, I wrote down my own answers.

Here’s what I came up with, a “Top Ten List” if you will:

10. You must be passionate about what you are trying to achieve. That means you’re willing to sacrifice a large part of your waking hours to the idea you’ve come up with. Passion will ignite the same intensity in the others who join you as you build a team to succeed in this endeavor. And with passion, both your team and your customers are more likely to truly believe in what you are trying to do.

9. Great entrepreneurs focus intensely on an opportunity where others see nothing. This focus and intensity helps to eliminate wasted effort and distractions. Most companies die from indigestion rather than starvation, i.e., companies suffer from doing too many things at the same time rather than doing too few things very well. Stay focused on the mission.

8. Success only comes from hard work. We all know that there is no such thing as overnight success. Behind every overnight success lies years of hard work and sweat. People with luck will tell you there’s no easy way to achieve success — and that luck comes to those who work hard. Successful entrepreneurs always give 100% of their efforts to everything they do. If you know you are giving your best effort, you’ll never have any reason for regrets. Focus on things you can control; stay focused on your efforts and let the results be what they will be.

7. The road to success is going to be long, so remember to enjoy the journey. Everyone will teach you to focus on goals, but successful people focus on the journey and celebrate the milestones along the way. Is it worth spending a large part of your life trying to reach the destination if you didn’t enjoy the journey along the way? Won’t the team you attract to join you on your mission also enjoy the journey more? Wouldn’t it be better for all of you to have the time of your life during the journey, even if the destination is never reached?

6. Trust your gut instinct more than any spreadsheet. There are too many variables in the real world that you simply can’t put into a spreadsheet. Spreadsheets spit out results from your inexact assumptions and give you a false sense of security. In most cases, your heart and gut are still your best guide. The human brain works as a binary computer and can only analyze exact information based on zeros and ones (or black and white). Our heart is more like a chemical computer that uses fuzzy logic to analyze information that can’t be easily defined in zeros and ones. We’ve all had experiences in business where our heart told us something was wrong while our brain was still trying to use logic to figure it all out. Sometimes a faint voice based on instinct resonates far more strongly than overpowering logic.

5. Be flexible but persistent — every entrepreneur has to be agile in order to perform. You have to continually learn and adapt as new information becomes available. At the same time you have to remain persistent to the cause and mission of your enterprise. That’s where that faint voice becomes so important, especially when it is giving you early warning signals that things are going off-track. Successful entrepreneurs find the balance between listening to that voice and staying persistent in driving for success — because sometimes success is waiting right across from the transitional bump that’s disguised as failure.

4. Rely on your team — It’s a simple fact: no individual can be good at everything. Everyone needs people around them who have complementary sets of skills. Entrepreneurs are an optimistic bunch of people and it’s very hard for them to believe that they are not good at certain things. It takes a lot of soul searching to find your own core skills and strengths. After that, find the smartest people you can who complement your strengths. It’s easy to get attracted to people who are like you; the trick is to find people who are not like you but who are good at what they do — and at what you can’t do.

3. Execution, execution, execution — unless you are the smartest person on earth (and who is?), it’s likely that many others have thought about doing the same thing you’re trying to do. Success doesn’t necessarily come from breakthrough innovation but from flawless execution. A great strategy alone won’t win a game or a battle; the win comes from basic blocking and tackling. All of us have seen entrepreneurs who waste too much time writing business plans and preparing PowerPoints. I believe that a business plan is too long if it’s more than one page. Besides, things never turn out exactly the way you envisioned them. No matter how much time you spend perfecting the plan, you still have to adapt according to the ground realities. You’re going to learn a lot more useful information from taking action than from hypothesizing. Remember — stay flexible and adapt as new information becomes available.

2. I can’t imagine anyone ever achieving long-term success without having honesty and integrity. These two qualities need to be at the core of everything we do. Everybody has a conscience — but too many people stop listening to it. There is always that faint voice that warns you when you are not being completely honest or even slightly off track from the path of integrity. Be sure to listen to that voice.

1. Success is a long journey and much more rewarding if you give back. By the time you get to success, lots of people will have helped you along the way. You’ll learn, as I have, that you rarely get a chance to help the people who helped you because in most cases, you don’t even know who they were. The only way to pay back the debts we owe is to help people we can help — and hope they will go on to help more people. When we are successful, we draw so much from the community and society that we live in that we should think in terms of how we can help others in return. Sometimes it’s just a matter of being kind to people. Other times, offering a sympathetic ear or a kind word is all that’s needed. It’s our responsibility to do “good” with the resources we have available.

Measuring Success — Hopefully, you have internalized the secrets of becoming a successful entrepreneur. The next question you are likely to ask yourself is: How do we measure success? Success, of course, is very personal; there is no universal way of measuring success. What do successful people like Bill Gates and Mother Teresa have in common? On the surface it’s hard to find anything they share — and yet both are successful. I personally believe the real metric of success isn’t the size of your bank account. It’s the number of lives in which you might be able to make a positive difference. This is the measure of success we need to apply while we are on our journey to success.

Naveen Jain is a philanthropist, entrepreneur and technology pioneer. He is a founder and CEO of Intelius, a Seattle-based company that empowers consumers with information to make intelligent decisions about personal safety and security. Prior to Intelius, Naveen Jain founded InfoSpace and took it public in 1998 on NASDAQ. Naveen Jain has been awarded many honors for his entrepreneurial successes and leadership skills including “Ernst & Young Entrepreneur of the Year”, “Albert Einstein Technology Medal” for pioneers in technology, “Top 20 Entrepreneurs” by Red Herring, “Six People Who Will Change the Internet” by Information Week, among other honors.

My generation was the last one to learn to use a slide rule in school. Today that skill is totally obsolete. So is the ability to identify the Soviet Socialist Republics on a map, the ability to write a program in FORTRAN, or the ability to drive a car with a standard transmission.

We live in a world of instant access to information and where technology is making exponential advances in synthetic biology, nanotechnology, genetics, robotics, neuroscience and artificial intelligence. In this world, we should not be focused on improving the classrooms but should be devoting resources to improving the brains that the students bring to that classroom.

To prepare students for this high-velocity, high-technology world the most valuable skill we can teach them is to be better learners so they can leap from one technological wave to the next. That means education should not be about modifying the core curricula of our schools but should be about building better learners by enhancing each student’s neural capacities and motivation for life-long learning.

Less than two decades ago this concept would have been inconceivable. We used to think that brain anatomy (and hence learning capacity) was fixed at birth. But recent breakthroughs in the neuroscience of learning have demonstrated that this view is fundamentally wrong.

In the past few decades, neuroscience research has demonstrated that, contrary to popular belief, the brain is not static. Rather, it is highly modifiable (“plastic”) throughout life, and this remarkable “neuroplasticity” is primarily experience-dependent. Neuroplasticity research shows that the brain changes its very structure with each different activity it performs, perfecting its circuits so it is better suited to the task at hand. Neurological capacities and competencies are both measurable and significantly consequential to educational outcomes.

This means that the neural capacities that form the building blocks for learning — attention & focus, memory, prediction & modeling, processing speed, spatial skills, and executive functioning — can be improved throughout life through training. Just as physical exercise is a well-known and well-accepted means to improve health for anyone, regardless of age or background, so too can the brain be put “into shape” for optimal learning.

If any of these neural capacities are enhanced, you will see significant improvements in a person’s ability to understand and master new situations.

While these basic neural capacities are well known by scientists and clinicians today, they are rarely used to develop students into better learners by schools, teachers or parents. There is too little awareness and too few tools available for enhancing a student’s capacity and ability to learn. The failure to focus on optimizing each student’s neural capacities for learning is resulting in widespread failure of the educational systems, particularly for the underprivileged.

Gone are the days when you could equip students with slide rules and a core of knowledge and skills and expect them to achieve greatness. Our children already inhabit a world where new game platforms and killer apps appear and are surpassed in dizzying profusion and speed. They are already adapting to the dynamics of the 21st century. But we can help them adapt more methodically and systematically by focusing our attention on improving their capacity to learn throughout their lives.

This far-reaching and potentially revolutionary conclusion is based on recent research breakthroughs and thus may be contrary to the past beliefs of many teachers, administrators, parents and students, who have historically emphasized classroom size and curriculum as the key to improved learning.

Just as new knowledge and understanding is revolutionizing the way we communicate, trade, or practice medicine so too must it transform the way we learn. For students, that revolution is already well under way but it’s happening outside of their schools. We owe it to them to equip them with all the capabilities they’ll need to thrive in the limitless world beyond the classroom.

I believe that while it’s important to leave a better country for our children, it’s more important that we leave better children for our country.


Some countries pose a threat as possible sources of global risk. First of all, we are talking about countries that have developed but poorly controlled military programs, as well as a specific motivation that drives them to create a Doomsday weapon. Usually it is a country under threat of attack and total conquest, and one whose system of control rests on a kind of irrational ideology.

The most striking example of such a global risk is North Korea’s effort to weaponize avian influenza (North Korea trying to weaponize bird flu: http://www.worldnetdaily.com/news/article.asp?ARTICLE_ID=50093), which may lead to the creation of a virus capable of destroying most of Earth’s population.

It is not really important which comes first: irrational ideology, increased secrecy, an excess of military research, or a real threat of external aggression. Usually, all these causes go hand in hand.

The result is the appearance of conditions for creating the most exotic defenses. In addition, an excess of military scientists and equipment allows individual scientists to become, for example, bioterrorists. The high level of secrecy means that the state as a whole does not know what is being done in some of its labs.

In addition, such dangerous programs might be hiding within the military-industrial complexes of seemingly prosperous and democratic countries. This list surely does not include the poorest countries (like Mali) or small rich countries (like Denmark).

Countries falling into this category include:

  • Well-known rogue nations: N. Korea, Iran, Pakistan.
  • “Superpowers”: China, Russia, and the U.S.
  • Weak rogue nations: Burma, Syria, and other poor countries.
  • In addition, a paranoid military program could exist inside outwardly prosperous countries: Japan, Taiwan, Switzerland.

This list does not include countries with developed but rational and well-controlled military programs: Israel, Britain, France, etc. Nor does it include countries that only buy weapons, such as Saudi Arabia.

There is the idea of launching preemptive strikes against rogue countries, with the goal of regime change in all of them and, through this, the creation of a safer world. This concept can be called the “Rumsfeld Doctrine”. Under this doctrine, military operations were conducted in Afghanistan and especially in Iraq at the beginning of the 21st century. However, the doctrine has collapsed, as weapons of mass destruction were not found in Iraq. More powerful adversaries such as Iran and North Korea were too strong for the United States and, moreover, they intensified their development of weapons of mass destruction.

A strike itself could provoke a rogue nation to use weapons it has already created, for example biological weapons. Or control over a bioweapon could be lost when the buildings where it is stored are destroyed. Furthermore, the chaos could lead to attempts to sell such weapons. However, the longer the solution of the problem is delayed, the more time these countries will have to enrich, to accumulate, and to grow.

My opinion is that at the current stage of history we should not attempt to bomb all potential rogue nations. But in principle it is a good thing that the regime of Saddam Hussein was toppled, as it is unknown what it would be doing now if it still existed. We should wait for the near-singularity era, when one country, through the development of nanotech and/or AI, will be in a position to disarm its potential enemies as painlessly as possible, since it will be thousands of times stronger than they are. The exponential acceleration of progress will cause the gap between the strongest and the laggards to keep growing, and a lead of the usual 2–10 years will become equivalent to a lead of centuries.