
The Global Brain and its role in Human Immortality

It would be helpful to discuss these theoretical concepts because they could have significant practical and existential implications.

The Global Brain (GB) is an emergent world-wide entity of distributed intelligence, facilitated by communication and the meaningful interconnections between millions of humans via technology (such as the internet).

For my purposes I take it to mean the expressive integration of all (or the majority) of human brains through technology and communication, a Metasystem Transition from the human brain to a global (Earth) brain. The GB is truly global not only in geographical terms but also in function.

It has been suggested that the GB has clear analogies with the human brain. For example, the basic unit of the human brain (HB) is the neuron, whereas the basic unit of the GB is the human brain. Whilst the HB is space-restricted within our cranium, the GB is constrained within this planet. The HB contains several regions that have specific functions themselves, but are also connected to the whole (e.g. occipital cortex for vision, temporal cortex for auditory function, thalamus etc.). The GB contains several regions that have specific functions themselves, but are connected to the whole (e.g. search engines, governments, etc.).

Some specific analogies are:

1. The Broca’s area in the inferior frontal gyrus, associated with speech. This could be the equivalent of, say, Rupert Murdoch’s communication empire.
2. The motor cortex is the equivalent of the world-wide railway system.
3. The sensory system in the brain is the equivalent of all digital sensors, CCTV network, internet uploading facilities etc.

If we accept that the GB will eventually become fully operational (and this may happen within the next 40–50 years), then there could be severe repercussions for human evolution. Apart from the fact that we may be able to change our genetic make-up using technology (through synthetic biology or nanotechnology, for example), there could be new evolutionary pressures that help extend the human lifespan to an indefinite degree.

Empirically, we find that there is a basic underlying law that allows neurons the same lifespan as their human host. If natural laws are universal, then I would expect the same law to operate in similar metasystems, i.e. in my analogy with humans being the basic operating units of the GB. In that case, I ask:

If there is an axiom positing that individual units (neurons) within a brain must live as long as the brain itself, i.e. 100–120 years, then the individual units (human brains and, therefore, whole humans) within a GB must live as long as the GB itself, i.e. indefinitely.

Humans will become so embedded and integrated into the GB’s virtual and real structures, that it may make more sense from the allocation of resources point of view, to maintain existing humans indefinitely, rather than eliminate them through ageing and create new ones, who would then need extra resources in order to re-integrate themselves into the GB.

The net result will be that humans will start experiencing an unprecedented prolongation of their lifespan, in an attempt by the GB to evolve to higher levels of complexity at a low thermodynamic cost.

Marios Kyriazis
http://www.elpistheory.info

GC Lingua Franca(s)

This is an email to the Linux kernel mailing list, but it relates to futurism topics so I post a copy here as well.
———
Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the LKML;

If we were already talking to our computers, etc. as we should be, I wouldn’t feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. This army your kernel enables has millions of people, but they often lose to smaller proprietary armies, because they are working inefficiently. My mail one year ago (http://keithcu.com/wordpress/?p=272) listed the biggest work items, but I realize now I should have focused on one. In a sentence, I have discovered that we need GC lingua franca(s). (http://www.merriam-webster.com/dictionary/lingua%20franca)

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM’s Jeopardy-playing Watson is proprietary, like Deep Blue was. This topic is not discussed in any of the news articles, as if the license does not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows, scribbled secrets clutched in their fists, working together, for any of them to succeed. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers. Windows is not the biggest problem; it is the proprietary licensing model that has infected computing, and science.

There is, unsurprisingly, a consensus among kernel programmers that usermode is “a mess” today, which suggests there is a flaw in the Linux desktop programming paradigm. Consider the vast cosmic expanse of XML libraries in a Linux distribution. Like computer vision (http://www.cs.cmu.edu/~cil/v-source.html), there are not yet clear places for knowledge to accumulate. It is a shame that the kernel is so far ahead of most of the rest of user mode.

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++ down to the matrix class. Because C/C++ didn’t define a matrix, nor provide code, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O, and graphics, than to integrate OpenCV. Getting productive in that codebase is months of work and people want to see results before then. Building it is a chore, and they have lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but they would be better delegating that out to others. They are now doing CUDA optimizations they could get from elsewhere. They also have 3 Python wrappers and several other wrappers as well; many groups spend more time working on wrappers than the underlying code. Using the wrappers is fine if you only want to call the software, but if you want to improve OpenCV then the programming environment instantly becomes radically different and more complicated.
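The point about standard math classes can be made concrete. As a minimal sketch (assuming NumPy as the stand-in for “standard classes that do math”), a basic vision operation such as edge detection needs nothing more than a general-purpose array type; the tiny image and Sobel kernel below are illustrative.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution written against a general-purpose array type."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Elementwise multiply the window by the kernel and sum.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 4x4 synthetic image with a vertical edge, and a Sobel edge-detection kernel.
image = np.array([[0, 0, 10, 10]] * 4, dtype=float)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(image, sobel_x)  # every response is 40: a strong edge
```

No bespoke matrix class, build system, or wrapper layer is required; anyone who already knows the array type can read, run, and extend this.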

There is a team working on Strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly as they do not have a constant stream of demos. They don’t consider that their codebase is a small set of world-changing ideas buried in engineering baggage like the STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community. Some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV’s core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot OS), according to its documentation, is a re-implementation of RPC in C++, which is not what robotics was missing. I’ve emailed various groups, and all know of GC, but they are afraid of any decrease in performance, and they do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive, but we managed.

C/C++ makes it harder to share code amongst disparate scientists than a GC language. It doesn’t matter if there are lots of XML parsers or RSS readers, but it does matter if we don’t have an official computer vision codebase. This is not against any codebase or language, only for free software lingua franca(s) in certain places to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still gives us an uncountably infinite number of things to disagree over.

Because the kernel is written in C, you’ve strongly influenced the rest of the community. C is fully acceptable for a mature kernel like Linux, but many concepts aren’t so clear in user mode. What is the UI of OpenOffice where speech input is the primary means of control? Many scientists don’t understand the difference between the stack and the heap. Software isn’t buildable if those with the necessary expertise can’t use the tools they are given.

C is a flawed language for user mode because it is missing GC, invented a decade earlier, and C++ added as much as it took away, as each feature came with an added cost of complexity. C++ compilers converting to C was a good idea, but being a superset was not. C/C++ never died in user mode because there are now so many GC replacements that the choice paralyzes many into inaction, as there seems to be no clear place to go. Microsoft doesn’t have this confusion, as their language, as of 2001, is C#. Microsoft is steadily moving to C#, but it is 10x easier to port a codebase like MySQL than SQL Server, which has an operating system inside. C# is taking over at the edges first, where innovation happens anyway. There is a competitive aspect to this.

Lots of free software technologies have multiple C/C++ implementations, because it is often easier to re-write than share, and an implementation in each GC language. We all might not agree on the solution, so let’s start by agreeing on the problem. A good example for GC is how a Mac port can go from weeks to hours. GC also prevents code from using memory after freeing it, freeing it twice, etc., so user code is less likely to corrupt memory. If everyone in user mode were still writing in assembly language, you would obviously be concerned. If Git had been built in 98% Python and 2% C, it would have become easier to use faster, found ways to speed up Python, and set a good example. It doesn’t matter now, but it was an opportunity in 2005.
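As a small illustration of that point (a sketch in Python, one of the GC languages under discussion): an object cannot be used after it is freed, because it is never freed while any reference remains, and there is no free() to call twice.

```python
import gc

class Buffer:
    def __init__(self, data):
        self.data = data

a = Buffer([1, 2, 3])
b = a         # a second reference to the same object
del a         # in C this might be free(a); here it only drops one reference
gc.collect()  # even an explicit collection cannot reclaim a live object

# No use-after-free or double-free is possible: b keeps the buffer alive.
assert b.data == [1, 2, 3]
```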

You can “leak” memory in GC, but that just means that you are still holding a reference. GC requires the system to have a fuller understanding of the code, which enables features like reflection. It is helpful to consider that GC is a step up for programming like C was to assembly language. In Lisp the binary was the source code — Lisp is free by default. The Baby Boomer generation didn’t bring the tradition of science to computers, and the biggest legacy of this generation is whether we remember it. Boomers gave us proprietary software, C, C++, Java, and the bankrupt welfare state. Lisp and GC were created / discovered by John McCarthy, a mathematician of the WW II greatest generation. He wrote that computers of 1974 were fast enough to do Strong AI. There were plenty of people working on it back then, but not in a group big enough to achieve critical mass. If they had, we’d know their names. If our scientists had been working together in free software and Lisp in 1959, the technology we would have developed by today would seem magical to us. The good news is that we have more scientists than we need.
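A quick sketch of the reflection point, using Python’s standard inspect module: because a managed runtime retains a full description of the code, a program can examine its own functions at run time (the greet function here is a made-up example).

```python
import inspect

def greet(name, punctuation="!"):
    """Return a greeting."""
    return "Hello, " + name + punctuation

# The managed runtime can answer questions about the code itself:
sig = inspect.signature(greet)   # (name, punctuation='!')
params = list(sig.parameters)    # ['name', 'punctuation']
doc = inspect.getdoc(greet)      # 'Return a greeting.'
```

A C program has no comparable standard way to ask what parameters one of its own functions takes; that information is discarded at compile time.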

There are a number of good languages, and it doesn’t matter too much what one is chosen, but it seems the Python family (Cython / PyPy) require the least amount of work to get what we need as it has the most extensive libraries: http://scipy.org/Topical_Software. I don’t argue the Python language and implementation is perfect, only good enough, like how the shape of the letters of the English language are good enough. Choosing and agreeing on a lingua franca will increase the results for the same amount of effort. No one has to understand the big picture, they just have to do their work in a place where knowledge can easily accumulate. A GC lingua franca isn’t a silver bullet, but it is the bottom piece of a solid science foundation and a powerful form of social engineering.

The most important thing is to get lingua franca(s) in key fields like computer vision and Strong AI. However, we should also consider a lingua franca for the Linux desktop. This will help, but not solve, the situation of the mass of Linux apps feeling dis-integrated. The Linux desktop is a lot harder because code here is 100x bigger than computer vision, and there is a lot of C/C++ in FOSS user mode today. In fact it seems hopeless to me, and I’m an optimist. It doesn’t matter; every team can move at a different pace. Many groups might not be able to finish a port for 5 years, but agreeing on a goal is more than half of the battle. The little groups can adopt it most quickly.

There are a lot of lurkers around codebases who want to contribute but don’t want to spend months getting up to speed on countless tedious things like learning a new error handling scheme. They would be happy to jump into a port as a way to get into a codebase. Unfortunately, many groups don’t encourage these efforts as they feel so busy. Many think today’s hardware is too slow, and that running any slower would doom the effort; they are impervious to the hardware doublings and forget that algorithm performance matters most. A GC system may add a one-time cost of 5–20%, but it has the potential to be faster, and it gives people more time to work on performance. There are also real-time, incremental, and NUMA-aware collectors. The ultimate in performance is taking advantage of parallelism in specialized hardware like GPUs, and a GC language can handle that because it supports arbitrary bitfields.
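As one concrete data point (a sketch using CPython’s standard gc module; the workload below is illustrative), a generational collector exposes knobs that let you pay the collection cost when you choose, rather than in the middle of a hot loop:

```python
import gc

# CPython's collector is generational: three thresholds control how often
# each generation is scanned.
thresholds = gc.get_threshold()

# Pay the GC cost on your own schedule instead of mid-loop:
gc.disable()
data = [[i] * 10 for i in range(1000)]  # allocation-heavy work
gc.enable()
collected = gc.collect()                # explicit, one-time full collection
```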

Science moves at demographic speed when knowledge is not being reused among the existing scientists. A lingua franca makes more sense as more adopt it. That is why I send this message to the main address of the free software mothership. The kernel provides code and leadership; you have influence and the responsibility to lead the rest, who are like wandering ants. If I were Linus, I would threaten to quit Linux and get people going on AI ;-) There are many things you could do. I mostly want to bring this to your attention. Thank you for reading this.

I am posting a copy of this open letter on my blog as well (http://keithcu.com/wordpress/?p=1691). Reading the LKML for more than one week could be classified as torture under the Geneva conventions.

Human Biological Immortality in 50 years

I believe that death due to ageing is not an absolute necessity of human nature. From the evolutionary point of view, we age because nature withholds energy for somatic (bodily) repairs and diverts it to the germ-cells (in order to assure the survival and evolution of the DNA). This is necessary so that the DNA is able to develop and achieve higher complexity.

Although this was a valid scenario until recently, we have now evolved to such a degree that we can use our intellect to achieve further cognitive complexity by manipulating our environment. This makes it unnecessary for the DNA to evolve along the path of natural selection (which is a slow and cumbersome, ‘hit-and-miss’ process), and allows us to develop quickly and more efficiently by using our brain as a means for achieving higher complexity. As a consequence, death through ageing becomes an illogical and unnecessary process. Humans must live much longer than the current lifespan of 80–120 years, in order for a more efficient global evolutionary development to take place.

It is possible to estimate how long the above process will take to mature (see figure below). Consider that the creation of the DNA was approximately 2 billion years ago, the formation of a neuron (cell) several million years ago, that of an effective brain (Homo sapiens sapiens) 200,000 years ago, and the establishment of complex societies (Ancient Greece, Rome, China etc.) thousands of years ago. There is a logarithmic reduction of the time necessary to proceed to the next, more complex step (a reduction by a factor of 100). This means that global integration (and thus indefinite lifespans) will be achieved in a matter of decades (and certainly in less than a century), starting from the 1960s–1970s (when globalisation in communications, travel and science/technology started to become established). This leaves a maximum of another 50 years before full global integration becomes established.

Each step is associated with a higher level of complexity, and takes a fraction of the time to mature, compared to the previous one.

1. DNA (organic life — molecules: billions of years)

2. Neuron (effective cells: millions of years)

3. Brain (complex organisms — Homo sapiens: thousands of years)

4. Society (formation of effective societies: several centuries)

5. Global Integration (formation of a ‘super-thinking entity’: several decades)
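The factor-of-100 compression claimed above can be checked with back-of-the-envelope arithmetic; the durations below are illustrative round figures consistent with the essay’s orders of magnitude, not measured values.

```python
# Approximate duration of each step, in years (illustrative round figures).
durations = [
    2_000_000_000,  # 1. DNA
    20_000_000,     # 2. Neuron
    200_000,        # 3. Brain
    2_000,          # 4. Society
    20,             # 5. Global Integration
]

# Each step takes roughly 1/100th of the time of the previous one.
ratios = [durations[i] / durations[i + 1] for i in range(len(durations) - 1)]
```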

Step number 5 implies that humans who have already developed an advanced state of cognitive complexity and sophistication will transcend the limits of evolution by natural selection, and therefore, by default, must not die through ageing. Their continued life is a necessary requirement of this new type of evolution.

For full details see:

https://acrobat.com/#d=MAgyT1rkdwono-lQL6thBQ

My Reaction to The Observer’s 20 predictions for the next 25 years

The UK’s Observer just put out a set of predictions for the next 25 years (20 predictions for the next 25 years). I will react to each of them individually. More generally, however, these are the kinds of ideas that get headlines, but they don’t constitute good journalism. Scenario planning should be used in all predictive coverage. It is, to me, the most honest way to admit what we don’t know and to document the uncertainties of the future, and the best way to examine big issues through different lenses. Some of these predictions may well come to pass, but many will not. What this article fails to do is inform the reader about the ways the predictions may vary from the best guess, what the possible alternatives may be, and where they simply don’t know.

1. Geopolitics: ‘Rivals will take greater risks against the US’

This is a pretty non-predictive prediction. America’s rivals are already challenging its monetary policy, human rights stances, shipping channels and trade policies. The article states that the US will remain the world’s major power. It does not suggest that globalization could fracture the world so much that regional powers huddle against the US in various places, essentially creating stagnation and a new localism that causes us to reinvent all economies. It also does not foresee anyone acting on water rights, food, energy or nuclear proliferation. Any of those could set off major conflicts that completely disrupt our economic and political models, leading to major resets in assumptions about the future.

2. The UK economy: ‘The popular revolt against bankers will become impossible to resist’

British banks will not fall without taking much of the world financial systems with them. I like the idea of the reinvention of financial systems, though I think it is far too early to predict their shape. Banking is a major force that will evolve in emergent ways. For scenario planners, the uncertainty is about the fate of existing financial systems. Planners would do well to imagine multiple ways the institution of banking will reshape itself, not prematurely bet on any one outcome.

3. Global development: ‘A vaccine will rid the world of AIDS’

We can only hope so. Investment is high, but AIDS is not the major cause of death in the world. Other infectious and parasitic diseases still outstrip HIV/AIDS by a large margin, while cardiovascular diseases and cancer eclipse even those. So it is great to predict the end of one disease, but the prediction seems rather arbitrary. I think it would be more advantageous to rate various research programs against potential outcomes over the next 25 years and look at the impact of curing those diseases on different parts of the world. If we tackle, for instance, HIV/AIDS and malaria and diarrheal diseases, what would that do to increase the safety of people in Africa and Asia? What would the economic and political ramifications be? We also have to consider the cost of a cure and the cost of its distribution. Low-cost solutions that can easily be distributed will have higher impact than higher-cost solutions that limit access (as we have with current HIV/AIDS treatments). I think we will see multiple breakthroughs over the next 25 years, and we would do well to imagine the implications of sets of those, not focus on just one.

4. Energy: ‘Returning to a world that relies on muscle power is not an option’

For futurists, any suggestion that the world moves in reverse is anathema. For scenario planners, we know that great powers have devolved over the last 2,000 years, and there is no reason that some political, technological or environmental issue might not arise that would cause our global reality to reset itself in significant ways. I think it is naïve to say we won’t return to muscle power. In fact, the failure to meet global demand for energy and food may actually move us toward a more local view of energy and food production, one that is less automated and scalable. One of the reasons we have predictions like this is because we haven’t yet envisioned a language for sustainable economics that allows people to talk about the world outside of the bounds of industrial age, scale-level terms. It may well be our penchant for holding on to industrial age models that drives us to the brink. Rather than continuing to figure out how to scale up the world, perhaps we should be thinking about ways to slow it down, restructure it and create models that are sustainable over long periods of time. The green movement is just political window dressing for what is really a more fundamental need to seek sustainability in all aspects of life, and that starts with how we measure success.

5. Advertising: ‘All sorts of things will just be sold in plain packages’

This is just a sort of random prediction that doesn’t seem to mean anything if it happens. I’m not sure the state will control what is advertised, or whether people will care how their stuff is packaged. In 4, above, I outline more important issues that would cause us to rethink our overall consumer mentality. If that happens, we may well see a world where advertising is irrelevant—completely irrelevant. Let’s see how Madison Avenue plans for its demise (or its new role…) in a sustainable knowledge economy.

6. Neuroscience: ‘We’ll be able to plug information streams directly into the cortex’

This is already possible on a small scale. We have seen hardware interfaces with bugs and birds. The question is whether it will be a novelty, a major medical tool, or commonplace and accessible, or whether it will be seen as dangerous, shunned by citizen regulators worried about giving up their humanity, and banned by governments who can’t imagine governing the overly connected. Just because we can doesn’t mean we will or we should. I certainly think we may see a singularity over the next 25 years in hardware, where machines can match human computational power, but I think software will greatly lag hardware. We may be able to connect, but we will do so only at rudimentary levels. On the other hand, a new paradigm for software could evolve that would let machines match us thought for thought. I put that in the black swan category. I am on constant watch for a software genius who will make Gates and Zuckerberg look like quaint 18th-century industrialists. The next revolution in software will come from a few potential paths; here are two: removal of the barriers to entry that the software industry has created and a return to more accessible computing for the masses (where they develop applications, not just consume content), or a breakthrough in distributed, parallel processing that evolves the ability to match human pattern-recognition capabilities, even if the approach appears alien to its inventors. We will have a true artificial intelligence only when we no longer understand the engineering behind its abilities.

7. Physics: ‘Within a decade, we’ll know what dark matter is’

Maybe, but we may also find that dark matter, like the “ether”, is just a conceptual plug-in for an incomplete model of the universe. I guess saying that it is a conceptual plug-in for an incomplete model would be an explanation of what it is, so this is one of those predictions that can’t lose. Another perspective: dark matter matters, and not only will we understand what it is, but also what it means, and it will change our fundamental view of physics in a way that helps us look at matter and energy through a new lens, one that may help fuel a revolution in energy production and consumption.

8. Food: ‘Russia will become a global food superpower’

Really? Well, this presumes some commercial normality for Russia, along with maintaining its risk-taking propensity to remove the safeties from technology. If Russia becomes politically stable and economically safe (you can go there without fear for your personal or economic life), then perhaps. I think, however, that this prediction is too finite and pointed. We could well see the US, China (or other parts of Asia) or even a terraformed Africa become the major food supplier – biotechnology, perhaps – new forms of distributed farming, also possible. The answer may not be hub-and-spoke, but distributed. We may find our own center locally as the costs of moving food around the world outweigh the industrialization efficiency of its production. It may prove healthier and more efficient to forgo the abundant variety we have become accustomed to (in some parts of the world) and see food again as nutrition, and share the lessons of efficient local production with the increasingly water-starved world.

9. Nanotechnology: ‘Privacy will be a quaint obsession’

I don’t get the link between nanotechnology and privacy. It is mentioned once in the narrative, but not in an explanatory way. As a purely hardware technology, it will threaten health (nano-pollutants) and improve health (cellular-level, molecular-level repairs). The bigger issue with nanotechnology is its computational model. If nanotechnology includes the procreation and evolution of artificial things, then we are faced with the difficult challenge of trying to imagine how something will evolve that we have never seen before, and that has never existed in nature. The interplay between nature and nanotechnology will be fascinating and perhaps frightening. Our privacy may be challenged by culture and by software, but I seriously doubt that nanotechnology will be the key to decrypting our banking system (though it could play a role). Nanotechnology is more likely to be a black swan full of surprises that we can’t even begin to imagine today.

10. Gaming: ‘We’ll play games to solve problems’

This one is easy. Of course. We always have and we always will. Problem solutions are games to those who find passion in different problem sets. The difference between a game and a chore is perspective, not the task itself. For a mathematician, solving a quadratic equation is a game. For a literature major, that same equation may be seen as a chore. Taken to the next level, gaming may become a new way to engage with work. We often engineer fun out of work, and that is a shame. We should engineer work experiences to include fun as part of the experience (see my new book, Management by Design), and I don’t mean morale events. If you don’t enjoy your “work” then you will be dissatisfied no matter how much you are paid. Thinking about work as a game, as Ender (Ender’s Game, Orson Scott Card) did, changes the relationship between work and life. Ender, however, found out that when you get too far removed from reality, you may find moral compasses misaligned.

11. Web/internet: ‘Quantum computing is the future’

Quantum computing, like nanotechnology, will change fundamental rules, so it is hard to predict its outcomes. We will do better to closely monitor developments than to spend time overspeculating on outcomes that are probably unimaginable. It is better to accept that there are things in the future that are unimaginable now, and to practice how to deal with the unimaginable as an idea, than to frustrate ourselves by trying to predict those outcomes. Imagine wicked-fast computers; it doesn’t really matter whether they are quantum or not. Imagine machines that can decrypt anything really quickly using traditional methods, and that create new encryptions that they can’t solve themselves.

On a more mundane note in this article, the issues of net neutrality may play out so that those who pay more get more, though I suspect that will be uneven and change at the whim of politics. What I find curious is that this prediction says nothing about the alternative Internet (see my post Pirates Pine for Alternative Internet on Internet Evolution). I think we should also plan for very different information models and more data-centric interaction; in other words, we may find ourselves talking to data rather than servers in the future.

I’m not sure the next Internet will come from Waterloo, Ontario and its physicists, but from acts of random assertions by smart, tech-savvy idealists who want to take back our intellectual backbone from advertisers and cable companies.

One black swan this prediction fails to account for is the possibility of a loss of trust in the Internet altogether if it is hacked or otherwise challenged (by a virus, or made unstable by an attack on power grids or network routers). Cloud computing is based on trust. Microsoft and Google recently touted the uptime of their business offerings (Microsoft: BPOS Components Average 99.9-plus Percent Uptime). If some nefarious group takes that as a challenge (or sees the integrity of banking transactions as a challenge), we could see widespread distrust of the Net and the Cloud and a rapid return to closed, proprietary, non-homogeneous systems that confound hackers by their variety as much as they confound those who operate them.

12. Fashion: ‘Technology creates smarter clothes’

A model on the catwalk during the Gareth Pugh show at London Fashion Week in 2008. Photograph: Leon Neal/AFP/Getty Images

Smarter perhaps, but judging from the picture above, not necessarily fashion-forward. I think we will see technology integrated with what we wear, and I think smart materials will also redefine other aspects of our lives and create a new manufacturing industry, even in places where manufacturing has been displaced. In the US, for instance, smart materials will not require retrofitting legacy manufacturing facilities, but will require the creation of entirely new facilities that can be created with design and sustainability in mind from the onset. However, smart clothes, other uses of smart materials and personal technology integration all require a continued positive connection between people and technology. That connection looks positive, but we may be blind to technology push-backs, even rebellions, fostered in current events like the jobless recovery.

13. Nature: ‘We’ll redefine the wild’

I like this one and think it is inevitable, but I also think it is a rather easy prediction to make. It is less easy to see all the ways nature could be redefined. Professor Mace predicts managed protected areas and a continued loss of biodiversity. I think we are at a transition point, and 25 years isn’t enough time to see its conclusion. The rapid influx of “invasive” species among indigenous species creates not just displacement, but offers an opportunity for the re-creation of environments (read: evolution). We have to remember that historically the areas we are trying to protect were very different in the past than they are in our rather short collective memories. We are trying to protect a moment in history for human nostalgia. The changes in the environment presage other changes that may well take place after we have gone. Come to Earth 1,000 years from now and we may be hard pressed to find anything that is as we experience it today. The general landscape may appear the same at the highest level of fractal magnification, but zoom in and you will find the details will have shifted as much as the forests of Europe or the nesting grounds of the dodo have changed over the last 1,000 years.

14. Architecture: What constitutes a ‘city’ will change

I like this prediction because it runs the gamut from distribution of power to returning to caves. It actually represents the idea using scenario thinking. I will keep this brief because Rowan Moore gets it when he writes: “To be optimistic, the human genius for inventing social structures will mean that new forms of settlement we can’t quite imagine will begin to emerge.”

15. Sport: ‘Broadcasts will use holograms’

I guess in a sustainable knowledge economy we will still have sport. I hope we figure out how to monitor the progress of our favorite teams without the creation and collection of non-biodegradable artifacts like Styrofoam number one hands and collectable beverage containers.

As for sport itself, it will be an early adopter of any new broadcast technology. I’m not sure holograms in their traditional sense will be one, however. I’m guessing we figure out 3-D with a lot less technology than holograms require.

I challenge Mr. Lee’s statements on the acceptance of performance-enhancing drugs: “I don’t think we’ll see acceptance as the trend has been towards zero tolerance and long may it remain so.” I think it is just as likely that we start seeing performance enhancement as OK, given the wide proliferation of AD/HD drugs being prescribed, as well as those being used off label for mental enhancement—not to mention the accepted use of drugs by the military (see Troops need to remember, New Scientist, 09 December 2010). I think we may well see an asterisk in the record books a decade or so from now that says, “at this point we realized sport was entertainment, and allowed the use of drugs, prosthetics and other enhancements that increased performance and entertainment value.”

16. Transport: ‘There will be more automated cars’

Yes, if we still have cars, they will likely be more automated. And in a decade, we will likely still see cars, but we may be at the transition point for the adoption of a sustainable knowledge economy where cars start to look arcane. We will see continued tension between the old industrial sectors typified by automobile manufacturers and oil exploration and refining companies, and the technology and healthcare firms that see value and profits in more local ways of staying connected and ways to move that don’t involve internal combustion engines (or electric ones for that matter).

17. Health: ‘We’ll feel less healthy’

Maybe, as Mulgan points out, healthcare isn’t radical, but people can be radical. These uncertainties around health could come down to personal choice. We may find millions of excuses for not taking care of ourselves and then place the burden of our unhealthy lifestyles at the feet of the public sector, or we may figure out that we are part of the sustainable equation as well. The latter would transform healthcare. Some of the arguments above, about distribution and localism, may also challenge the monolithic hospitals to become more distributed, as we are seeing with the rise of community-based clinics in the US and Europe. Management of healthcare may remain centralized, but delivery may be more decentralized. Of course, if economies continue to teeter, the state will assert itself and keep everything close and in as few buildings as possible.

As for electronic records, it will be the value to the end user that drives adoption. As soon as patients believe they need an electronic healthcare record as much as they need a little blue pill, we will see the adoption of the healthcare record. Until then, let the professionals do whatever they need to do to service me—the less I know the better. In a sustainable knowledge economy though, I will run my own analytics and use the results to inform my choices and actions. Perhaps we need healthcare analytics companies to start advertising to consumers as much as pharmaceutical companies currently do.

18. Religion: ‘Secularists will flatter to deceive’

I think religion may well see traditions fall, new forms emerge and fundamentalists dig in their heels. Religion offers social benefits that will be augmented by social media—religion acts as a pervasive and public filter for certain beliefs and cultural norms in a way that other associations do not. Over the next 25 years many of the more progressive religious movements may tap into their social side and reinvent themselves around association of people rather than affiliation with tenets of faith. If, however, any of the dire scenarios come to pass, look for state-asserted use of religion to increase, and for a rising tide of fundamentalism as people try to hold on to what they can of the old way of doing things.

19. Theatre: ‘Cuts could force a new political fringe’

Theatre has always had an edge, and any new fringe movement is likely to find its manifestation in art, be it theatre, song, poetry or painting. I would have preferred that the idea of art be taken up as a prediction rather than theatre in isolation. If we continue to automate and displace workers, we will need to reassess our general abandonment of the arts as a way of making a living because creation will be the one thing that can’t be automated. We will need to find ways to pay people for human endeavors, everything from teaching to writing poetry. The fringe may turn out to be the way people stay engaged.

20. Storytelling: ‘Eventually there’ll be a Twitter classic’

Stories are already ubiquitous. We live in stories. Technology has changed our narrative form, not our longing for a narrative. The Twitter stream is a narrative channel. I would not, however, anticipate a “Twitter classic” because a classic suggests the idea of something lasting. For a “Twitter classic” to occur, the 140-character phrases would need to be extracted from their medium and held someplace beyond the context in which they were created, which would make Twitter just another version of the typewriter or word processor—either that or Twitter figures out a better mode for persistent retrieval of tweets with associated metadata—in other words, you could query the story out of the Twitter-verse, which is very technically possible (and may make for some collaborative branching as well). But in the end, Twitter is just a repository for writing, just one of many, which doesn’t make this prediction all that concept shattering.

This post is long enough, so I won’t start listing all of the areas the Guardian failed to tackle, or its internal lack of categorical consistency (e.g., Theatre and storytelling are two sides of the same idea). I hope these observations help you engage more deeply with these ideas and with the future more generally, but most importantly, I hope they help you think about navigating the next 25 years, not relying on prescience from people with no more insight than you and I. The trick with the future is to be nimble, not to be right.

Stories We Tell


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second-coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

It’s difficult to parse either eventuality with observant members of the other’s belief system. If you ask a group of technophiles what they think of the idea of the rapture you will likely be laughed at or drown in a tidal wave of atheist drool. The very thought of some magical force eviscerating an entire religious population in one eschatological fell swoop might be too much for some science and tech geeks, and medical attention, or at the very least a warehouse-quantity dose of smelling salts, might be in order.

Conversely, to the religiously observant, the notion of the singularity might, for them, exist in terms too technical to even theoretically digest, or represent something entirely dark or sinister that seems to fulfill their own belief system’s end game, a kind of techno-holocaust that reifies their purported faith.

The objective reality of both scenarios will be very different from either envisioned teleology. Reality’s shades of gray have a way of making foolish even the wisest individual’s predictions.

In my personal life, I too believed that the publication of my latest and most ambitious work, explaining the decidedly broad-scope Parent Star Theory, would also constitute an end result of significant consequence, much like the popular narrative surrounding the moment of the singularity: that some great finish line was reached. The truth, however, is that just like the singularity, my own narrative-ized moment was not a precisely secured end, but a distinct moment of beginning, of conception and commitment. Not an arrival but a departure; a bold embarkation without clear end in sight.

Rather than answers, the coming singularity should provoke additional questions. How do we proceed? Where do we go from here? If the fundamental rules in the calculus of the human equation are changing, then how must we adapt? If the next stage of humanity exists on a post-scarcity planet, what then will be our larger goals, our new quest as a global human force?

Humanity must recognize that the idea of a narrative is indeed useful, so long as that narrative maintains some aspect of open-endedness. We might well need that consequential beginning-middle-end, if only to be reminded that each end most often leads to a new beginning.

Written by Zachary Urbina, Founder, Cozy Dark

8D Problem Solving for Transhumanists

Transhumanists are into improvements, and many talk about specific problems, for instance Nick Bostrom. However, Bostrom’s problem statements have been criticized for not necessarily being problems, and I think largely this is why one must consider the problem definition (see step #2 below).

Sometimes people talk about their “solutions” for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but a lot of what you find will be undefined (or not very well defined) problem ideas and solutions.

These proposed solutions often do not attempt to find root causes or assume the wrong root cause. And finding a realistic complete plan for solving a problem is rare.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically, but also socioculturally. Some problems are more than just “a” problem—they are a dynamic system of problems, and the process for problem solving itself is not enough. There has to be management, goals, etc., most of which is outside the scope of this article. But first one should know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team, however even a group of transhumanists out in the wilds of cyberspace will have to defend themselves when people ask, “Why should we care?”

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s:

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally in the real world events are causal. The point of root cause analysis is to trace the causes all the way back for your problem. If you don’t find the origin of the causes, then the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must always keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Why’s, in which you move down the “ladder of abstraction” by continually asking: “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again until you’ve bottomed out with something that may be a true root cause.
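The 5 Whys walk can be sketched as a loop: keep asking “why?” of the most recent cause until nothing deeper comes back. This is a minimal sketch, not a real diagnostic tool; the causal chain below is hypothetical data based on the ground-robot example from step 2:

```python
def five_whys(problem, answer_why, depth=5):
    """Descend the ladder of abstraction by repeatedly asking 'why?'.

    `answer_why` maps a cause to its underlying cause, or returns None
    when we have bottomed out at a candidate root cause.
    """
    chain = [problem]
    for _ in range(depth):
        deeper = answer_why(chain[-1])
        if deeper is None:  # no deeper cause known: candidate root cause
            break
        chain.append(deeper)
    return chain

# Hypothetical causal chain for the robot-dropped-from-a-plane scenario
causes = {
    "robot broke on impact": "robot was dropped from a plane",
    "robot was dropped from a plane": "customer expected it to fly",
    "customer expected it to fly": "product purpose was miscommunicated",
}
print(five_whys("robot broke on impact", causes.get))
```

The last entry in the returned chain is where the team should focus its corrective action; in this toy case, the miscommunication, not the flight software.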

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa: Use an Ishikawa (aka fishbone, or cause-and-effect) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first.
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that solve the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate/test the solution, if possible. There might be unaccounted for side effects either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing so isn’t enough—the team also has to check whether the solution is really working.

For some issues the verification is clear-cut. Other corrective actions have to be evaluated for effectiveness, for instance against a benchmark. Depending on the time scale of the corrective action, the team might need to add various monitors and/or controls to continually make sure the root cause is squashed.

7. Prevent Recurrence

It’s possible that a process will revert back to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should provide the organization or environment with improvements to processes, procedures, practices, etc. so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.

Image credits:
1. Inception (2010), Warner Bros.
2. Peter Galvin
3. Tom Parnell
4. shalawesome

Stoic Philosophy and Human Immortality

The Stoic philosophical school shares several ideas with modern attempts at prolonging human lifespan. The Stoics believed in a non-dualistic, deterministic paradigm, where logic and reason formed part of their everyday life. The aim was to attain virtue, taken to mean human excellence.

I have recently described a model specifically referring to indefinite lifespans, where human biological immortality is a necessary and inevitable consequence of natural evolution (for details see www.elpistheory.info and for a comprehensive summary see http://cid-3d83391d98a0f83a.office.live.com/browse.aspx/Immo…=155370157).

This model is based on a deterministic, non-dualistic approach, described by the laws of Chaos theory (dynamical systems), and suggests that, in order to accelerate the natural transition from human evolution by natural selection to a post-Darwinian domain (where indefinite lifespans are the norm), it is necessary to lead a life of constant intellectual stimulation, innovation and avoidance of routine (see http://www.liebertonline.com/doi/abs/10.1089/rej.2005.8.96?journalCode=rej and http://www.liebertonline.com/doi/abs/10.1089/rej.2009.0996), i.e. to seek human virtue (excellence, brilliance, and wisdom, as opposed to mediocrity and routine). The search for intellectual excellence increases neural inputs which effect epigenetic changes that can up-regulate age repair mechanisms.

Thus it is possible to reconcile the Stoic ideas with the processes that lead to both technological and developmental Singularities, using approaches that are deeply embedded in human nature and transcend time.

What’s Your Dream for the Future of California?

California Dreams Video 1 from IFTF on Vimeo.

INSTITUTE FOR THE FUTURE ANNOUNCES CALIFORNIA DREAMS:
A CALL FOR ENTRIES ON IMAGINING LIFE IN CALIFORNIA IN 2020

Put yourself in the future and show us what a day in your life looks like. Will California keep growing, start conserving, reinvent itself, or collapse? How are you living in this new world? Anyone can enter, anyone can vote, anyone can change the future of California!

California has always been a frontier—a place of change and innovation, reinventing itself time and again. The question is, can California do it again? Today the state is facing some of its toughest challenges. Launching today, IFTF’s California Dreams is a competition with an urgent challenge to recruit citizen visions of the future of California—ideas for what it will be like to live in the state in the next decade—to start creating a new California dream.

California Dreams calls upon the public to look 3–10 years into the future and tell a story about a single day in their own life. Videos, graphical entries, and stories will be accepted until January 15, 2011. Up to five winners will be flown to Palo Alto, California in March to present their ideas and be connected to other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the $3,000 IFTF Roy Amara Prize for Participatory Foresight.

“We want to engage Californians in shaping their lives and communities” said Marina Gorbis, Executive Director of IFTF. “The California Dreams contest will outline the kinds of questions and dilemmas we need to be analyzing, and provoke people to ask deep questions.”

Entries may come from anyone anywhere and can include, but are not limited to, the following: Urban farming, online games replacing school, a fast food tax, smaller, sustainable housing, rise in immigrant entrepreneurs, mass migration out of state. Participants are challenged to use IFTF’s California Dreaming map as inspiration, and picture themselves in the next decade, whether it be a future of growth, constraint, transformation, or collapse.

The grand prize, called the Roy Amara Prize, is named for IFTF’s long-time president Roy Amara (1925−2000) and is part of a larger program of social impact projects at IFTF honoring his legacy, known as The Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Gina Bianchini, Entrepreneur in Residence, Andreessen Horowitz

Alexandra Carmichael, Research Affiliate, Institute for the Future, Co-Founder, CureTogether, Director, Quantified Self

Bill Cooper, The Urban Water Research Center, UC Irvine

Poppy Davis, Executive Director, EcoFarm

Jesse Dylan, Founder of FreeForm, Founder of Lybba

Marina Gorbis, Executive Director, Institute for the Future

David Hayes-Bautista, Professor of Medicine and Health Services, UCLA School of Public Health

Jessica Jackley, CEO, ProFounder

Xeni Jardin, Partner, Boing Boing, Executive Producer, Boing Boing Video

Jane McGonigal, Director of Game Research and Development, Institute for the Future

Rachel Pike, Clean Tech Analyst, Draper Fisher Jurvetson

Howard Rheingold, Visiting Professor, Stanford / Berkeley, and the Institute of Creative Technologies

Tiffany Shlain, Founder, The Webby Awards
Co-founder International Academy of Digital Arts and Sciences

Larry Smarr
Founding Director, California Institute for Telecommunications and Information Technology (Calit2), Professor, UC San Diego

DETAILS

WHAT: An online competition for visions of the future of California in the next 10 years, along one of four future paths: growth, constraint, transformation, or collapse. Anyone can enter, anyone can vote, anyone can change the future of California.

WHEN: Launch – October 26, 2010
Deadline for entries — January 15, 2011
Winners announced — February 23, 2011
Winners Celebration — 6 – 9 pm March 11, 2011 — open to the public

WHERE: http://californiadreams.org

For more information on the California Dreaming map or to download the pdf, click here.

The Singularity Hypothesis: A Scientific and Philosophical Assessment

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume shall be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

  • Extended abstracts (500–1,000 words): 15 January 2011
  • Full essays: (around 7,000 words): 30 September 2011
  • Notifications: end of February 2012 (tentative)
  • Proofs: 30 April 2012 (tentative)

We aim to get this volume published by the end of 2012.

Purpose of this volume

Central questions

Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7000 words) and focused, relating directly to specific central questions. Essays longer than 15 pages will be proportionally more difficult to fit into the volume. Essays that are three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language divorced from speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).

(More details)

Thank you for reading this call. Please forward it to individuals who may wish to contribute.

Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University

Open Letter to Ray Kurzweil

Dear Ray;

I’ve written a book about the future of software. While writing it, I came to the conclusion that your dates are way off. I talk mostly about free software and Linux, but it has implications for things like how we can have driverless cars and other amazing things faster. I believe that we could have had all the benefits of the singularity years ago if we had done things like start Wikipedia in 1991 instead of 2001. There is no technology in 2001 that we didn’t have in 1991; it was simply a matter of starting an effort that allowed people to work together.

Proprietary software and a lack of cooperation among our software scientists has been terrible for the computer industry and the world, and its greater use has implications for every aspect of science. Free software is better for the free market than proprietary software, and there are many opportunities for programmers to make money using and writing free software. I often use the analogy that law libraries are filled with millions of freely available documents, and no one claims this has decreased the motivation to become a lawyer. In fact, lawyers would say that it would be impossible to do their job without all of these resources.

My book is a full description of the issues but I’ve also written some posts on this blog, and this is probably the one most relevant for you to read: https://lifeboat.com/blog/2010/06/h-conference-and-faster-singularity

Once you understand this, you can apply your fame towards getting more people to use free software and Python. The reason so many know Linus Torvalds’s name is because he released his code as GPL, which is a license whose viral nature encourages people to work together. Proprietary software makes as much sense as a proprietary Wikipedia.

I would be happy to discuss any of this further.

Regards,

-Keith
—————–
Response from Ray Kurzweil 11/3/2010:

I agree with you that open source software is a vital part of our world allowing everyone to contribute. Ultimately software will provide everything we need when we can turn software entities into physical products with desktop nanofactories (there is already a vibrant 3D printer industry and the scale of key features is shrinking by a factor of a hundred in 3D volume each decade). It will also provide the keys to health and greatly extended longevity as we reprogram the outdated software of life. I believe we will achieve the original goals of communism (“from each according to their ability, to each according to their need”), which forced collectivism failed so miserably to achieve. We will do this through a combination of the open source movement and the law of accelerating returns (which states that the price-performance and capacity of all information technologies grows exponentially over time). But proprietary software has an important role to play as well. Why do you think it persists? If open source forms of information met all of our needs, why would people still purchase proprietary forms of information? There is open source music but people still download music from iTunes, and so on. Ultimately the economy will be dominated by forms of information that have value, and these two sources of information – open source and proprietary – will coexist.
———
Response back from Keith:
Free versus proprietary isn’t a question of whether only certain things have value. A Linux DVD contains ten billion dollars’ worth of software. Proprietary software exists for a similar reason that ignorance and starvation exist: a lack of better systems. The best thing my former employer Microsoft has going for it is ignorance about the benefits of free software. Free software gets better as more people use it. Proprietary software is an inferior development model and anathema to science, because it hinders people’s ability to work together. It has infected many corporations, and I’ve found that even PhDs who work for public institutions often write proprietary software.

Here is a paragraph from my writings:

I start the AI chapter of my book with the following question: imagine 1,000 people, broken into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly analogy in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs working on computer vision today, but they are spread across 200+ different codebases, plus countless proprietary ones. Simply put, no computer vision codebase has critical mass.

We’ve known approximately what a neural network should look like for many decades. We need “places” for people to work together to hash out the details. A free software repository provides such a place. We need free software, and for people to work in “official” free software repositories.

“Open source forms of information,” I have found, is a separate topic from the software issue. Software always reads, modifies, and writes data: state that lives beyond the execution of the software, and there can be an interesting discussion about the licenses of that data. But movies and music aren’t science, so licensing matters less for most of them. Someone can only sell or give away a song after the software to handle it is written and on their computer in the first place. Some of this content can be free and some can be protected, and that is an interesting question, but it is mostly a separate topic. The important things to share are scientific knowledge and software.

It is true that software always needs data to be useful: configuration parameters, test files, documentation, etc. A computer vision engine will have lots of data, even though most of it is used only for testing and little of it is used at runtime. (Perhaps it has learned the letters of the alphabet, state that it caches between executions.) Software begets data, and data begets software; people write code to analyze the Wikipedia corpus. But you can’t truly have a discussion about sharing information unless you’ve got a shared codebase in the first place.

I agree that proprietary software is, and should be, allowed in a free market. If someone wants to sell something useful that another person finds value in and wants to pay for, I have no problem with that. But free software is a better development model, and we should be encouraging, even demanding, it. I’ll end with a quote from Linus Torvalds:

Science may take a few hundred years to figure out how the world works, but it does actually get there, exactly because people can build on each other’s knowledge, and it evolves over time. In contrast, witchcraft/alchemy may be about smart people, but the knowledge body never “accumulates” anywhere. It might be passed down to an apprentice, but the hiding of information basically means that it can never really become any better than what a single person/company can understand.
And that’s exactly the same issue with open source (free) vs proprietary products. The proprietary people can design something that is smart, but it eventually becomes too complicated for a single entity (even a large company) to really understand and drive, and the company politics and the goals of that company will always limit it.

The world is screwed because while we have things like Wikipedia and Linux, we don’t have places for computer vision and lots of other scientific knowledge to accumulate. To get driverless cars, we don’t need more hardware and we don’t need more programmers; we just need 100 scientists working together in SciPy under the GPL, ASAP!

Regards,

-Keith