
FutureICT have submitted their proposal to the FET Flagship Programme, an initiative that aims to facilitate breakthroughs in information technology. The vision of FutureICT is to

integrate the fields of information and communication technologies (ICT), social sciences and complexity science, to develop a new kind of participatory science and technology that will help us to understand, explore and manage the complex, global, socially interactive systems that make up our world today, while at the same time paving the way for a new paradigm of ICT systems that will leverage socio-inspired self-organisation, self-regulation, and collective awareness.

The project could provide us with profound insights into societal behaviour and improve policymaking. It echoes the Large Hadron Collider at CERN in its scope and vision, only here we are trying to understand the state of the world. The FutureICT project combines the creation of a ‘Planetary Nervous System’ (PNS), where Big Data will be collated and organised, with a ‘Living Earth Simulator’ (LES) and a ‘Global Participatory Platform’ (GPP). The LES will run simulations on those data and provide models for analysis, while the GPP will open the data, models and methods to everyone. People will be able to collaborate and research in a very different way. The availability of Big Data to participants will strengthen our ability to understand complex socio-economic systems and could help build a new dialogue between nations on how we solve complex global societal challenges.

FutureICT aim to develop a ‘Global Systems Science’, which will

lay the theoretical foundations for these platforms, while the focus on socio-inspired ICT will use the insights gained to identify suitable designs for socially interactive systems and the use of mechanisms that have proven effective in society as operational principles for ICT systems.

It is exciting to think about the breakthroughs that could follow. What new insights and scientific discoveries could be made? What new technologies could emerge? The Innovation Accelerator (IA) is one feature of the venture that could prove disruptive to both technology and politics. Next year will open up a new world of possibilities, and this could be a project for the Lifeboat Foundation to get involved in.


…here’s Tom with the Weather.
That right there is comedian/philosopher Bill Hicks, sadly no longer with us. One imagines he would be pleased and completely unsurprised to learn that serious scientific minds are considering and actually finding support for the theory that our reality could be a kind of simulation. That means, for example, a string of daisy-chained IBM Super-Deep-Blue Gene Quantum Watson computers from 2042 could be running a History of the Universe program, and depending on your solipsistic preferences, either you are or we are the character(s).

It’s been in the news a lot of late, but — no way, right?

Because dude, I’m totally real
Despite being utterly unable to even begin thinking about how to consider what real even means, the everyday average rational person would probably assign this to the sovereign realm of unemployable philosophy majors or under the Whatever, Who Cares? or Oh, That’s Interesting I Gotta Go Now! categories. Okay fine, but on the other side of the intellectual coin, vis-à-vis recent technological advancement, of late it’s actually being seriously considered by serious people using big words they’ve learned at endless college whilst collecting letters after their names and doin’ research and writin’ and gettin’ association memberships and such.

So… why now?

Well, basically, it’s getting hard to ignore.
It’s not a new topic, it’s been hammered by philosophy and religion since like, thought happened. But now it’s getting some actual real science to stir things up. And it’s complicated, occasionally obtuse stuff — theories are spread out across various disciplines, and no one’s really keeping a decent flowchart.

So, what follows is an effort to encapsulate these ideas, and that’s daunting — it’s incredibly difficult to focus on writing when you’re wondering if you really have fingers or eyes. Along with links to some articles with links to some papers, what follows is Anthrobotic’s CliffsNotes on the intersection of physics, computer science, probability, and evidence for/against reality being real (and how that all brings us back to well, God).
You know, light fare.

First — Maybe we know how the universe works: Fantastically simplified, as our understanding deepens, it appears more and more the case that, in a manner of speaking, the universe sort of “computes” itself based on the principles of quantum mechanics. Right now, humanity’s fastest and sexiest supercomputers can simulate only extremely tiny fractions of the natural universe as we understand it (contrasted to the macro-scale inferential Bolshoi Simulation). But of course we all know the brute power of our computational technology is increasing dramatically like every few seconds, and even awesomer, we are learning how to build quantum computers, machines that calculate based on the underlying principles of existence in our universe — this could thrust the game into superdrive. So, given ever-accelerating computing power, and given that we can already simulate tiny fractions of the universe, you logically have to consider the possibility: If the universe works in a way we can exactly simulate, and we give it a shot, then relatively speaking what we make ceases to be a simulation, i.e., we’ve effectively created a new reality, a new universe (ummm… God?). So, the question is how do we know that we haven’t already done that? Or, otherwise stated: what if our eventual ability to create perfect reality simulations with computers is itself a simulation being created by a computer? Well, we can’t answer this — we can’t know. Unless…
[New Scientist’s Special Reality Issue]
[D-Wave’s Quantum Computer]
[Possible Large-scale Quantum Computing]

Second — Maybe we see it working: The universe seems to be metaphorically “pixelated.” This means that even though it’s a 50 billion trillion gajillion megapixel JPEG, if we juice the zooming-in and drill down farther and farther and farther, we’ll eventually see a bunch of discrete chunks of matter, or quantums, as the kids call them — these are the so-called pixels of the universe. Additionally, a team of lab coats at the University of Bonn think they might have a workable theory describing the underlying lattice, or existential re-bar in the foundation of observable reality (upon which the “pixels” would be arranged). All this implies, in a way, that the universe is both designed and finite (uh-oh, getting closer to the God issue). Even at ferociously complex levels, something finite can be measured and calculated and can, with sufficiently hardcore computers, be simulated very, very well. This guy Rich Terrile, a pretty serious NASA scientist, cites the pixelation thingy and poses a video game analogy: think of any first-person shooter — you cannot immerse your perspective into the entirety of the game, you can only interact with what is in your bubble of perception, and everywhere you go there is an underlying structure to the environment. Kinda sounds like, you know, life — right? So, what if the human brain is really just the greatest virtual reality engine ever conceived, and your character, your life, is merely a program wandering around a massively open game map, playing… well, you?
[Lattice Theory from the U of Bonn]
[NASA guy Rich Terrile at Vice]
[Kurzweil AI’s Technical Take on Terrile]

Thirdly — Turns out there’s a reasonable likelihood: While the above discussions on the physical properties of matter and our ability to one day copy & paste the universe are intriguing, it also turns out there’s a much simpler and straightforward issue to consider: there’s this annoyingly simplistic yet valid thought exercise posited by Swedish philosopher/economist/futurist Nick Bostrom, a dude way smarter than most humans. Basically he says we’ve got three options: 1. Civilizations destroy themselves before reaching a level of technological prowess necessary to simulate the universe; 2. Advanced civilizations couldn’t give two shits about simulating our primitive minds; or 3. Reality is a simulation. Sure, a decent probability, but sounds way oversimplified, right?
Well go read it. Doing so might ruin your day, JSYK.
[Summary of Bostrom’s Simulation Hypothesis]

Lastly — Data against is lacking: Any idea how much evidence or objective justification we have for the standard, accepted-without-question notion that reality is like, you know… real, or whatever? None. Zero. Of course the absence of evidence proves nothing, but given that we do have decent theories on how/why simulation theory is feasible, it follows that blithely accepting that reality is not a simulation is an intrinsically more radical position. Why would a thinking being think that? Just because they know it’s true? Believing 100% without question that you are a verifiably physical, corporeal, technology-wielding carbon-based organic primate is a massive leap of completely unjustified faith.
Oh, Jesus. So to speak.

If we really consider simulation theory, we must of course ask: who built the first one? And was it even an original? Is it really just turtles all the way down, Professor Hawking?

Okay, okay — that means it’s God time now
Now let’s see, what’s that other thing in human life that, based on a wild leap of faith, gets an equally monumental evidentiary pass? Well, proving or disproving the existence of god is effectively the same quandary posed by simulation theory, but with one caveat: we actually do have some decent scientific observations and theories and probabilities supporting simulation theory. That whole God phenomenon is pretty much hearsay, anecdotal at best. However, very interestingly, rather than negating it, simulation theory actually represents a kind of back-door validation of creationism. Here’s the simple logic:

If humans can simulate a universe, humans are its creator.
Accept the fact that linear time is a construct.
The process repeats infinitely.
We’ll build the next one.
The loop is closed.

God is us.

Heretical speculation on iteration
Ever wonder why older polytheistic religions involved gods who just kinda set guidelines for behavior and didn’t necessarily demand the love and complete & total devotion of humans? Maybe those universes were 1st-gen or beta products. You know, just as it used to take a team of geeks to run the building-sized ENIAC, the first universe simulations required a whole host of creators who could make some general rules but just couldn’t manage every single little detail.

Now, the newer religions tend to be monotheistic, and god wants you to love him and only him and no one else and dedicate your life to him. But just make sure to follow his rules, and take comfort that you’re right and everyone else is completely hosed and going to hell. The modern versions of god, both omnipotent and omniscient, seem more like super-lonely cosmically powerful cat ladies who will delete your ass if you don’t behave yourself and love them in just the right way. So, the newer universes are probably run as a background app on the iPhone 26, and managed by… individuals. Perhaps individuals of questionable character.

The home game:
Latest title for the 2042 XBOX-Watson³ Quantum PlayStation Cube:*
Crappy 1993 graphic design simulation: 100% Effective!

*Manufacturer assumes no responsibility for inherently emergent anomalies, useless
inventions by game characters, or evolutionary cul de sacs including but not limited to:
The duck-billed platypus, hippies, meat in a can, reality TV, the TSA,
mayonnaise, Sony VAIO products, natto, fundamentalist religious idiots,
people who don’t like homos, singers under 21, hangovers, coffee made
from cat shit, passionfruit iced tea, and the pacific garbage patch.

And hey, if true, it’s not exactly bad news
All these ideas are merely hypotheses, and for most humans the practical or theoretical proof or disproof would probably result in the same indifferent shrug. For those of us who like to rub a few brain cells together from time to time, attempting both to understand the fundamental nature of our reality/simulation and to guess at whether or not we too might someday be capable of simulating ourselves, well — these are some goddamn profound ideas.

So, no need for hand wringing — let’s get on with our character arc and/or real lives. While simulation theory definitely causes reflexive revulsion, “just a simulation” isn’t necessarily pejorative. Sure, if we take a look at the current state of our own computer simulations and A.I. constructs, the comparison is rather insulting. But if we truly are living in a simulation, you gotta give it up to the creator(s), because it’s a goddamn amazing piece of technological achievement.

Addendum: if this still isn’t sinking in, the brilliant Dinosaur Comics might do a better job explaining:

(This post originally published I think like two days ago at technosnark hub www.anthrobotic.com.)

I cannot let the day pass without commenting on the incredible ruling of multiple manslaughter against six top Italian geophysicists for not predicting an earthquake that left 309 people dead in 2009. When those who are entrusted with safeguarding humanity (here on a local level) are subjected to persecution when they fail to do so, despite acting to the best of their abilities in an inexact science, we have surely returned to the dark ages, where those who practice science are demonized by those who misunderstand it.

http://www.aljazeera.com/news/europe/2012/10/20121022151851442575.html

I hope I do not misrepresent other members of staff here at the Lifeboat Foundation in speaking on its behalf to wish these scientists a successful appeal against a court ruling that has shocked the scientific community, and I stand behind the 5,000 members of that community who sent an open letter to Italy’s President Giorgio Napolitano denouncing the trial. This court ruling was ape-mentality at its worst.

On January 28 2011, three days into the fierce protests that would eventually oust the Egyptian president Hosni Mubarak, a Twitter user called Farrah posted a link to a picture that supposedly showed an armed man as he ran on a “rooftop during clashes between police and protesters in Suez”. I say supposedly, because both the tweet and the picture it linked to no longer exist. Instead they have been replaced with error messages that claim the message – and its contents – “doesn’t exist”.

Few things are more explicitly ephemeral than a Tweet. Yet it’s precisely this kind of ephemeral communication – a comment, a status update, sharing or disseminating a piece of media – that lies at the heart of much of modern history as it unfolds. It’s also a vital contemporary historical record that, unless we’re careful, we risk losing almost before we’ve been able to gauge its importance.

Consider a study published this September by Hany SalahEldeen and Michael L Nelson, two computer scientists at Old Dominion University. Snappily titled “Losing My Revolution: How Many Resources Shared on Social Media Have Been Lost?”, the paper took six seminal news events from the last few years – the H1N1 virus outbreak, Michael Jackson’s death, the Iranian elections and protests, Barack Obama’s Nobel Peace Prize, the Egyptian revolution, and the Syrian uprising – and established a representative sample of tweets from Twitter’s entire corpus discussing each event specifically.

It then analysed the resources being linked to by these tweets, and whether these resources were still accessible, had been preserved in a digital archive, or had ceased to exist. The findings were striking: one year after an event, on average, about 11% of the online content referenced by social media had been lost and just 20% archived. What’s equally striking, moreover, is the steady continuation of this trend over time. After two and a half years, 27% had been lost and 41% archived.
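For the technically minded, the kind of check the study describes is easy to sketch. The Python snippet below is not the authors’ pipeline, just a rough illustration of the idea: it marks a linked resource as live, archived, or apparently lost, using the Internet Archive’s public Wayback “availability” endpoint to look for snapshots. The sample URL is a placeholder.

# Rough sketch (not the study's actual pipeline): classify a tweet-linked
# resource as live, archived, or apparently lost. The sample URL below is a
# placeholder; the Wayback "availability" endpoint is archive.org's public API.
import requests

WAYBACK_API = "https://archive.org/wayback/available"

def classify_resource(url, timeout=10):
    """Return 'live', 'archived', or 'lost' for a single linked resource."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code < 400:
            return "live"
    except requests.RequestException:
        pass  # unreachable counts as not live
    try:
        wb = requests.get(WAYBACK_API, params={"url": url}, timeout=timeout).json()
        if wb.get("archived_snapshots"):  # empty dict means no snapshot found
            return "archived"
    except requests.RequestException:
        pass
    return "lost"

if __name__ == "__main__":
    sample = ["http://example.com/2011-suez-rooftop-photo"]  # placeholder URLs
    counts = {"live": 0, "archived": 0, "lost": 0}
    for url in sample:
        counts[classify_resource(url)] += 1
    for status, n in counts.items():
        print(f"{status}: {n}/{len(sample)}")

Run over a large sample of tweet-linked URLs, percentages like those above fall straight out of the counts.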

Continue reading “The decaying web and our disappearing history”

I have been meaning to read a book coming out soon called Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. It’s written by Harvard biologist George Church and science writer Ed Regis. Church is doing stunning work on a number of fronts, from creating synthetic microbes to sequencing human genomes, so I definitely am interested in what he has to say. I don’t know how many other people will be, so I have no idea how well the book will do. But in a tour de force of biochemical publishing, he has created 70 billion copies. Instead of paper and ink, or PDFs and pixels, he’s used DNA.

Much as PDFs are built on a digital system of 1s and 0s, DNA is a string of nucleotides, each of which can be one of four different types. Church and his colleagues turned his whole book, including illustrations, into a 5.27-megabit file, which they then translated into a sequence of DNA. They stored the DNA on a chip and then sequenced it to read the text. The book is broken up into little chunks of DNA, each of which carries a portion of the book itself as well as an address to indicate where it should go. They recovered the book with only 10 wrong bits out of 5.27 million. Using standard DNA-copying methods, they duplicated the DNA into 70 billion copies.
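To make the general idea concrete, here is a toy Python sketch of addressed-chunk encoding. It is emphatically not the scheme Church’s team used (their paper describes the real one); it just illustrates the two moves the paragraph mentions: mapping bits to bases, and tagging each DNA fragment with an address so the pieces can be reassembled in any order.

# Toy sketch of the general idea (not the exact encoding Church's team used):
# break a message into fixed-size chunks, prepend a binary address to each,
# and map every two bits to one DNA base. Decoding reverses the process and
# reassembles the chunks by address, so fragment order doesn't matter.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

ADDR_BITS = 16        # address width: up to 65,536 chunks
CHUNK_BYTES = 12      # payload per DNA fragment (arbitrary toy value)

def to_bits(data: bytes) -> str:
    return "".join(f"{byte:08b}" for byte in data)

def encode(data: bytes) -> list[str]:
    """Return a list of DNA fragments, each carrying an address plus payload."""
    fragments = []
    for addr, start in enumerate(range(0, len(data), CHUNK_BYTES)):
        bits = f"{addr:0{ADDR_BITS}b}" + to_bits(data[start:start + CHUNK_BYTES])
        fragments.append("".join(BASE_FOR_BITS[bits[i:i + 2]]
                                 for i in range(0, len(bits), 2)))
    return fragments

def decode(fragments: list[str]) -> bytes:
    """Reassemble the original bytes from (possibly shuffled) fragments."""
    chunks = {}
    for frag in fragments:
        bits = "".join(BITS_FOR_BASE[base] for base in frag)
        addr = int(bits[:ADDR_BITS], 2)
        payload = bits[ADDR_BITS:]
        chunks[addr] = bytes(int(payload[i:i + 8], 2)
                             for i in range(0, len(payload), 8))
    return b"".join(chunks[a] for a in sorted(chunks))

if __name__ == "__main__":
    text = b"Regenesis, chapter one..."   # stand-in for the 5.27-megabit book
    dna = encode(text)
    assert decode(dna) == text
    print(dna[0])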

Scientists have stored little pieces of information in DNA before, but Church’s book is about 1,000 times bigger. I doubt anyone would buy a DNA edition of Regenesis on Amazon, since they’d need some expensive equipment and a lot of time to translate it into a format our brains can comprehend. But the costs are crashing, and DNA is a far more stable medium than that hard drive on your desk that’s just waiting to die. In fact, Regenesis could endure for centuries in its genetic form. Perhaps librarians of the future will need to get a degree in biology…

Link to Church’s paper

Source

One question that fascinated me in the last two years is, can we ever use data to control systems? Could we go as far as, not only describe and quantify and mathematically formulate and perhaps predict the behavior of a system, but could you use this knowledge to be able to control a complex system, to control a social system, to control an economic system?

We always lived in a connected world, except we were not so much aware of it. We were aware of it down the line, that we’re not independent from our environment, that we’re not independent of the people around us. We are not independent of the many economic and other forces. But for decades we never perceived connectedness as being quantifiable, as being something that we can describe, that we can measure, that we have ways of quantifying the process. That has changed drastically in the last decade, at many, many different levels.

Continue reading “Thinking in Network Terms” and watch the hour-long video interview

Whether via spintronics or some quantum breakthrough, artificial intelligence and the bizarre idea of intellects far greater than ours will soon have to be faced.

http://www.sciencedaily.com/releases/2012/08/120819153743.htm

AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega mind coming into existence within the next few decades. I am actually not intentionally trying to write anything bizarre; it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

I spend most of my time thinking about software, and occasionally I come across issues that are relevant to futurists. I wrote my book about the future of software in OpenOffice and needed many of its features. It might not be the only writing, spreadsheet, diagramming, or presentation tool in your toolbox, but it is a worthy one. OpenDocument Format (ODF) is the best open standard for these sorts of scenarios, and LibreOffice is currently the premier tool for handling that format. I suspect many readers of Lifeboat have a variant installed but don’t know many of the details of what is going on.

The OpenOffice situation has been a mess for many years. Sun didn’t foster a community of developers around their work. In fact, they didn’t listen to the community when it told them what to do. So about 18 months ago, after Oracle purchased Sun and made the situation worse, the LibreOffice fork was created with most of the best outside developers. LibreOffice quickly became the version embraced by the Linux community as many of the outside developers were funded by the Linux distros themselves. After realizing their mess and watching LibreOffice take off within the free software community, Oracle decided to fire all their engineers (50) and hand the trademark and a copy of the code over to IBM / Apache.

Now it would be natural to imagine that this code should be handed over to LibreOffice, with all interested parties joining up with that effort. But that is not what is happening. There are employees out there whose job it is to help Linux, but they are actually hurting it. You can read more details in a Linux blog article I wrote here. I also post this message as a reminder of how working together efficiently is critical to faster progress on complicated things.

How hard is it to assess which risks to mitigate? It turns out to be pretty hard.

Let’s start with a model of risk so simplified as to be completely unrealistic, yet one that still retains a key feature. Suppose that we managed to translate every risk into some single normalized unit of “cost of expected harm”. Let us also suppose that we could bring together all of the payments that could be made to avoid risks. A mitigation policy given these simplifications looks pretty easy: just buy each of the “biggest for your dollar” risk reductions.

Not so fast.

The problem with this is that many risk mitigation measures are discrete. Either you buy the air filter or you don’t. Either your town filters its water a certain way or it doesn’t. Either we have the infrastructure to divert the asteroid or we don’t. When risk mitigation measures become discrete, allocating the costs becomes trickier. Given a budget of 80 “harms” to reduce and mitigations worth 50, 40, and 35, buying the 50 first leaves you unable to afford either of the others, so you avoid only 50 harms, whereas buying the 40 and the 35 together would have avoided 75. Going for the biggest first leaves 25 “harms” that you were willing to pay to avoid on the table.

Alright, so how hard can it be to sort this out? After all, just because grabbing the biggest item first isn’t always the best use of your budget doesn’t obviously mean the right allocation is hard to find. Unfortunately, this problem is also known as the “0−1 knapsack problem”, which computer scientists know to be NP-complete. This means there is no known algorithm that finds exact solutions in time polynomial in the size of the input; in general you end up searching through a good portion of the possible combinations, which takes an exponential amount of time.
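As a concrete sketch (my toy reading of the example above, where each mitigation costs as much as the harm it prevents), the exact answer for small instances comes from the standard dynamic-programming solution, shown here in Python alongside the greedy “buy the biggest first” policy. The DP is pseudo-polynomial, so it stops being practical as budgets and item counts grow, which is where the NP-completeness bites.

# Minimal sketch: the toy example above as a 0-1 knapsack (budget 80;
# mitigations worth 50, 40, and 35 "harms", each assumed to cost as much as
# the harm it prevents). Exact dynamic programming is trivial at this scale.
def best_mitigation(budget: int, costs: list[int], benefits: list[int]) -> int:
    """Maximum total harm avoidable within the budget (exact 0-1 knapsack DP)."""
    best = [0] * (budget + 1)                  # best[b] = max benefit using budget b
    for cost, benefit in zip(costs, benefits):
        for b in range(budget, cost - 1, -1):  # descend so each measure is bought once
            best[b] = max(best[b], best[b - cost] + benefit)
    return best[budget]

def greedy_biggest_first(budget: int, costs: list[int], benefits: list[int]) -> int:
    """The 'buy the biggest first' policy from the text, for comparison."""
    total = 0
    for cost, benefit in sorted(zip(costs, benefits), key=lambda cb: -cb[1]):
        if cost <= budget:
            budget -= cost
            total += benefit
    return total

if __name__ == "__main__":
    harms = [50, 40, 35]
    print("greedy :", greedy_biggest_first(80, harms, harms))  # -> 50
    print("optimal:", best_mitigation(80, harms, harms))       # -> 75

For the toy numbers it prints 50 for the greedy policy and 75 for the optimum, the 25-harm gap mentioned above.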

What does this tell us? First of all, it means that it isn’t appropriate to expect all individuals, organizations, or governments to make accurate comparative risk assessments for themselves, but neither should we discount the work that they have done. Accurate risk comparisons are hard won and many time-honed cautions are embedded in our insurance policies and laws.

However, as a result of this difficulty, we should expect certain short-cuts to be made, particularly cognitive short-cuts: sharp losses are felt more keenly, and have more clearly identifiable culprits, than slow shifts that erode our capacities. We should therefore expect our laws and insurance policies to be biased towards sudden unusual losses, such as car accidents and burglaries, as opposed to a gradual increase in surrounding pollutants or a gradual decrease in salary as a profession becomes obsolete. Rare events may likewise fail to be captured by these processes of legal and financial adaptation. We should also expect more attention to be paid to issues we have no “control” over, even if the activities we do control are actually more dangerous. We should therefore be particularly careful of extreme risks that move slowly and depend upon our own activities, as we are naturally biased to ignore them compared to flashier and more sudden events. For this reason, models, games, and simulations are very important tools for risk policy. For one thing, they make these shifts perceivable by compressing them; further, they can move longer-term events into the short-term view of our emotional responses. However, these tools are only as good as the information they include, so we also need design methodologies that aim to discover information broadly enough to help avoid these biases.

The discrete, “all or nothing” character of some mitigation measures has another implication: it tells us that we cannot make implicit assessments of how much individuals of different income levels value their lives from the amounts they are willing to pay to avoid risks. Suppose that we have some number of relatively rare risks, each having a prevention stage, in which the risk has not manifested in any way, and a treatment stage, in which it has started to manifest. Even if the expected value favors prevention over treatment in every case, if one cannot pay for all such prevention, then the best course in some cases is to pay for very few of them, leaving a pool of available resources to treat whatever does manifest, which we do not know ahead of time.

The implication for existential and other extreme risks is that we should be very careful to articulate clearly what the warning signs for each of them are, and when it is appropriate to shift from acts of prevention to acts of treatment. In particular, we should move promptly to mitigate the cases where the best available theories suggest there will be no further warning signs. With existential risks, the boundary between remaining flexible and needing to commit demands sharply different responses on either side, but with unknown tipping points the location of that boundary is fuzzy. A lack of knowledge admits no prevention and will always manifest, so only treatment is feasible, and acting promptly to build our theories is therefore vital.

We can draw another conclusion by expanding on how the model given at the beginning is unrealistic. There is no such thing as a completely normalized harm, as there are tradeoffs between irreconcilable criteria, the evaluation of which changes with experience, both across and within individuals. Even temporarily limiting an analysis to standard physical criteria (say, lives), rare events pose a problem for actuarial assessment, with few occurrences giving poor bounds on likelihood. Existential risks provide no direct frequencies, nor any opportunity for an update in Bayesian belief, so we are left with an inductive assessment of the risk’s potential pathways.
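As a back-of-the-envelope illustration of how poor those bounds are for ordinary rare events (my own addition, assuming independent periods and a uniform prior): after observing zero events in n periods, the Bayesian 95% upper bound on the per-period rate is still roughly 3/n, the classical “rule of three”.

# Small illustration of "few occurrences give poor bounds on likelihood":
# after zero events in n independent periods, a uniform Beta(1, 1) prior on the
# per-period probability gives a Beta(1, n + 1) posterior, whose 95% upper
# credible bound is 1 - 0.05**(1 / (n + 1)), roughly 3/n (the "rule of three").
def upper_bound_after_zero_events(n_periods: int, credibility: float = 0.95) -> float:
    """Upper credible bound on the event rate after n quiet periods."""
    return 1.0 - (1.0 - credibility) ** (1.0 / (n_periods + 1))

if __name__ == "__main__":
    for years in (10, 100, 1000):
        bound = upper_bound_after_zero_events(years)
        print(f"{years:>4} quiet years -> rate could still be up to {bound:.3%} per year")

A century of quiet history still leaves room for roughly a 3% annual rate, which is hardly reassuring when the stakes are catastrophic.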

However, there is also no single pool of mitigation resources. People will form and dissolve different pools of resources for different purposes as they are persuaded and dissuaded. Therefore, those who take it upon themselves to investigate the theory behind rare and one-pass harms, for whatever reason, provide a mitigation effort the rest of us might not rationally undertake for ourselves. It is my particular bias to think that information systems for aggregating these efforts and interrogating their findings, and methods for asking about further phenomena still, are worth the expenditure, and thus the loss in overall flexibility. This combination of our biases leads to a randomized strategy for investigating unknown risks.

In my view, the Lifeboat Foundation works from a similar strategy as an umbrella organization: one doesn’t yet have to agree that any particular risk, mitigation approach, or desired future is the one right thing to pursue, which of course can’t be known. It is merely the bet that pooling those pursuits will serve us. I have some hope that this pooling will lead to efforts that inductively combine the assessments of disparate risks and potential mitigation approaches.