
To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. It calls on us to extend the boundaries of our knowledge, to advocate new methods, techniques and research, and to sponsor change rather than the status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I discuss the second of three concepts that, if implemented, should speed up the rate of innovation and discovery so that we can achieve interstellar travel within a time frame of decades, not centuries. Okay, I must warn you that this will probably upset some physicists.

One of the findings of my 12-year study was that gravitational acceleration is independent of the internal structure of a particle; hence the elegantly simple formula for gravitational acceleration, g = τc². This raised the question: what is the internal structure of a particle? For ‘normal’ matter, the Standard Model suggests that protons and neutrons consist of quarks, or other mass-based particles. Electrons and photons are thought to be elementary.

I had a thought: a test for mass as the gravitational source. If ionized matter showed the same gravitational acceleration effects as non-ionized matter, then one could conclude that mass, not quark interaction, is the source of gravitational acceleration, because the different ionization states would have different electron mass but the same quark interactions. This would be a difficult test to do correctly, because electric field effects are much greater than gravitational effects.
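
To get a feel for the difficulty, here is a quick order-of-magnitude sketch (my illustration, not part of the original study) comparing the Coulomb and gravitational forces between two protons. The separation cancels out, so the ratio is independent of distance.

# Ratio of the electric to the gravitational force between two protons
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9      # Coulomb constant, N m^2 C^-2
e   = 1.602e-19    # elementary charge, C
m_p = 1.673e-27    # proton mass, kg

ratio = (k_e * e ** 2) / (G * m_p ** 2)
print(f"F_electric / F_gravity for two protons ~ {ratio:.2e}")   # about 1.2e36

With the electric interaction roughly 36 orders of magnitude stronger, even a minute residual charge imbalance would swamp the gravitational signal, which is why the ionization test is so hard to do cleanly.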

One could ask, what is the internal structure of a photon? The correct answer is that no one knows. Here is why. In electromagnetism, specifically in radio antennas, the energy inside a hollow antenna is zero. However, in quantum theory, specifically in the nanowire for light photons, the energy inside the nanowire increases towards the center of the nanowire. I’m not going to provide any references, as I am not criticizing any specific researcher. So which is it?

One could then ask, at what wavelength does this energy distribution change from zero (for radio waves) to increasing towards the center (for light photons)? Again, this is another example of the mathematics of physics providing correct answers while being inconsistent. So we don’t know.

To investigate further, I borrowed a proposal from two German physicists, I. V. Drozdov and A. A. Stahlhofen (“How long is a photon?”), who had suggested that a photon is about half a wavelength long. I thought, why stop there? What if it were an infinitely thin slice? Wait. What was that? An infinitely thin slice! That would be consistent with Einstein’s Special Theory of Relativity, which, through the Lorentz-Fitzgerald transformations, dictates that anything traveling at the velocity of light must have a thickness of zero. But then, if the photon is indeed an infinitely thin pulse, why do we observe a wave function that is inconsistent with the Special Theory of Relativity?

The only consistent answer I could come up with was that the wave function was the photon’s effect or the photon’s disturbance on spacetime, and not the photon itself.

Here is an analogy. Take a garden rake, turn it upside down, and place it under a carpet. Move it. What do you see? The carpet exhibits an envelope-like wave function that appears to be moving in the direction the garden rake is moving. But the envelope is not moving. It is a bulge that shows up wherever the garden rake is. The rake is moving, but not the envelope.

Similarly, the wave function is not moving; it spreads across the spacetime where the photon is. Now both are consistent with Einstein’s Special Theory of Relativity. Then why is the Standard Model successful? It is successful because, just as the bulge is unique to the shape of the garden rake, the wave function disturbances of spacetime produced by the photon and by other particles are unique to the properties of those respective particles.

In my book, this proposed consistency with Special Theory of Relativity points to the existence of subspace, and a means to achieve interstellar travel.

There are a lot of inconsistencies in our physical theories, and we need to start addressing these inconsistencies if we are to achieve interstellar travel sooner rather than later.

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. It calls on us to extend the boundaries of our knowledge, to advocate new methods, techniques and research, and to sponsor change rather than the status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I discuss three concepts that, if implemented, should speed up the rate of innovation and discovery so that we can achieve interstellar travel within a time frame of decades, not centuries.

Okay, what I’m going to say will upset some physicists, but I need to say it, because we need to resolve some issues in physics to distinguish between mathematical construction and mathematical conjecture. Once we are on the road to mathematical construction, there is hope that this will eventually lead to technological feasibility. This post draws on my paper “Gravitational Acceleration Without Mass And Noninertia Fields”, published in the peer-reviewed AIP journal Physics Essays, and on my book An Introduction to Gravity Modification.

The Universe is much more consistent than most of us (even physicists) suspect. Therefore, we can use this consistency to weed out mathematical conjecture from our collection of physical hypotheses. There are two sets of transformations that are observable. The first is the non-linear transformation Γ(a): in a gravitational field, at a point where the acceleration is a, compared to a location at an infinite distance from the gravitational source (denoted by the subscript 0), time dilation t_a/t_0, length contraction x_0/x_a, and mass increase m_a/m_0 behave in a consistent manner such that

t_a/t_0 = x_0/x_a = m_a/m_0 = Γ(a).     (1)

The second consistency is the Lorentz-Fitzgerald transformation Γ(v): at a velocity v, compared to rest (again denoted by the subscript 0), time dilation t_v/t_0, length contraction x_0/x_v, and mass increase m_v/m_0 behave in a consistent manner such that

t_v/t_0 = x_0/x_v = m_v/m_0 = Γ(v) = 1/√(1 − v²/c²).     (2)

Now here is the surprise. The Universe is so consistent that if we use the non-linear transformation, equation (1), to calculate the free-fall velocity (from infinity) to a certain height above a planet’s or star’s surface, and its corresponding time dilation, we find that it is exactly what the Lorentz-Fitzgerald transformation, equation (2), requires. There is a previously undiscovered second level of consistency!
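
Here is a minimal numerical sketch of that check (my illustration, not taken from the paper). It assumes the familiar Schwarzschild form for the gravitational time dilation factor, since the exact form of Γ(a) is not reproduced in this post, and uses the Sun at a distance of 1 AU as the example.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 1.989e30         # mass of the Sun, kg
r = 1.496e11         # 1 AU, m

# Free-fall velocity acquired falling from rest at infinity down to radius r
v = math.sqrt(2 * G * M / r)

gamma_v = 1 / math.sqrt(1 - (v / c) ** 2)               # Lorentz-Fitzgerald factor, eq. (2)
gamma_a = 1 / math.sqrt(1 - 2 * G * M / (r * c ** 2))   # gravitational dilation factor at r

print(gamma_v, gamma_a)   # the two factors agree to within floating-point rounding

Because v² = 2GM/r for free fall from infinity, the two expressions are algebraically identical, which is exactly the second level of consistency described above.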

You won’t find this discovery in any physics textbook. Not yet, anyway. I published it in my 2011 peer-reviewed AIP Physics Essays paper, “Gravitational Acceleration Without Mass And Noninertia Fields”.

Now let us think about this for a moment. What this says is that the Universe is so consistent that the linear velocity-time dilation relationship must be observable wherever velocity and time dilation are present, even in non-linear spacetime relationships where acceleration is present and is altering the velocity, and therefore the time dilation.

Or to put it differently, wherever Γ(a) is present, the space, time, velocity and acceleration relationships must allow for Γ(v) to be present in a correct and consistent manner. When I discovered this I said, wow! Why? Because we now have a means of differentiating hypothetical-theoretical gravitational fields, which are mathematical conjectures, from natural-theoretical gravitational fields, which are correct mathematical constructions.

That is, we can test the various quantum gravity and string hypotheses, and any of the tensor metrics! Einstein’s tensor metrics should be correct, but from a propulsion perspective there is something more interesting: the Alcubierre tensor metrics. Alcubierre was the first, using General Relativity, to propose the theoretical possibility of warp speed (note: the possibility, not how to engineer it). Alcubierre’s work is very sophisticated, but the concept is elegantly simple: one can wrap a spacecraft in gravitational-type deformed spacetime to get it to ‘fall’ in the direction of travel.
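
For reference, the line element Alcubierre proposed (in units where c = 1) has the standard form below; it is quoted here from the general relativity literature for context, not taken from Solomon’s own analysis. Here v_s(t) is the coordinate velocity of the warp bubble and f(r_s) is a shaping function equal to 1 inside the bubble and falling to 0 far outside it:

ds² = −dt² + [dx − v_s(t) f(r_s) dt]² + dy² + dz²

Inside the bubble, where f = 1, the metric reduces to flat spacetime in co-moving coordinates, which is why an occupant feels no acceleration and experiences no time dilation relative to distant observers.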

The concept suggests that both equations (1) and (2) are no longer valid, as the relative velocity between the outer edge of the spacetime wrap and an external observer is at c, the velocity of light, or greater (one needs to do the math to get the correct answer). Even at an acceleration of 1g, and assuming that this craft has eventually reached c, equations (1) and (2) are no longer consistent. Therefore, my inference is that the Alcubierre metric allows for zero time dilation within the wrap, but not velocities greater than the velocity of light. It is therefore also doubtful that Dr. Richard Obousy’s hypothesis, that it is possible to achieve velocities of 1E30c with a quantum string version of the Alcubierre warp drive, is correct.

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. It calls on us to extend the boundaries of our knowledge, to advocate new methods, techniques and research, and to sponsor change rather than the status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this set of posts I discuss three concepts. If implemented, these concepts have the potential to bring about major changes in our understanding of the physical Universe. But first, a detour.

In my earlier post I had suggested that both John Archibald Wheeler and Richard Feynman, giants of the physics community, could have asked different questions (what could we do differently?) regarding certain solutions to Maxwell’s equations, instead of asking if retrocausality could be a solution.

I worked 10 years for Texas Instruments in the 1980s and 1990s. Corporate in Dallas had given us the daunting task of raising our Assembly/Test yields from 83% to 95%, within 3 years, across 6,000 SKUs (products), with only about 20 engineers (maybe fewer), and no assistance from Dallas. Assembly/Test skills had moved offshore, so Dallas was not in a position to provide advice. I look back now and wonder how Dallas came up with the 95% number.

It was impossibly daunting because many of our product yields were in the 70-percent range. We had good engineers and managers. The question, therefore, was how do you do something seemingly impossible without changing your mix of people, equipment and technical skill sets?

Let me tell you the end first. We achieved 99% to 100% Assembly/Test yields across the board for 6,000 SKUs within 3 years. And this in a third-world nation not known for any remarkable scientific or engineering talent! I don’t have to tell you what other lessons we learned from this, as they should be obvious. So my telling Dr. David Neyland of DARPA’s TTO, “I’ll drop a zero”, at the first 100YSS conference in 2011, still holds.

How did we do it? For my part, I was responsible for Engineering Yield (IT) Systems, test operation cost modeling for Overhead Transfer Pricing, and tester capacity models to figure out how to increase test capacity. But the part that is relevant to this discussion was teamwork. We organized the company into teams and brought in consultants to teach us what teamwork was and how to arrive at and execute operational and business decisions as teams.

And one of the keys to teamwork was to allow anyone and everyone to speak up. To voice their opinions. To ask questions, no matter how strange or silly those questions appeared to be. To never put down another person because he or she had different views.

Everyone, from the managing director of the company down to the production operators, was organized into teams. Every team had to meet once a week. To ask those questions. To seek those answers. That was some experience, working with and in those teams. We found things we did not know or understand about our process. That, in turn, set off new and old teams to go and figure them out. We understood the value of a matrix-type organization.

As a people not known for any remarkable scientific and engineering talent, we did it! Did the impossible. I learned many invaluable lessons from my decade at Texas Instruments that I’ll never forget and will always be grateful for.

My Thanksgiving this year is that I am thankful I had the opportunity to work for Texas Instruments when I did.

So I ask, in the spirit of the Kline Directive, can we as a community of physicists and engineers come together, to explore what others have not, to seek what others will not, to change what others dare not, to make interstellar travel a reality within our lifetimes?

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. It calls on us to extend the boundaries of our knowledge, to advocate new methods, techniques and research, and to sponsor change rather than the status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I will explore Technological Feasibility. At the end of the day, that is the only thing that matters. If a hypothesis is not able to vindicate itself with empirical evidence, it will not become technologically feasible. If it is not technologically feasible, then it stands no chance of becoming commercially viable.

If we examine historical land, air and space speed records, we can construct an estimate of the velocities that future technologies can achieve, i.e. technology forecasting. See the table below for some of the speed records.

Year | Fastest velocity record | Craft | Velocity (km/h) | Velocity (m/s)
2006 | Escape Earth | New Horizons | 57,600 | 16,000
1976 | Capt. Eldon W. Joersz and Maj. George T. Morgan | Lockheed SR-71 Blackbird | 3,530 | 980
1927 | Car land speed record (not jet engine) | Mystry | 328 | 91
1920 | Joseph Sadi-Lecointe | Nieuport-Delage NiD 29 | 275 | 76
1913 | Maurice Prévost | Deperdussin Monocoque | 180 | 50
1903 | Wilbur Wright at Kitty Hawk | Wright Aircraft | 11 | 3

A quick and dirty model derived from the data shows that we could reach the velocity of light, c, by about 2151. See the table below.

Year | Velocity (m/s) | % of c
2200 | 8,419,759,324 | 2808.5%
2152 | 314,296,410 | 104.8%
2150 | 274,057,112 | 91.4%
2125 | 49,443,793 | 16.5%
2118 | 30,610,299 | 10.2%
2111 | 18,950,618 | 6.3%
2100 | 8,920,362 | 3.0%
2075 | 1,609,360 | 0.5%
2050 | 290,351 | 0.1%
2025 | 52,384 | 0.0%

The extrapolation suggests that at our current rate of technological innovation we won’t achieve light speed until the early 2150s. The real problem is that we won’t achieve 0.1c until 2118! That is more than 100 years from today.
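
For readers who want to check the trend themselves, here is one plausible reconstruction of that quick-and-dirty model (an assumption on my part; the post does not state the exact method used): a least-squares straight line fitted to log10(velocity) versus year for the records in the first table, then extrapolated forward.

import numpy as np

# Historical speed records from the first table above
years = np.array([1903, 1913, 1920, 1927, 1976, 2006])
v_ms  = np.array([   3,   50,   76,   91,  980, 16000])   # velocities in m/s

# Fit log10(velocity) = slope * year + intercept
slope, intercept = np.polyfit(years, np.log10(v_ms), 1)

c = 299_792_458.0
year_c   = (np.log10(c) - intercept) / slope         # year the trend reaches c
year_01c = (np.log10(0.1 * c) - intercept) / slope   # year the trend reaches 0.1c
print(f"c by ~{year_c:.0f}, 0.1c by ~{year_01c:.0f}")   # roughly 2152 and 2118

This simple exponential-growth fit lands on essentially the same dates as the table above, which is all the argument needs: on the historical trend alone, 0.1c is about a century away.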

In my opinion this rate of innovation is too slow. Dr. David Neyland, of DARPA’s TTO, was the driving force behind DARPA’s contribution to the 100-Year Starship Study. When I met up with Dr. Neyland during the first 100YSS conference, Sept. 30 to Oct. 2, 2011, I told him “I’ll drop a zero”. That is, I expect interstellar travel to be achievable in decades, not centuries. And to ramp up our rate of technological innovation we need new theories and new methods of sifting through those theories.

Previous post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

It may have gone unnoticed by most, but the first expedition for mankind’s first permanent undersea human colony will begin in July of next year. These aquanauts represent the first humans who will soon (~2015) move to such a habitat and stay, with no intention of ever calling dry land their home again. Further details: http://underseacolony.com/core/index.php

Of all 100 billion humans who have ever lived, not a single human has ever gone undersea to live permanently. The Challenger Station habitat, the largest manned undersea habitat ever built, will establish the first permanent undersea colony, with aspirations that the ocean will form a new frontier of human colonization. Could it be a long-term success?

The knowledge gained from how to adapt and grow isolated ecosystems in unnatural environs, and the effects on the mentality and social well-being of the colony, may provide interesting insights into how to establish effective off-Earth colonies.

One can start to pose the questions: what makes the colony self-sustainable? What makes the colony adaptive and able to expand its horizons? What socio-political structure works best in a small, inter-dependent colony? Perhaps it is not in the first six months of sustainability, but after decades of regeneration, that the true dynamics become apparent.

Whilst one does not find a lawyer, a politician or a management consultant on the initial crew, one can be assured that if the project succeeds, it may start to require other professions not previously considered. At what size of colony does it become important to have a medical team, and not just one part-time doctor? What about teaching skills and schooling for the next generation, to ensure each mandatory skill set is sustained across generations? In this light, it could become the first social project to determine the minimal crew balance for a sustainable permanent off-Earth Lifeboat. One can muse back to the satire of the Golgafrincham B Ark in the Hitch-Hiker’s Guide to the Galaxy, in which Golgafrinchan telephone sanitisers, management consultants and marketing executives were persuaded that the planet was under threat from an enormous mutant star goat, packed into Ark spaceships, and sent to an insignificant planet… which turned out to be Earth. It provides us a satirical reminder that the choice of crew and colony on a real Lifeboat would require the utmost social research.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. It calls on us to extend the boundaries of our knowledge, to advocate new methods, techniques and research, and to sponsor change rather than the status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I discuss the third and final part, Concepts and Logical Flow, of how to read or write a journal paper, something that is not taught in colleges.

A paper consists of a series of evolving concepts expressed as paragraphs. If a concept is too complex to be detailed in a single paragraph, then break it down into several sub-concept paragraphs. Make sure there is logical evolution of thought across these sub-concepts, and across the paper.

As a general rule your sentences should be short(er). Try very hard not to exceed two lines of Letter or A4 size paper at font size 11. Use commas judiciously. Commas are not meant to extend sentences or to divide a sentence into several points! They are used to break a sentence into sub-sentences, to indicate a pause when reading aloud. How you use commas can alter the meaning of a sentence. Here is an example.

And this I know with confidence, I remain and continue …

Changing the position of the commas, changes the meaning to

And this I know, with confidence I remain and continue …

We see how ‘confidence’ changes from the speaker’s assessment of his state of knowledge, to the speaker’s reason for being. So take care.

When including mathematical formulae, always wrap them with an opening paragraph and a closing paragraph. Why? This enhances the clarity of the paper. The opening paragraph introduces the salient features of the equation(s), i.e. what the reader needs to be aware of in the equation(s), an explanation of the symbols, or why the equation is being introduced.

The closing paragraph explains what the author found by stating the equations, and what the reader should expect to look for in subsequent discussions, or even why the equation(s) is or is not relevant to subsequent discussions.

Many of these concept-paragraphs are logically combined into sections, and each section has a purpose for its inclusion. Though this purpose may not always be stated in the section, it is important to identify what it is and why it fits in with the overall schema of the paper.

The basic schema of a paper consists of an introduction, body and conclusion. Of course there are variations to this basic schema, and you need to ask the question: why does the author include other types of sections?

In the introduction section(s) you summarize your case: what your paper is about, and what others have reported. In the body sections you present your work. In the conclusion section you summarize your findings and the future direction of the research. Why? Because a busy researcher can read your introduction and conclusion and then decide whether your paper is relevant to his or her work. Remember, we are working within a community of researchers in an asynchronous manner, as an asynchronous team, if you will. As more and more papers are published every year, we don’t have the time to read all of them completely. So we need a method of eliminating papers we are not going to read.

An abstract is usually a summary of the body of the paper. It is difficult to do well and should only be written after you have completed your paper. That means planning ahead, so that your paper is written and your abstract completed when you receive the call for papers.

An abstract tells us whether the paper could be relevant, and therefore whether to include it in the list of papers to be considered for the shortlist of papers to be read. The introduction and conclusion tell us whether the paper should be removed from that shortlist. If the conclusion fits in with what we want to achieve, then we don’t remove the paper from the shortlist.

I follow a rule when writing the introduction section. If I am writing to add to the body of consensus, I state my case and then write a review of what others have reported. If I am negating the body of consensus, then I write the review of what others have reported first, and only then do I state my case for why not.

As a general rule, you write several iterations of the body first, then the introduction, and finally the conclusion. You’d be surprised by how your thinking changes if you do it this way. This is because you have left yourself open to other inferences that had not crossed your mind between the time you completed your work and the time you started writing your paper.

If someone else has theoretical or experimental results that apparently contradict your thesis, then discuss why and why not, and you might end up changing your mind. It is not a ‘sin’ to include contradictory results, but make sure you discuss them intelligently and impartially.

Your work is the sowing and growing period. Writing the paper is the harvesting period. What are you harvesting? Wheat, weeds or both? Clearly the more wheat you harvest the better your paper. The first test for this is the logical flow of your paper. If it does not flow very well, something is amiss! You the author, and you the reader beware! There is no substitute but to rethink your paper.

The second test is whether you have tangential discussions in your paper that seem interesting but are not directly relevant. Prune, prune and prune. If necessary, split the material into multiple concise papers. A concise, sharp paper that everyone remembers is more valuable than a long one that readers have to plough through.

Go forth, read well and write more.

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. It calls on us to extend the boundaries of our knowledge, to advocate new methods, techniques and research, and to sponsor change rather than the status quo, on five fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationships, and Technological Feasibility.

In this post I discuss part 2 of 3, Mathematical Construction versus Mathematical Conjecture, of how to read or write a journal paper, something that is not taught in colleges.

I did my Master of Arts in Operations Research (OR) at the best OR school in the United Kingdom, University of Lancaster, in the 1980s. We were always reminded that models have limits to their use. There is an operating range within which a model will provide good and reliable results. But outside that operating range, a model will provide unreliable, incorrect and even strange results.

Doesn’t that sound a lot like what the late Prof. Morris Kline was saying? We can extrapolate this further and ask our community of theoretical physicists: what is the operating range of your theoretical model? We can turn the question around and require our community of theoretical physicists to inform us of, or at least suggest, the boundaries where their models fail “ … to provide reasonability in guidance and correctness in answers to our questions in the sciences …”

A theoretical physics model is a mathematical construction that is not necessarily connected to the real world until it is empirically verified or falsified; until then, these mathematical constructions are in limbo. Search the term ‘retrocausality’ for example. The Wikipedia article on retrocausality says a lot about how and why theoretical physics models arise that are not within the range of our informed common sense. Let me quote:

“The Wheeler–Feynman absorber theory, proposed by John Archibald Wheeler and Richard Feynman, uses retrocausality and a temporal form of destructive interference to explain the absence of a type of converging concentric wave suggested by certain solutions to Maxwell’s equations. These advanced waves don’t have anything to do with cause and effect, they are just a different mathematical way to describe normal waves. The reason they were proposed is so that a charged particle would not have to act on itself, which, in normal classical electromagnetism leads to an infinite self-force.”

John Archibald Wheeler and Richard Feynman are giants in the physics community, and these esteemed physicists used retrocausality to solve a mathematical construction problem. Could they not have asked different questions? What is the operating range of this model? How do we rethink this model so as not to require retrocausality?

This unfortunate leadership in retrocausality has led to a whole body of ‘knowledge’ by the name of ‘retrocausality’ that is in a state of empirical limbo and thus, the term mathematical conjecture applies.

Now, do you get an idea of how mathematical construction leads to mathematical conjecture? Someone wants to solve a problem, which is a legitimate quest because that is how science progresses, but the solution causes more problems (not questions) than previously, which leads to more physicists trying to answer those new problems, and so forth .… and so forth .… and so forth .…

In Hong Kong, the Cantonese have an expression “chasing the dragon”.

Disclaimer: I am originally from that part of the world, and enjoyed tremendously watching how the Indian and Chinese cultures collided, merged, and separated, repeatedly. Sometimes like water and oil, and sometimes like water and alcohol. These two nations share a common heritage, the Buddhist monks, and if they could put aside their nationalistic and cultural pride, who knows what could happen?

Chasing the dragon in the Chinese cultural context “refers to inhaling the vapor from heated morphine, heroin, oxycodone or opium that has been placed on a piece of foil. The ‘chasing’ occurs as the user gingerly keeps the liquid moving in order to keep it from coalescing into a single, unmanageable mass. Another more metaphorical use of the term ‘chasing the dragon’ refers to the elusive pursuit of the ultimate high in the usage of some particular drug.”

Solving a mathematical equation always gives a high, and discovering a new equation gives a greater high. So when we write a paper, we have to ask ourselves, are we chasing the dragon of mathematical conjecture or chasing the dragon of mathematical construction? I hope it is the latter.

Previous post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.


…here’s Tom with the Weather.
That right there is comedian/philosopher Bill Hicks, sadly no longer with us. One imagines he would be pleased and completely unsurprised to learn that serious scientific minds are considering and actually finding support for the theory that our reality could be a kind of simulation. That means, for example, a string of daisy-chained IBM Super-Deep-Blue Gene Quantum Watson computers from 2042 could be running a History of the Universe program, and depending on your solipsistic preferences, either you are or we are the character(s).

It’s been in the news a lot of late, but — no way, right?

Because dude, I’m totally real
Despite being utterly unable to even begin thinking about how to consider what real even means, the everyday average rational person would probably assign this to the sovereign realm of unemployable philosophy majors or under the Whatever, Who Cares? or Oh, That’s Interesting I Gotta Go Now! categories. Okay fine, but on the other side of the intellectual coin, vis-à-vis recent technological advancement, of late it’s actually being seriously considered by serious people using big words they’ve learned at endless college whilst collecting letters after their names and doin’ research and writin’ and gettin’ association memberships and such.

So… why now?

Well, basically, it’s getting hard to ignore.
It’s not a new topic, it’s been hammered by philosophy and religion since like, thought happened. But now it’s getting some actual real science to stir things up. And it’s complicated, occasionally obtuse stuff — theories are spread out across various disciplines, and no one’s really keeping a decent flowchart.

So, what follows is an effort to encapsulate these ideas, and that’s daunting — it’s incredibly difficult to focus on writing when you’re wondering if you really have fingers or eyes. Along with links to some articles with links to some papers, what follows is Anthrobotic’s CliffsNotes on the intersection of physics, computer science, probability, and evidence for/against reality being real (and how that all brings us back to well, God).
You know, light fare.

First — Maybe we know how the universe works: Fantastically simplified, as our understanding deepens, it appears more and more the case that, in a manner of speaking, the universe sort of “computes” itself based on the principles of quantum mechanics. Right now, humanity’s fastest and sexiest supercomputers can simulate only extremely tiny fractions of the natural universe as we understand it (contrasted to the macro-scale inferential Bolshoi Simulation). But of course we all know the brute power of our computational technology is increasing dramatically like every few seconds, and even awesomer, we are learning how to build quantum computers, machines that calculate based on the underlying principles of existence in our universe — this could thrust the game into superdrive. So, given ever-accelerating computing power, and given that we can already simulate tiny fractions of the universe, you logically have to consider the possibility: If the universe works in a way we can exactly simulate, and we give it a shot, then relatively speaking what we make ceases to be a simulation, i.e., we’ve effectively created a new reality, a new universe (ummm… God?). So, the question is how do we know that we haven’t already done that? Or, otherwise stated: what if our eventual ability to create perfect reality simulations with computers is itself a simulation being created by a computer? Well, we can’t answer this — we can’t know. Unless…
[New Scientist’s Special Reality Issue]
[D-Wave’s Quantum Computer]
[Possible Large-scale Quantum Computing]

Second — Maybe we see it working: The universe seems to be metaphorically “pixelated.” This means that even though it’s a 50 billion trillion gajillion megapixel JPEG, if we juice the zooming-in and drill down farther and farther and farther, we’ll eventually see a bunch of discrete chunks of matter, or quantums, as the kids call them — these are the so-called pixels of the universe. Additionally, a team of lab coats at the University of Bonn think they might have a workable theory describing the underlying lattice, or existential re-bar in the foundation of observable reality (upon which the “pixels” would be arranged). All this implies, in a way, that the universe is both designed and finite (uh-oh, getting closer to the God issue). Even at ferociously complex levels, something finite can be measured and calculated and can, with sufficiently hardcore computers, be simulated very, very well. This guy Rich Terrile, a pretty serious NASA scientist, cites the pixelation thingy and poses a video game analogy: think of any first-person shooter — you cannot immerse your perspective into the entirety of the game, you can only interact with what is in your bubble of perception, and everywhere you go there is an underlying structure to the environment. Kinda sounds like, you know, life — right? So, what if the human brain is really just the greatest virtual reality engine ever conceived, and your character, your life, is merely a program wandering around a massively open game map, playing… well, you?
[Lattice Theory from the U of Bonn]
[NASA guy Rich Terrile at Vice]
[Kurzweil AI’s Technical Take on Terrile]

Thirdly — Turns out there’s a reasonable likelihood: While the above discussions on the physical properties of matter and our ability to one day copy & paste the universe are intriguing, it also turns out there’s a much simpler and straightforward issue to consider: there’s this annoyingly simplistic yet valid thought exercise posited by Swedish philosopher/economist/futurist Nick Bostrom, a dude way smarter than most humans. Basically he says we’ve got three options: 1. Civilizations destroy themselves before reaching a level of technological prowess necessary to simulate the universe; 2. Advanced civilizations couldn’t give two shits about simulating our primitive minds; or 3. Reality is a simulation. Sure, a decent probability, but sounds way oversimplified, right?
Well go read it. Doing so might ruin your day, JSYK.
[Summary of Bostrom’s Simulation Hypothesis]

Lastly — Data against is lacking: Any idea how much evidence or objective justification we have for the standard, accepted-without-question notion that reality is like, you know… real, or whatever? None. Zero. Of course the absence of evidence proves nothing, but given that we do have decent theories on how/why simulation theory is feasible, it follows that blithely accepting that reality is not a simulation is an intrinsically more radical position. Why would a thinking being think that? Just because they know it’s true? Believing 100% without question that you are a verifiably physical, corporeal, technology-wielding carbon-based organic primate is a massive leap of completely unjustified faith.
Oh, Jesus. So to speak.

If we really consider simulation theory, we must of course ask: who built the first one? And was it even an original? Is it really just turtles all the way down, Professor Hawking?

Okay, okay — that means it’s God time now
Now let’s see, what’s that other thing in human life that, based on a wild leap of faith, gets an equally monumental evidentiary pass? Well, proving or disproving the existence of god is effectively the same quandary posed by simulation theory, but with one caveat: we actually do have some decent scientific observations and theories and probabilities supporting simulation theory. That whole God phenomenon is pretty much hearsay, anecdotal at best. However, very interestingly, rather than negating it, simulation theory actually represents a kind of back-door validation of creationism. Here’s the simple logic:

If humans can simulate a universe, humans are its creator.
Accept the fact that linear time is a construct.
The process repeats infinitely.
We’ll build the next one.
The loop is closed.

God is us.

Heretical speculation on iteration
Ever wonder why older polytheistic religions involved the gods just kinda setting guidelines for behavior, and they didn’t necessarily demand the love and complete & total devotion of humans? Maybe those universes were 1st-gen or beta products. You know, just like it used to take a team of geeks to run the building-sized ENIAC, the first universe simulations required a whole host of creators who could make some general rules but just couldn’t manage every single little detail.

Now, the newer religions tend to be monotheistic, and god wants you to love him and only him and no one else and dedicate your life to him. But just make sure to follow his rules, and take comfort that you’re right and everyone else is completely hosed and going to hell. The modern versions of god, both omnipotent and omniscient, seem more like super-lonely cosmically powerful cat ladies who will delete your ass if you don’t behave yourself and love them in just the right way. So, the newer universes are probably run as a background app on the iPhone 26, and managed by… individuals. Perhaps individuals of questionable character.

The home game:
Latest title for the 2042 XBOX-Watson³ Quantum PlayStation Cube:*
Crappy 1993 graphic design simulation: 100% Effective!

*Manufacturer assumes no responsibility for inherently emergent anomalies, useless
inventions by game characters, or evolutionary cul de sacs including but not limited to:
The duck-billed platypus, hippies, meat in a can, reality TV, the TSA,
mayonnaise, Sony VAIO products, natto, fundamentalist religious idiots,
people who don’t like homos, singers under 21, hangovers, coffee made
from cat shit, passionfruit iced tea, and the Pacific garbage patch.

And hey, if true, it’s not exactly bad news
All these ideas are merely hypotheses, and for most humans the practical or theoretical proof or disproof would probably result in the same indifferent shrug. For those of us who like to rub a few brain cells together from time to time, attempting both to understand the fundamental nature of our reality/simulation and to guess at whether or not we too might someday be capable of simulating ourselves, well — these are some goddamn profound ideas.

So, no need for hand wringing — let’s get on with our character arc and/or real lives. While simulation theory definitely causes reflexive revulsion, “just a simulation” isn’t necessarily pejorative. Sure, if we take a look at the current state of our own computer simulations and A.I. constructs, it is rather insulting. So if we truly are living in a simulation, you gotta give it up to the creator(s), because it’s a goddamn amazing piece of technological achievement.

Addendum: if this still isn’t sinking in, the brilliant Dinosaur Comics might do a better job explaining:

(This post was originally published, I think like two days ago, at technosnark hub www.anthrobotic.com.)

A recent article in Science Daily reported on efforts to measure Cesium-137 and Cesium-134 in bottom-dwelling fish off the east coast of Japan, to understand the lingering effects and potential public health implications. Given that this was the largest accidental release of radiation into the ocean in history, it is not surprising that many demersal fish are found above the limits for seafood consumption. What is more significant is that the contamination in almost all classifications of fish is not declining, suggesting that contaminated sediment on the seafloor could be providing a continuing source. This raises the concern that fallout from any further nuclear accidents would accumulate over time.

One has to question whether the IAEA is taking a strong enough position on the permitted locations of nuclear power stations. It perplexes me that the main objections to Iran attaining nuclear power are strategic/military. Whilst Iran is not at risk from tsunamis as Japan is, Iran is one of the most seismically active countries in the world, where destructive earthquakes often occur, because it is crossed by several major fault lines that cover at least 90% of the country. How robust are nuclear power stations to a major quake? The IAEA needs to expand its role to advise countries on which regions are unsuitable for building nuclear power stations, such as in Iran and Japan. Otherwise we are risking a lasting environmental impact; it is only a matter of time.

How the Diablo Canyon nuclear plant, which sits just miles away from the notoriously active San Andreas fault, was allowed to be located there, let alone operate for a year and a half with its emergency systems disabled (according to a 2010 safety review by the federal Nuclear Regulatory Commission), is baffling. It seems as if there is a missing link worldwide between the IAEA and regional planning authorities. Or perhaps it simply comes down to responsible government.

A new whitepaper/critique on nuclear industrial safety, International Nuclear Services Putting Business Before Safety and Other Essays on Nuclear Safety, asserts specific concern over the 2038 clock-wrap issue in old UNIX/IBM control systems. It is an aggregation of previous contributions to the Lifeboat Foundation on the topic of nuclear safety.

http://environmental-safety.webs.com/apps/blog/

http://environmental-safety.webs.com/nuclear_essays.pdf
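
For context, the 2038 clock-wrap issue mentioned above is the overflow of the signed 32-bit counter that many older UNIX-derived systems use to store time as seconds since 1970. A minimal illustration of where that counter runs out:

import datetime

# The last moment representable by a signed 32-bit Unix time counter
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
last_valid = epoch + datetime.timedelta(seconds=2**31 - 1)
print(last_valid)   # 2038-01-19 03:14:07+00:00; one second later a 32-bit time_t wraps negative

Control systems still running such code in 2038 would see timestamps jump back to 1901, which is presumably the kind of failure mode the whitepaper is concerned about in legacy control systems.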

Comments welcome.