
1) Unchargedness (Reissner disproved)
2) Arise more readily (string theory confirmed)
3) Are indestructible (Hawking disproved)
4) Are invisible to CERN’s detectors (CERN publication disconfirmed)
5) Slowest specimens will stay inside earth (conceded by CERN)
6) Enhanced cross section due to slowness (like cold neutrons)
7) Exponential growth inside earth (quasar-scaling principle)

The final weeks of 2012 will again double the danger that the earth is going to be shrunk to 2 cm after a delay of a few years. No one on the planet demands an investigation. The African Journal of Mathematics did the most for the planet. I ask President Obama to demand a safety statement from CERN immediately. The planet won't forget it. Nor will America the beautiful. P.S. I thank Tom Kerwick, who deleted all my latest postings on Lifeboat, for demanding a "substantiated" posting. I now look forward to his response.

Appendage: “It may Interest the World that I just found T,L,M in Einstein’s 1913 paper on Nordström (“On the present state of the problem of gravitation”) – so that it can no longer be ignored. The result is inherited by the full-fledged theory of general relativity of 1915 but was no longer remembered to be implicit. I give this information to the planet to show that my black-hole results (easy production, no Hawking evaporation, exponential voraciousness) can no longer be ignored by CERN. They call for an immediate stop of the LHC followed by a safety conference. I renew my appeal to the politicians of the world, and especially President Obama, to support my plea. Everyone has the human right to be informed about a new scientific result that bears on her or his survival. I recommend http://www.pitt.edu/~jdnorton/papers/einstein-nordstroem-HGR3.pdf for background information” — 2nd Nov.


…here’s Tom with the Weather.
That right there is comedian/philosopher Bill Hicks, sadly no longer with us. One imagines he would be pleased and completely unsurprised to learn that serious scientific minds are considering and actually finding support for the theory that our reality could be a kind of simulation. That means, for example, a string of daisy-chained IBM Super-Deep-Blue Gene Quantum Watson computers from 2042 could be running a History of the Universe program, and depending on your solipsistic preferences, either you are or we are the character(s).

It’s been in the news a lot of late, but — no way, right?

Because dude, I’m totally real
Despite being utterly unable to even begin thinking about how to consider what real even means, the everyday average rational person would probably assign this to the sovereign realm of unemployable philosophy majors or under the Whatever, Who Cares? or Oh, That’s Interesting I Gotta Go Now! categories. Okay fine, but on the other side of the intellectual coin, vis-à-vis recent technological advancement, of late it’s actually being seriously considered by serious people using big words they’ve learned at endless college whilst collecting letters after their names and doin’ research and writin’ and gettin’ association memberships and such.

So… why now?

Well, basically, it’s getting hard to ignore.
It’s not a new topic, it’s been hammered by philosophy and religion since like, thought happened. But now it’s getting some actual real science to stir things up. And it’s complicated, occasionally obtuse stuff — theories are spread out across various disciplines, and no one’s really keeping a decent flowchart.

So, what follows is an effort to encapsulate these ideas, and that’s daunting — it’s incredibly difficult to focus on writing when you’re wondering if you really have fingers or eyes. Along with links to some articles with links to some papers, what follows is Anthrobotic’s CliffsNotes on the intersection of physics, computer science, probability, and evidence for/against reality being real (and how that all brings us back to well, God).
You know, light fare.

First — Maybe we know how the universe works: Fantastically simplified, as our understanding deepens, it appears more and more the case that, in a manner of speaking, the universe sort of "computes" itself based on the principles of quantum mechanics. Right now, humanity's fastest and sexiest supercomputers can simulate only extremely tiny fractions of the natural universe as we understand it (contrasted to the macro-scale inferential Bolshoi Simulation). But of course we all know the brute power of our computational technology is increasing dramatically like every few seconds, and even awesomer, we are learning how to build quantum computers, machines that calculate based on the underlying principles of existence in our universe — this could thrust the game into superdrive. So, given ever-accelerating computing power, and given that we can already simulate tiny fractions of the universe, you logically have to consider the possibility: If the universe works in a way we can exactly simulate, and we give it a shot, then relatively speaking what we make ceases to be a simulation, i.e., we've effectively created a new reality, a new universe (ummm… God?). So, the question is how do we know that we haven't already done that? Or, otherwise stated: what if our eventual ability to create perfect reality simulations with computers is itself a simulation being created by a computer? Well, we can't answer this — we can't know. Unless…
[New Scientist’s Special Reality Issue]
[D-Wave’s Quantum Computer]
[Possible Large-scale Quantum Computing]
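
For a feel of the "tiny fractions" point above, here is a minimal back-of-the-envelope sketch in Python of why brute-force classical simulation hits a wall (the 16-bytes-per-amplitude figure assumes double-precision complex numbers; the function name is illustrative, not from any library):

    BYTES_PER_AMPLITUDE = 16  # one complex128 amplitude = two 8-byte floats

    def state_vector_bytes(n_qubits: int) -> int:
        # A pure n-qubit quantum state needs 2**n complex amplitudes.
        return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

    for n in (30, 50, 80):
        print(f"{n} qubits -> {state_vector_bytes(n) / 1e9:,.0f} GB")

Around 50 qubits the state vector alone runs to roughly 18 million gigabytes, while a quantum computer simply is those 50 qubits. That asymmetry is why quantum hardware "could thrust the game into superdrive."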

Second — Maybe we see it working: The universe seems to be metaphorically "pixelated." This means that even though it's a 50 billion trillion gajillion megapixel JPEG, if we juice the zooming-in and drill down farther and farther and farther, we'll eventually see a bunch of discrete chunks of matter, or quantums, as the kids call them — these are the so-called pixels of the universe. Additionally, a team of lab coats at the University of Bonn think they might have a workable theory describing the underlying lattice, or existential re-bar in the foundation of observable reality (upon which the "pixels" would be arranged). All this implies, in a way, that the universe is both designed and finite (uh-oh, getting closer to the God issue). Even at ferociously complex levels, something finite can be measured and calculated and can, with sufficiently hardcore computers, be simulated very, very well. This guy Rich Terrile, a pretty serious NASA scientist, cites the pixelation thingy and poses a video game analogy: think of any first-person shooter — you cannot immerse your perspective into the entirety of the game, you can only interact with what is in your bubble of perception, and everywhere you go there is an underlying structure to the environment. Kinda sounds like, you know, life — right? So, what if the human brain is really just the greatest virtual reality engine ever conceived, and your character, your life, is merely a program wandering around a massively open game map, playing… well, you?
[Lattice Theory from the U of Bonn]
[NASA guy Rich Terrile at Vice]
[Kurzweil AI’s Technical Take on Terrile]

Third — Turns out there's a reasonable likelihood: While the above discussions on the physical properties of matter and our ability to one day copy & paste the universe are intriguing, it also turns out there's a much simpler and straightforward issue to consider: there's this annoyingly simplistic yet valid thought exercise posited by Swedish philosopher/economist/futurist Nick Bostrom, a dude way smarter than most humans. Basically he says we've got three options: 1. Civilizations destroy themselves before reaching a level of technological prowess necessary to simulate the universe; 2. Advanced civilizations couldn't give two shits about simulating our primitive minds; or 3. Reality is a simulation. Sure, a decent probability, but sounds way oversimplified, right?
Well go read it. Doing so might ruin your day, JSYK.
[Summary of Bostrom's Simulation Hypothesis]
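
For those who prefer arithmetic to prose, Bostrom's trilemma compresses into a single fraction (notation paraphrased from his paper, so treat the symbols as a sketch rather than gospel):

    f_{\mathrm{sim}} \;=\; \frac{f_P \,\bar{N}\, H}{f_P \,\bar{N}\, H + H} \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}

Here f_P is the fraction of civilizations that ever reach a simulation-capable stage, N̄ is the average number of ancestor-simulations such a civilization runs, and H is the average number of pre-simulation minds per civilization. Option 1 above says f_P ≈ 0, option 2 says N̄ ≈ 0, and if neither holds the fraction of simulated minds is driven toward 1, which is option 3.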

Lastly — Data against is lacking: Any idea how much evidence or objective justification we have for the standard, accepted-without-question notion that reality is like, you know… real, or whatever? None. Zero. Of course the absence of evidence proves nothing, but given that we do have decent theories on how/why simulation theory is feasible, it follows that blithely accepting that reality is not a simulation is an intrinsically more radical position. Why would a thinking being think that? Just because they know it’s true? Believing 100% without question that you are a verifiably physical, corporeal, technology-wielding carbon-based organic primate is a massive leap of completely unjustified faith.
Oh, Jesus. So to speak.

If we really consider simulation theory, we must of course ask: who built the first one? And was it even an original? Is it really just turtles all the way down, Professor Hawking?

Okay, okay — that means it’s God time now
Now let’s see, what’s that other thing in human life that, based on a wild leap of faith, gets an equally monumental evidentiary pass? Well, proving or disproving the existence of god is effectively the same quandary posed by simulation theory, but with one caveat: we actually do have some decent scientific observations and theories and probabilities supporting simulation theory. That whole God phenomenon is pretty much hearsay, anecdotal at best. However, very interestingly, rather than negating it, simulation theory actually represents a kind of back-door validation of creationism. Here’s the simple logic:

If humans can simulate a universe, humans are its creator.
Accept the fact that linear time is a construct.
The process repeats infinitely.
We’ll build the next one.
The loop is closed.

God is us.

Heretical speculation on iteration
Ever wonder why older polytheistic religions involved the gods just kinda setting guidelines for behavior, and they didn't necessarily demand the love and complete & total devotion of humans? Maybe those universes were 1st-gen or beta products. You know, like how it used to take a team of geeks to run the building-sized ENIAC, the first universe simulations required a whole host of creators who could make some general rules but just couldn't manage every single little detail.

Now, the newer religions tend to be monotheistic, and god wants you to love him and only him and no one else and dedicate your life to him. But just make sure to follow his rules, and take comfort that you're right and everyone else is completely hosed and going to hell. The modern versions of god, both omnipotent and omniscient, seem more like super-lonely cosmically powerful cat ladies who will delete your ass if you don't behave yourself and love them in just the right way. So, the newer universes are probably run as a background app on the iPhone 26, and managed by… individuals. Perhaps individuals of questionable character.

The home game:
Latest title for the 2042 XBOX-Watson³ Quantum PlayStation Cube:*
Crappy 1993 graphic design simulation: 100% Effective!

*Manufacturer assumes no responsibility for inherently emergent anomalies, useless
inventions by game characters, or evolutionary cul-de-sacs including but not limited to:
The duck-billed platypus, hippies, meat in a can, reality TV, the TSA,
mayonnaise, Sony VAIO products, natto, fundamentalist religious idiots,
people who don't like homos, singers under 21, hangovers, coffee made
from cat shit, passionfruit iced tea, and the Pacific garbage patch.

And hey, if true, it’s not exactly bad news
All these ideas are merely hypotheses, and for most humans the practical or theoretical proof or disproof would probably result in the same indifferent shrug. For those of us who like to rub a few brain cells together from time to time, attempting both to understand the fundamental nature of our reality/simulation and to guess at whether or not we too might someday be capable of simulating ourselves, well — these are some goddamn profound ideas.

So, no need for hand wringing — let’s get on with our character arc and/or real lives. While simulation theory definitely causes reflexive revulsion, “just a simulation” isn’t necessarily pejorative. Sure, if we take a look at the current state of our own computer simulations and A.I. constructs, it is rather insulting. So if we truly are living in a simulation, you gotta give it up to the creator(s), because it’s a goddamn amazing piece of technological achievement.

Addendum: if this still isn't sinking in, the brilliant Dinosaur Comics might do a better job explaining:

(This post was originally published, I think like two days ago, at technosnark hub www.anthrobotic.com.)

2012 has already been a bad omen for humankind's ability to solve the dangers ahead. Perhaps an early review will make next January 1 brighter.

There has been strong information questioning the existence of Hawking radiation, which is a major reason most scientists think black-hole collider research is safe, yet there has been no increase in calls for a safety conference. Once, with classification keeping it away from the general public, there was a small debate over whether the first atomic explosion would set off a chain reaction that would consume the earth. On March 1, 1954, the lithium that had been put, for other purposes, into what was intended to be a small hydrogen-bomb test created by far the dirtiest atomic explosion ever, as the islanders near Bikini woke up to two suns in the sky that morning. History would be different had the first tests gravely injured people. People in the future will eventually look back at how humankind dealt with the possibility of instantly destroying itself as more important than how it dealt with slowly producing more doomsday-like weapons.

With genetic engineering the results are amazing: goats with hair thousands of times stronger than wool would gain some increased protection from their predators. Think what would happen if foot-long, indigestible fibers, possibly with some sharp spots, accidentally got bred into goat meat, or if very untasty animals spread in the wild throughout the ecosystem. In 2001 a genetic insecticide intended only to protect corn grown for animal feed spread, by wind and cross-breeding, to all corn in the northern hemisphere. Bees drinking corn syrup from one discarded soda can can endanger an entire hive. Now there is fear of this gene getting into wheat, rice and all plants that don't rely on insects in some way. The effort to require food to be labeled for genetically modified ingredients doesn't address the issue and may actually distract from warnings of the dangers ahead.

There are some who say bad people want to play God and create a god particle; likewise, some say an evil Monsanto, with bad motives, is trying to prevent us from buying safe food. This attitude doesn't help create a safer future, or empower those trying to rationally deal with the danger.

The next danger is the attempt to impose Helter Skelter on the world with an Islam-baiting movie. Coptic Christians were told there was to be a movie about their being mistreated in Egypt. Actors were hired to perform in a movie about flying saucers landing 2,000 years ago and helping a man who never needed a shave, had a faraway look in his eyes, had a donkey he loved, and didn't know who his father was. Islam-haters fundraised to create a movie smearing bin Laden while tricking Muslims into seeing it with movie posters in Arabic.

Somehow, despite the false claim of 100 Jewish donors, no one who looked Jewish and rich was attacked in Los Angeles. A prompt exposé by Coptic Christian religious leaders, rather than by someone else, revealed that the person who pretended to be a Coptic Christian refugee and the supposed rich Jewish businessman were the same person. The quick response prevented attacks on Copts in Egypt. None of the relatives of the four Americans killed in Benghazi wanted Romney to use their names for campaign purposes. It is amazing that, of all the people killed in the riots around the world following this hate trailer, none were Americans whose relatives wanted their names used to promote anger at Muslims.

It is a bad omen that this was looked upon by many as a free-speech issue, not a terror attack. When Lebanese Prime Minister Rafik Hariri was killed, to provoke tit-for-tat revenge killings, by a van that had been stolen a year earlier in Japan, the UN took charge of the investigation. When Charles Manson tried to impose a Helter Skelter race war on the earth, he didn't come close enough to warrant being punished for a separate crime. If these two previous terror attacks on the world had been carried out in a way that killed no one in the initial attack, is the earth really dumb enough to discuss them as a free-speech issue?

Michael Jackson's sister, before he died, was alarmed, claiming that his handlers were systematically putting him under stress and putting her brother in harm's way. Conrad Murray this year wants a new trial, insisting that he never would have given Michael such a badly mixed oral dose of anesthesia, and, as no one seems to remember, he had been distracted by a cell-phone call offering an important business deal. The world is full of incidents where professionals commit a crime in such a complex, convoluted way that it is hard to prosecute as a crime. Perhaps all these incidents could be looked into again.

It would be helpful if those stereotyped as not being concerned spoke out: a skydiver warning about the Collider, or an atheist leader and/or smut dealer speaking out on the hateful religious-film attack and calling for investigation and prosecution. This Lifeboat site can accomplish more when it joins in where one stereotypically wouldn't expect it to.

January through October 2012 hasn't been a good omen for humankind's ability to solve its problems and deal with danger; perhaps doing a year in review in October, instead of waiting till January, will make next January's review brighter.

Most blogs stop accepting comments after a while; here, one can still comment below months from now:

http://readersupportednews.org/pm-section/22-22/14022-am…lter-chain

http://richardkanepa.blogspot.com/2012/10/it-is-only-human-t…es_18.html

A systematic decay rate of white dwarf stars in the galaxy is possibly implicit in the data that the LSAG scientists of CERN just sent you and which you kindly forwarded to me.

This preliminary evidence is quite alarming. It allows one to extrapolate to the effects that the same causally implicated agent (black holes) would have when produced on earth in ultra-slow form at CERN. CERN has been attempting this for 2 years – and with maximum luminosity during the remaining weeks of 2012.

Much as the "cold" (slow) neutrons in nuclear fission possess a much larger capture cross-section than fast ones, so the artificial "cold mini black holes" would predictably possess a much larger cross-section than their ultra-fast natural cousins in white dwarfs. Hence the nightmare of but a few years remaining to planet earth would, for the first time, be supported by empirical evidence.
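
For reference, the nuclear analogy leaned on here is the standard low-energy "1/v law" for neutron capture:

    \sigma_{\mathrm{capture}}(v) \;\propto\; \frac{1}{v}

i.e. halving a neutron's speed roughly doubles its capture cross-section; the extrapolation to slow artificial black holes is the analogy being drawn here.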

Can you arrange for a first public dialog with CERN?

Thank you very much,

Sincerely yours,

Prof. Otto E. Rössler, University of Tübingen, Germany

P.S.: An Italian court just convicted 7 scientists for not having predicted an earthquake. This judgment will not prevail, I predict, because it amounts to clairvoyance requested from science by the court. CERN's public behavior for 5 years belongs in an entirely different category, however, since they openly ignore an extant scientific proof of their actively causing the worst conceivable disaster. I kindly advise CERN to stop collisions as of today. And I thank the Lifeboat administration for leaving this text online, for this is not a game. (Compare also a recent German-language newspaper article http://newsticker.sueddeutsche.de/list/id/1374980 .)

I proved that black holes are different – they can only grow in a runaway fashion inside matter.

So if anyone were to produce them on earth, the earth would be doomed as soon as one stayed inside. No one disputes this.

Nevertheless the biggest effort at producing them on earth is going to be made during the next 10 weeks. CERN is staging it.

I do not ask CERN to stop it: I only ask CERN to explain why they do it.

AND I ASK EVERYONE TO LISTEN

The Kline Directive: Theoretical-Empirical Relationship (Part 4)


To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on 5 fronts: Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical Relationship, and Technological Feasibility.

In this post I have updated the Interstellar Challenge Matrix (ICM) to guide us through the issues so that we can arrive at interstellar travel sooner, rather than later:

Interstellar Challenge Matrix (Partial Matrix)

Propulsion Mechanism | Relatively Safe? | Theoretical-Empirical Relationship?
Conventional Fuel Rockets | Yes, but susceptible to human error. | Known. Theoretical foundations are based on Engineering Feasible Theories, and have been evolving since Robert Goddard invented the first liquid-fueled rocket in 1926.
Antimatter Propulsion | No. Extensive gamma-ray production (Carl Sagan); the issue is how one protects the Earth. Capable of an End of Humanity (EOH) event. | Dependent on Millennium Theories. John Eades states in no uncertain terms that antimatter is impossible to handle and create.
Atomic Bomb Pulse Detonation (Project Orion) | No, because one needs to be able to manage between 300,000 and 30,000,000 atomic bombs per trip. | Known and based on Engineering Feasible Theories.
Time Travel | Do not know. Depends on how safely exotic matter can be contained. | Dependent on a Millennium Theory. Exotic-matter hypotheses are untested. No experimental evidence to show that Nature allows a breakdown in causality.
String / Quantum Foam Based Propulsion | Do not know. Depends on how safely exotic matter can be contained. | Dependent on a Millennium Theory. String theories have not been experimentally verified. Exotic-matter hypotheses are untested. Existence of quantum foam is now suspect (Robert Nemiroff).
Small Black Hole Propulsion | No. Capable of an End of Humanity (EOH) event. | Do not know if small black holes really exist in Nature. Their theoretical basis should be considered a Millennium Theory.

It is quite obvious that the major impediments to interstellar travel are the Millennium Theories. Let us review. Richard Feynman (Nobel Prize 1965) & Sheldon Lee Glashow (Nobel Prize 1979) have criticized string theory for not providing novel experimental predictions at accessible energy scales, but other theoretical physicists (Stephen Hawking, Edward Witten, Juan Maldacena and Leonard Susskind) believe that string theory is a step towards the correct fundamental description of nature. The Wikipedia article String Theory gives a good overview, and notes other critics and criticisms of string theories. In What is String Theory? Alberto Güijosa explains why string theories have come to dominate theoretical physics. It is about forces, and especially about unifying gravity with the other three forces.

Note that strings expand when their energy increases, but the experimental evidence behind the Lorentz-FitzGerald transformations tells us that everything contracts with velocity, i.e., as energy is increased.
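
The contraction being appealed to is the standard Lorentz-FitzGerald result for an object moving at speed v:

    L \;=\; L_0 \sqrt{1 - \frac{v^2}{c^2}}

so the measured length shrinks toward zero as v approaches c, even as the object's energy grows without bound.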

In my opinion, the heady rush to a theory of everything is misguided, because there is at least one question that physics has not answered that is more fundamental than strings and particles. What is probability and how is it implemented in Nature?

Probabilities are more fundamental than particles as particles exhibit non-linear spatial probabilistic behavior. So how can one build a theory of everything on a complex structure (particles), if it cannot explain something substantially more fundamental (probabilities) than this complex structure? The logic defies me.

We can ask more fundamental questions. Is this probability really a Gaussian function? Experimental data suggest otherwise: a Var-Gamma distribution. Why is the force experienced by an electron moving in a magnetic field orthogonal to both the electron's velocity and the magnetic field? Contemporary electromagnetism just says it is a vector cross product, i.e., it is just that way. The cross product is another way of saying it has to be a left-hand rule or a right-hand rule. But why?
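
For concreteness, the textbook statement in question is the Lorentz force law:

    \vec{F} \;=\; q\left(\vec{E} + \vec{v}\times\vec{B}\right), \qquad \left|\vec{v}\times\vec{B}\right| = vB\sin\theta

The magnetic part is perpendicular to both the velocity and the field by the very definition of the cross product (right-hand rule); the formula gives the "what" precisely, while the "why" is the question being raised here.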

Is mass really the source of a gravitational field? Could it not be due to quark interaction? Can we devise experiments that can distinguish between the two? Why do photons exhibit both wave and particle behavior? What is momentum, and why is it conserved? Why are mass and energy equivalent?

Can theoretical physicists construct theories without using the laws of conservation of mass-energy and momentum? That would be a real test for a theory of everything!

In my research into gravity modification I found that the massless formula for gravitational acceleration, g = τc², works for gravity, electromagnetism and mechanical forces. Yes, a unification of gravity and electromagnetism. And this formula has been tested and verified with experimental data. Further, a force field is a Non-Inertia (Ni) field, present wherever there is a spatial gradient in time dilations or velocities. This is very different from the Standard Model, which requires that forces be transmitted by the exchange of virtual particles.
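
A quick dimensional check of the quoted formula, under the assumption (suggested by the surrounding text, though the book gives the exact definition) that τ is a spatial gradient of time dilation and therefore carries units of 1/m:

    [g] \;=\; [\tau]\,[c^2] \;=\; \frac{1}{\mathrm{m}} \cdot \frac{\mathrm{m}^2}{\mathrm{s}^2} \;=\; \frac{\mathrm{m}}{\mathrm{s}^2}

The units do come out as an acceleration; the check says nothing, of course, about the physics behind the formula.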

So if there is an alternative model that has united gravity and electromagnetism, what does that say for both string theories and the Standard Model? I raise these questions because they are opportunities to kick start research in a different direction. I answered two of these questions in my book. In the spirit of the Kline Directive can we use these questions to explore what others have not, to seek what others will not, to change what others dare not?

That is why I’m confident that we will have real working gravity modification technologies by 2020.

In concluding this section we need to figure out funding rules to ensure that Engineering Feasible and 100-Year Theories get first priority. That is the only way we are going to be able to refocus our physics community to achieve interstellar travel sooner rather than later.

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

The Kline Directive: Theoretical-Empirical Relationship (Part 3)


To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on 5 fronts:

1. Legal Standing. 2. Safety Awareness. 3. Economic Viability. 4. Theoretical-Empirical Relationship. 5. Technological Feasibility.

In Part 1, we learned that Einstein was phenomenally successful because his work was deeply meshed with the experimental evidence of the day. In Part 2, we learned that to be successful at developing new useful theories and discovering new fundamental properties of Nature that will bring forth new interstellar travel technologies, we need to avoid hypotheses that are not grounded in experimental data, as these are purely mathematical conjectures.

In my book on gravity modification I classified physics hypotheses and theories into 3 categories, as follows:

A. Type 1: The Millennium Theories
These are theories that would require more than 100 years and up to 1,000 years to prove or disprove. Mathematically correct, but beyond the reach of physically verifiable experiments, even in the distant future.

String and quantum gravity theories fall into this category. Why? If we cannot even figure out how to engineer-modify 4-dimensional spacetime, how are we going to engineer-modify a 5-, 6-, 9-, 11- or 23-dimensional universe?

How long would it take using string theories to modify gravity? Prof. Michio Kaku, in his April 2008 Space Show interview, suggested several hundred years. Dr. Eric Davis, in his G4TV interview, suggested more than 100 years, maybe 200 years. So, rightly, by their own admission these are Millennium Theories. It should be noted that Richard Feynman (Nobel Prize 1965) and Sheldon Lee Glashow (Nobel Prize 1979) were against string theory, but their opinions did not prevail.

Even hypotheses that conjecture time travel should be classified as Millennium Theories because they require ‘exotic’ matter. John Eades, a retired CERN senior scientist, in his article Antimatter Pseudoscience, states in no uncertain terms that antimatter is impossible to handle and create in real quantities. Then what about exotic matter?

For that matter, any hypothesis that requires antimatter or exotic matter should be classified as a Millennium Theory.

B. Type 2: The 100-Year Theories
These are theories that show promise of being verified with technologies that would require several decades to engineer, test and prove.

These types of theories do not lend themselves to an immediate engineering solution. The engineering solution is theoretically feasible but a working experiment or technology is some decades away, because the experimental or physical implementation is not fully understood.

Note there is this gap. We do not have 100-Year Theories in our repertoire of physical theories to keep the pipeline supplied with new and different ways to test the physical Universe.

C. Type 3: The Engineering Feasible Theories
These are theories that lend themselves to an engineering solution, today. They are falsifiable today, with our current engineering technologies. They can be tested and verified in the laboratory if one knows what to test for and how to test for these experimental observations.

Today Relativity falls into this category because we have the engineering sophistication to test Einstein's theory, and it has been vindicated time and time again. But there is a very big 'but': Relativity cannot give us gravity modification or new propulsion theories, because it requires mass. We need to stand on Einstein's shoulders to take the next step forward.

Therefore, if we are to become an interstellar civilization, in the spirit of the Kline Directive, we need to actively seek out and explore physics in such a manner as to bring forth Engineering Feasible and 100-Year Theories.

We need to ask ourselves, what can we do, to migrate the theoretical physics research away from Theory of Everything research to the new field of propulsion physics? Gravity modification is an example of propulsion physics. Here is the definition of gravity modification, from my book:

“Gravity modification is defined as the modification of the strength and/or direction of the gravitational acceleration without the use of mass as the primary source of this modification, in local space time. It consists of field modulation and field vectoring. Field modulation is the ability to attenuate or amplify a force field. Field vectoring is the ability to change the direction of this force field.”

Note that by this definition, which requires no mass, relativity, quantum mechanics and string theories cannot be used to theorize propulsion physics. Hence the urgent need to find genuinely new approaches in physics to achieve interstellar travel.

Can we get there? The new physics? To answer this question let me quote Dr. Andrew Beckwith, astrophysicist, Ph.D. (Condensed Matter Theory), who wrote the Foreword to my book:

“I believe that Quantum Mechanics is an embedded artifact of a higher level deterministic theory, i.e. much in the same vein as G. t’Hooft, the Nobel prize winner. In this sense, what Benjamin has done is to give a first order approximation as to what Quantum Mechanics is actually a part of which may in its own way shed much needed understanding of the foundations of Quantum Mechanics well beyond the ‘Pilot model’ of DICE 2010 fame (this is a conference on the foundations of Quantum Mechanics and its extension given once every two years in Pisa , Italy, organized by Thomas Elze).”

Why does Dr. Andrew Beckwith reference quantum mechanics in a book on gravity modification?

Because my investigation into gravity modification led me to the conclusion that gravitational acceleration is independent of the internal structure of the particle. It does not matter if the particle consists of other particles, strings, pebbles or rocks. This led me to ask the question: what is the internal structure of a photon? I found out that the photon probability is not Gaussian but a new distribution, Var-Gamma. Therefore I believe Robert Nemiroff's three-photon observation will be vindicated by other physicist-researchers sifting through NASA's archives for gamma-ray bursts.

Previous post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

http://www.cinemablographer.com/2012/06/last-night.html
… is too early a movie at the present time (although it is nice). I instead reiterate that I cannot understand the stubbornness of a whole planet that refuses to check whether the offered proof of danger contains an error or not. The planet's media never behaved as irresponsibly before: refusing to check is never intelligent or defensible in retrospect, or is it?

A German higher administrative court (OVG Münster, Az. 16 A 591/11) ruled definitively yesterday that the principle of reversal of the burden of proof is not applicable in this case: You have to prove that the potentially earth-eating black holes are actually being produced before you can lawfully object to the ongoing attempt at their production.

———————————————–

Let me reiterate how I see the situation in a manner that is maximally self-critical in a Popperian sense:

The 28-year-old Einstein wrote a paper in 1907 that contained a radically new prediction: clocks located lower down in a gravitational field (like at the base of a high-rise building) tick more slowly, in a locally imperceptible way. GPS later confirmed this.
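
In modern weak-field form, the 1907 prediction reads:

    \frac{d\tau_{\mathrm{low}}}{d\tau_{\mathrm{high}}} \;\approx\; 1 + \frac{\Phi_{\mathrm{low}} - \Phi_{\mathrm{high}}}{c^2} \;<\; 1, \qquad \Phi_{\mathrm{low}} < \Phi_{\mathrm{high}} \le 0

so the lower clock ticks more slowly. In GPS the combined gravitational and velocity effects amount to roughly 38 microseconds per day, which the system must correct for.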

It took quite a number of years until the corollaries automatically attached to this breathtaking result were identified unequivocally: T (clock time) is accompanied by L (meter-stick length) and M (particle mass) and Ch (particle charge), all locally invisibly affected by the same factor, the first two going up, the last two going down. Since some people have difficulty remembering 4 items at once, I call this find Telemach (T, L, M, Ch) for short. Note that Telemachus helped his father Ulysses expel his mother's suitors.

Telemach does not interfere with general relativity: it is implicit in it. But Telemach interferes with some allegedly physical implications of general relativity – Reissner–Nordström and Kerr–Newman amongst them. And the most famous post-Einsteinian black-hole theorems (including wormhole-based time travel and Hawking evaporation) go down the drain as well. Thus there automatically is a strong lobby against Telemach, according to the motto "This is not true and if it is true we don't believe it."

Why do I insist that Telemach is important enough to deserve an attempt at falsification by the scientific community? The reason is the eagerly hoped-for production of black holes ("one every second") at CERN in Switzerland. CERN's scientists did publish a big paper a year ago to the effect that they did not find any. Unfortunately, shunned young Telemach predicts that CERN's sensors cannot detect its most hoped-for success: black holes.

The prediction of an enhanced success rate combined with sensor blindness goes so much against the honor of CERN that they decided 4 years ago to sit it out. No update of the 2008 safety report has appeared since, no talking to the press about dangers any more, no compliance with a court's tactful request to admit a "safety conference" (January 27, 2011), no permission given to "CERN's Young Scientist" to implement their invitation for a talk, made to a CERN critic 2 ½ years ago, no permission for CERN's sister organization (the United Nations Security Council) to conduct an investigation – while simultaneously the fact that this highest terrestrial body had been asked for help blocked all national parliaments from discussing the matter.

No one who is accused of committing a crime – this one could be the worst in history – can be expected to act differently. I sympathize with CERN. Few temptations for mortals are harder to escape. Imagine: you got ten billion dollars to test certain hypotheses with maximally shiny machines – and then it is revealed that if you follow the allotted task an entirely new risk is encountered: those who gave you the money will not be pleased. Will you have to give the money back if you hesitate to continue on schedule? In other words: no one would have acted differently in CERN's place. They just had to crank up their attempt to generate these suddenly allegedly dangerous objects, every year and every month and every day – especially during the last three scheduled months that we are living through right now, before the machine is dismantled for two years at the end of 2012 in favor of an almost doubled-in-performance successor.

So everyone fully understands CERN, both in the present situation and from the point of view of a future intelligence looking back (which we hope will exist). Only some blockheads who still believe in individual virtues like honesty and dignity are protesting: "Is this not a crime?" When the emperor asks you to die along with him, you have no choice but to comply.

Some parents love their children more than the emperor. I see no friend for the end of the world as of yet.

To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on 5 fronts:

1. Legal Standing. 2. Safety Awareness. 3. Economic Viability. 4. Theoretical-Empirical Relationship. 5. Technological Feasibility.

In Part 1 of this post I will explore the Theoretical-Empirical Relationship. Not theoretical relationships, not empirical relationships, but theoretical-empirical relationships. To do this, let us remind ourselves what the late Prof. Morris Kline was getting at in his book Mathematics: The Loss of Certainty: mathematics has become so sophisticated and so very successful that it can now be used to 'prove' anything and everything, and hence the loss of certainty that mathematics will provide sound guidance and correct answers to our questions in the sciences.

The history of science shows that all three giants of science of their times, Robert Boyle, Isaac Newton and Christiaan Huygens, believed that light traveled in an aether medium, but by the end of the 19th century there was enough experimental evidence to show that aether could not be a valid concept. The primary experiment that changed our understanding of aether was the Michelson–Morley experiment of 1887, which showed once and for all that the aether did not have the properties required of a medium in which light travels.
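
As a reminder of what was at stake: had a stationary aether existed, rotating the Michelson–Morley interferometer by 90 degrees should have shifted the interference fringes by roughly

    \Delta N \;\approx\; \frac{2L}{\lambda}\,\frac{v^2}{c^2}

with L the effective arm length, λ the wavelength of the light, and v the Earth's orbital speed of about 30 km/s; for their apparatus this works out to roughly 0.4 of a fringe, far more than the essentially null shift they observed.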

Only after these experimental results were published did a then-unknown Albert Einstein invent the Special Theory of Relativity (SRT) in 1905. The important fact to note here is that Einstein did not invent SRT out of thin air, as many non-scientists and scientists today believe. He invented SRT by examining the experimental data and putting forward a hypothesis, described in mathematical form, for why the velocity of light is constant in every direction, independent of the direction of relative motion.

But he also had clues from others, namely George Francis FitzGerald (1889) and Hendrik Antoon Lorentz (1892), who postulated length contraction to explain the negative outcome of the Michelson-Morley experiment and to rescue the 'stationary aether' hypothesis. Today their work is remembered in the Lorentz-FitzGerald contraction.

So Einstein did not invent the Special Theory of Relativity (SRT) out of thin air, there was a body of knowledge and hypotheses already in the literature. What Einstein did do was to pull all this together in a consistent and uniform manner that led to further correct predictions of how the physics of the Universe works.

(Note: I know my history of science in certain fields of endeavor, and therefore use Wikipedia a lot, not as a primary reference, but as a starting point for the reader to take off for his/her own research.)

Previous post in the Kline Directive series.

Next post in the Kline Directive series.

—————————————————————————————————

Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.

Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.

Today is Felix Baumgartner day, since creativity wins. And today I saw an interesting dialog about my potentially planet-saving results on the Internet. It was conducted by amateur physicists ( http://www.sciforums.com/showthread.php?113769-Invited-%28pe…-R%F6ssler ) who have thereby earned great merit, since the whole rest of the profession refuses to come out.

The young colleagues tried to convince themselves and their readers that my "Telemach" result, which has planet-saving potential if flawless, violates textbook and wiki wisdom and therefore is bound to be false.

Nevertheless I am very grateful to Mr. "rpenner" (pseudonym) and his friends for being the only scientists so far who dare to come out in a not totally anonymous way.

The emphasis they place on the Rindler metric at the beginning is especially meritorious. The Rindler metric is arguably the most important post-Einsteinian discovery. It implies the Telemach theorem – on whose truthfulness, as no one denies, the survival of the planet is predicated.

But is the Rindler metric not well known, and has no one ever extracted fundamental new implications from it? Let me take this topic up for you.

The Rindler metric describes a one-light-year-long rocketship which at the tip has 1 g acceleration (earth’s gravity) and at the rear end has infinite acceleration. It consists of a very large number of “rocket rings” lined up between tip and bottom that all stick spontaneously together without touching because their constant accelerations vary in a lawfully graded manner. The best textbook still is Robert M. Wald’s “General Relativity” of 1984. It correctly reproduces (on page 151) the everywhere equal ticking times valid over the whole length of the ship – which, however, do not reproduce the local clocks’ readings, as the book correctly stresses. The local clocks rather tick more and more slowly towards the tail end to become effectively frozen there. This “local reality” of unit time intervals T inside the Rindler rocket has three corollaries: L (a meter stick’s length) is locally imperceptibly increased in proportion to T; M (a unit mass like that of an electron) is locally imperceptibly decreased by the same factor; and Ch (a unit charge) is likewise locally imperceptibly reduced in proportion.
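
For readers who want the textbook form behind this description: in coordinates adapted to the accelerated observers, and with c = 1, the Rindler metric can be written

    ds^2 \;=\; -\,x^2\,dt^2 + dx^2 + dy^2 + dz^2

where an observer at fixed x has proper acceleration α = 1/x (α = c²/x with units restored) and proper time dτ = x dt. The acceleration therefore diverges as x → 0, the "rear end," and equals one g at x = c²/g ≈ 0.97 light-years, which is where the one-light-year length of the rocketship comes from; and clocks nearer the rear tick ever more slowly relative to the tip, exactly as described above.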

This is maximally strange, since the same rocketship – when its acceleration is briefly interrupted everywhere in the external simultaneity, while it is momentarily at rest along the horizontal axis – is not infinitely long (only one light year long) and is not infinitely mass-reduced at the tail, nor charge-reduced there. Nevertheless these interior artifacts are "ontological": an astronaut who descends inside from the rocket's tip is, after having been hauled back up again, indeed empirically younger on return, in accordance with the equations. Thus it is the internal picture T, L, M, Ch ("Telemach"), and not the external one, which proves to be physically relevant.

My message is that this new ontology is presently being neglected by humankind, at the risk of self-extinction. A "safety conference" is all that I have been requesting for 4 years.

It will be my privilege – and perhaps not only mine – to learn more about these matters in the continued dialog with Mr. rpenner and his friends, Ms. Trooper and the others, on this forum here. Or, if they so prefer, on theirs, mirrored here, since the present discussion started out on Lifeboat. And if we are lucky we will even be granted a word of kind advice from grandmaster Wolfgang Rindler himself.