
On a casual read last week of Duncan R. Lorimer's well-regarded review Binary and Millisecond Pulsars (2005), I noted the reference to the lack of pulsars with P < 1.5 ms. The review cites only a suggestion that this is due to gravitational wave emission from R-mode instabilities; no solid reason has been offered for their absence from our Universe. Since the surface magnetic field strength of such pulsars (B ∝ (P·Ṗ)^(1/2)) would be lower than that of other pulsars, one could equally suggest that the lack of sub-millisecond pulsars is due to their weaker magnetic fields allowing cosmic-ray (CR) impacts to result in stable micro black hole (MBH) capture.

If one interprets the 10^8 G field strength adopted by G&M as an approximate cut-off below which MBH are likely to be captured by neutron stars, then one would have some phenomenological evidence that MBH capture results in the destruction of neutron stars into black holes. Note that more typical observed neutron stars have fields of around 10^12 G, a factor of 10^4 above the borderline-existence cases used in the G&M analysis, and so are much less likely to capture.

That is not to say that MBH capture would equate to a certain danger for a planet such as Earth, where the density of matter is much lower and accretion rates are much more likely to be lower than radiation rates, an understanding backed up by the 'safety assurance' drawn from observational evidence of white dwarf longevity. It does, however, take us back to the question of whether Hawking Radiation, an unobserved theoretical phenomenon, may be nowhere near as effective as theoretical analysis suggests, regardless of the frequently mentioned theorem here on Lifeboat that states Hawking Radiation should be impossible. This oft-mentioned concern of 'what if Hawking is wrong' is of course addressed by the detailed G&M analysis, which set about proving safety in the scenario that Hawking Radiation is ineffective at evaporating such phenomena.

Doubts about the neutron star safety assurance immediately make one question how reliable the safety assurance of white dwarf longevity is. My belief has been that the white dwarf safety assurance is highly rational (it is derived in a few short pages of the G&M paper and has not been particularly challenged, except for the hypothesis that they may have over-estimated TeV-scale MBH size, which would reduce their likelihood of capture). It is quite difficult to imagine a body as dense as a white dwarf not capturing any such hypothetical stable MBH over its lifetime of CR exposure, which validates the G&M position that accretion rates therein must be vastly outweighed by radiation rates; the even lower accretion rates on a planet such as Earth would then be even less of a concern. However, given the gravity of the analysis, the various assumptions on which it is based perhaps deserve greater scrutiny, underscored by a recent concern that 20% of the mass/energy in current LHC collisions is unaccounted for.

Pulsars are often considered among the most accurate references in the Universe due to their regularity and predictability. How ironic if those pulsars which are absent from the Universe also provided a significant measurement.

Binary and Millisecond Pulsars, D.R. Lorimer: http://arxiv.org/pdf/astro-ph/0511258v1.pdf
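As a rough sanity check on the field strengths quoted above, here is a minimal Python sketch of the standard characteristic-field estimate from magnetic-dipole spin-down, B ≈ 3.2×10^19 (P·Ṗ)^(1/2) G, which is the proportionality above with its conventional normalisation. The example P and Ṗ values are representative figures chosen for illustration, not numbers taken from the Lorimer review.

```python
import math

def surface_field_gauss(period_s: float, pdot: float) -> float:
    """Characteristic surface dipole field B ~ 3.2e19 * sqrt(P * Pdot) gauss.

    Uses the canonical neutron-star moment of inertia and radius assumed in
    the standard magnetic-dipole spin-down estimate.
    """
    return 3.2e19 * math.sqrt(period_s * pdot)

# Representative (illustrative) values:
print(f"Normal pulsar      (P = 0.5 s, Pdot = 1e-15): {surface_field_gauss(0.5, 1e-15):.1e} G")   # ~1e12 G
print(f"Millisecond pulsar (P = 3 ms,  Pdot = 1e-20): {surface_field_gauss(3e-3, 1e-20):.1e} G")  # ~1e8 G
```

The two outputs differ by roughly the factor of 10^4 discussed above.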

Hawking radiation has been dead ever since the Telemach result and its precursors surfaced on the web. No one has ever defended Hawking, including his own heroic voice.

The same holds true for CERN’s detectors. They are blind to its most touted anticipated success – black hole production – by virtue of the said theorem. Again not a single word of defense.

This is why a court asked CERN and the world for a safety conference on January 27, 2011.

The press cannot continue shielding the world, and Lifeboat must be relieved of having to carry, single-handedly, the burden of informing an otherwise lifeboat-less planet.

Russia’s hastily convened international conference in St. Petersburg next month is being billed as a last-ditch effort at superpower cooperation in defense of Earth against dangers from space.

But it cannot be overlooked that this conference comes in response to the highly controversial NATO anti-ballistic-missile deployments in Eastern Europe. These seriously destabilizing nuclear defenses are justified under the pretext of countering a non-nuclear Iran. In reality, the western moves of anti-missile systems into Poland and Romania create a de facto nuclear first-strike capability for NATO, and they vacate a series of Anti-Ballistic Missile Treaty arrangements with the Russians that go back forty years.

Deeply distrustful of these new US and NATO nuclear first-strike capabilities, the Russians announced they will not attend NATO's planned deterrence summit in Chicago this month. Instead, they are testing Western intentions with a proposal for a cooperative project of near-space mapping, surveillance, and defense against Earth-crossing asteroids and other dangerous space objects.

The Russians have invited NATO members as well as forward-thinking space powers to a conference in June in St. Petersburg. The agenda: planetary defense against incursions by objects from space. It would be a way of making cooperative plowshares from the space technologies of hair-trigger nuclear terror (2 minutes warning, or less, in the case of the Eastern European ABMs).

It’s an offer the US and other space powers should accept.

Telemach Makes Black Holes Dangerous – No Suitor Ready to Disarm Him as of Yet

The T is uncontroversial: no one questions that clock rate T is reduced more downstairs in the way described by Einstein in 1907 – his “happiest thought” as he always said. But if the clocks are indeed ontologically slower-ticking down there (as the gravitational twin clocks experiment implicit in the G.P.S. proves to the eye every day), then other physical quantities valid down there, besides clock rate T, are automatically affected by the same Einstein factor: Length L, mass M and charge Ch. This is the T-L-M-Ch theorem.
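Written compactly, with 1 + z denoting the gravitational redshift factor between the lower location and a distant observer, and with T taken as the duration of one clock tick (so that a slower clock has a larger T), the asserted scalings read as follows; the notation and the choice of 1 + z as the common factor are used here only for illustration.

```latex
% The claimed T-L-M-Ch scalings, for quantities at the lower location
% as judged from far away (1 + z = gravitational redshift factor).
\begin{align*}
  T_{\text{lower}}  &= (1+z)\,T_{\text{far}}  && \text{clock periods lengthened (slower ticking)}\\
  L_{\text{lower}}  &= (1+z)\,L_{\text{far}}  && \text{lengths increased}\\
  M_{\text{lower}}  &= M_{\text{far}}/(1+z)   && \text{masses decreased}\\
  Ch_{\text{lower}} &= Ch_{\text{far}}/(1+z)  && \text{charges decreased}
\end{align*}
```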

Metrologists are responsible for the famous Ur-meter, the famous (and quite expensive) Ur-kilogram and the well-known Ur-charge of the electron. The whole profession is keeping a low profile at present, being unable to defend the three dethroned constants against the onslaught of the Telemach revolution. The Ur-kilogram is ready to be auctioned at Sotheby's. All distances in the universe have acquired new values, several new constants of nature have arisen, and Einstein's constant "c" has become a global constant. The field has greatly gained in clarity.

It would be too nice if more colleagues cared to contribute to the more consistent picture of general relativity obtained here – independently described, with a wealth of new formulae, by Richard J. Cook (see his paper "Gravitational space dilation"). The implied connection to the properties of black holes makes the new results even more exciting. I plead that doctoral students be allowed to work in this newly promising branch of physics.

Imagine: a Whole Planet Betting its own Survival on your being Wrong

What I showed is that Einstein’s happiest thought – that clocks on a lower floor tick more slowly – possesses 3 corollaries (impossible to spot in 1907): size and mass and charge are affected by the same factor (the former going up, the latter two going down). No colleague on the planet objects to “Telemach,” as the result involving T, L, M, Ch is called.

But the planet accepts like sheep that the LHC experiment continues: this even though the most hoped-for products – artificial black holes – have become more probable; are undetectable to its sensors; and, last but not least, are going to shrink the earth to 2 cm in perhaps 5 years.

No one believes any more in big progress being possible through meticulous thought today. But: must really every child’s life be bet on this current complacency?

Dear Cologne Administrative Court: thank you for having endorsed the necessity of the “safety conference” in your final statement made unto CERN on January 27, 2011.

I Am mildly Disappointed that None of the Young Scientists Tries to Get…

… a piece of the cake by elaborating on Telemach. I say so not because the Telemach theorem is a major new result affecting the planetary safety of the famous LHC experiment at CERN, Switzerland, but because he or she would thereby re-open the door to fame for the "planetary" young generation, like the members of Neil Turok's genius school in South Africa. (Telemach was published in Africa.)

By the way: dismantling Telemach is no less rewarding a task – it is actually the outcome I would personally prefer in view of the danger that its correctness brings with it. But to either end, one first has to understand it, which apparently none of the more senior physicists and mathematicians of the planet has so far achieved. The explanation, in my eyes, lies in the intimidating simplicity of Telemach. (I am, by the way, not alone in having found the theorem: Richard J. Cook of Colorado Springs was earlier with T, L, M but kindly acknowledges Ch.) Now it is your turn.

Why do I expect to be taken seriously and given the benefit of the doubt? It is because I care. Only human beings know about truth, because only humans can trust each other about facts. It is because of the invention-out-of-nothing, made at a very young age, of the suspicion that benevolence is being extended towards them. This invention turns them into persons, because only a person can understand benevolence.

So the refusal by CERN to offer a counterproof to the presented proof that they are playing with fire (a big fire) violates my rights as a person. The benefit of the doubt is a human right to solicit – especially so in science which rests on nothing else.

My friend Tom Kerwick has a result whose proof contains a loophole, if I am not mistaken, but it takes time to come to the point with him. He therefore believes the danger is not there and innocently censors my best blogs. Maybe he will talk to me after this one.

But the real question is: What is benevolence? How come a planet can become dependent on the essence of benevolence being understood? Is it not well understood by the human society? Amazingly, this is not the case.

I have an "animal model", or an artificial-intelligence model if you prefer. It presupposes that you believe in the brain equation, or more generally that human beings and other vertebrates are autonomous optimizers governed by an optimality functional shaped by evolution. (The underlying mathematical problem happens to be well-defined; it is a variant of the famous traveling salesman problem.)

If so, human beings are “just” autonomous optimizers? All animals are. But what, then, is special about humans? Answer: their being persons. Are they persons from birth? No – only person-competent from birth. When do they become persons? At the moment they first invent the suspicion of benevolence. For benevolence is a person-property. How does the suspicion arise? Through a creative misunderstanding (which then turns out to be none): By their being rewarded by an adult’s displayed happiness about their own being happy at this moment.

But: Is this not exactly the same with a young wolf who is rewarded by the tail-wagging of a feeding adult who is rewarded by the youngster’s tail-wagging?

The answer is in the negative (independently of the still unknown answer to the question of whether a pup can already be rewarded by an adult's tail-wagging). But is not the tail-wagging an expression both of affection and of happiness – just as this holds true for smile-laughter in the human species? This is correct. Then why do dogs not insist on truthfulness, too?

What wolves lack compared to humans is mirror-competence. On the other hand, all of the other mirror-competent animals known (elephants, apes, dolphins, magpies) are non-rewardable by the displayed joy of a companion. This trait, common to wolves and humans as we saw, is a maximally rare consequence of random evolutionary "ritualization" in Huxley's sense.

But: we could substitute artificially for that particular lacking trait in individual interaction with a young bonding animal taken from one of the just-named species. A sperm whale has the largest and most complex brain on earth and thus is, hardware-wise, the most intelligent creature in the universe at the present state of human knowledge. Leo Szilard still thought this was the dolphin (in his fictional story "The Voice of the Dolphins", written shortly after the catastrophe of the dropped bombs which he had made possible and then tried in vain to hold back).

The trick with the dolphins or any other of the named species: to consistently combine our own bonding signal of smile-laughter, in the interaction with the genuinely loved foster child from the other species, with the natural bonding expression of that species. Then the same “misunderstanding” of benevolence being suddenly suspected will again arise just as it does in the human playroom.

But this would mean that these animals are just as person-competent as human babies? This is correct. Understanding the love radiated by the dream-of-life giving instance (in sync with Mom's smile) is not a human prerogative, it appears. A higher personal intelligence than ours is artificially achievable – without being man-made and without any artificial hardware required, decades before Ray Kurzweil's proven "singularity" but in the same loving spirit.

Now maybe you hate me for being that optimistic – unless you have an autistic child who can suddenly be causally rescued from his smile-blindness (which is easy to substitute for, as we saw – acoustically in our case). But I am here in the pleading position, not the giving position, as I mentioned at the beginning: I insist on my own right as a person not to be denied the answer to a maximally serious question posed to the scientific community: why is my danger-proving result not cogent – is anyone able to come up with a counter-proof?

We can leave A.I. at this point (there is a whole book on him titled “Neosentience”) since an even more life-saving issue is in the background in which I need your help: My begging the scientific community for the benefit of the doubt towards my Telemach result (a second charming youth beside Spielberg’s A.I.): Telemach says that black holes have radically different properties than are innocuously presupposed by the makers of the CERN experiment.

Why can I be so sure that I am right? It is because no one could offer a counterargument up until now. Except for saying that some accepted pieces of knowledge are then no longer valid as I had shown – which they do not believe but cannot defend against my given proof.

Okay: if I am sure so far – what does this mean? I am clamoring for the benefit of the doubt: that no one act against my results until they have been disproved. Like the famous chimpanzee mother in the documentary who desperately waves her arms to prevent the human aggressor from shooting – in vain. For continuing the LHC experiment in Switzerland is maximally risky if I am right. But who am I to request the human right to be falsified before being overrun?

Is this not maximally absurd: a single person requesting humankind to listen to him? It would be absurd if I had not "bent over backwards", in the words of Richard Feynman, to make it maximally easy to show that I am wrong – if I am wrong. And, decisively, it is not absurd because of the unfathomably large consequences if my result is acted against while being true.

So I am a terrorist holding a threat in my hands? It is the other way round: The planet is holding a “device” in its hands and is denying me the right to warn it in time that the toy machine is loaded. But my friend Tom Kerwick has a counterargument to offer as he says? I am all ears and eyes to be presented with it in a way I understand – so far, I admit, I have been too stupid to see its cogency.

I am ready to cancel all my warnings with my most humble apologies if he is right. Which should be easy to find out since his argument only has to do with large numbers of particles generated and with hypothetical diameters of quarks and miniature black holes. I publicly apologize for having used up so much space on Lifeboat so far if he succeeds in making me understand his point and the latter survives to the best of my understanding. Please, dear Tom: do not erase this one. I am all on your side, only a bit slow – okay?

Einstein realized in the last decade of his life that only a world government can overcome war and hatred on the planet. And he believed he had acquired the right to demand this acutely – in view of the nuclear winter being a real threat in the wake of his own contributions to physics.

His main discovery, however, is the "twin clocks paradox," overlooked even by his greatest competitor. It is not just a physical discovery but much more. The travelled twin got transported along the time axis at a different (reduced) rate, so he will be standing younger in age beside his twin brother upon return. This is an ontological change which no one else would have dared consider possible: interfering with the inexorable fist that pushes us all forward along the time axis!

This is Einstein’s deepest discovery. He topped it only once: when he discovered, two years later in 1907, that clocks “downstairs” are rate-reduced, too. The “second twins paradox” in effect.

The word "paradox" is a misnomer: "miracle" is the correct word. Imagine staying the hands of time! So everybody sees that what you have wrought is a miracle (a Western shaman presenting a tangible feat – a Brothers Grimm fairy tale brought to life – a Jewish miracle revived: "the Lord can be seen").

Why do I point you to Einstein, the sorcerer? It is because we had better listen to him. Presently, the whole planet denies his legacy, as once before. To deliberately overlook his second twins paradox amounts to consciously risking the planet for the second time in a row.

The ontologically slowed clocks (downstairs) are not just slower-ticking: they also are proportionally mass-reduced, size increased and charge-reduced. This corollary to Einstein’s 1907 result, called Telemach (since T, L, M, Ch are involved), stays uncontested.

Unfortunately – or rather fortunately, I hope in time – a famous nuclear experiment turns out to be planet-threatening. Technically speaking, the second twins paradox implies that the artificial black holes CERN is presently attempting to produce: 1) cannot be detected at CERN, 2) are more likely to arise, and 3) will, owing to quantum mechanics, electromagnetism and chaos theory, eat the planet inside out in a few years' time so that only a 1.8 cm black residue remains.

So dangerous is Einstein still, 57 years after his passing away? This time around, he is imploring us again while taking off his glasses and smiling into the camera: “please, dear children, do not continue a nuclear experiment that you cannot monitor while ontological implications stand on the list.”

The safety conference, rejected by the Cologne Administrative Court on January 27, 2011, is number 1 on Einstein's agenda:

The nuclear experiment must be stopped immediately!

The nascent world government is openly asking for this today: This is “Einstein’s miracle.”

[Disclaimer: This contribution does not reflect the views of the Lifeboat Foundation, nor of the scientific community in general, but individual sentiment — Web Admin]

There is not the slightest alleviation of danger so far. All I can record is a stalling in favor of letting CERN continue till the end of the year – its present goal. No immediate safety discussion with CERN is planned by any organization, if I am informed correctly.

I would very much like to understand the mechanism: How is it possible that so many grown-up persons collude in a game of hide-and-seek: What do they gain by refusing to think and, most of all, discuss?

Their neglect of rationality is unprecedented. Imagine: A whole profession being too weak to find a single counterargument against the reproach of trying to vaporize the planet into a black hole in a few years’ time – with not a single member speaking up in objection!

Historians of the future – whom I congratulate if still existing – will be unable to believe their records: A phase of humankind’s history in which the fear of sizing up a danger in time vastly outshines the danger itself – even though the latter is infinite.

Is this really a planet of grown-up persons who have children whom they ought to care for? Please, dear director Heuer of CERN: Do kindly present to the planet a single scientist who defends your stance by a scientific argument offered against the published proof of danger. For those who forgot:

1) black holes do not evaporate (Telemach)
2) black holes are uncharged and hence un-sticky and undetectable at CERN (Telemach)
3) minimal black holes must be very small to leave white dwarfs unscathed when produced on their surface at nearly the speed of light (observation)
4) black holes leave neutron stars unscathed owing to the latter's superfluidity (quantum)
5) black holes grow exponentially inside matter by the quasar hierarchy theorem (chaos)
Any one of these five results, if shown to be false, exculpates CERN with respect to black holes.

Or maybe you can formulate a counterargument yourself, dear director Heuer?

The avid reader of Lifeboat may have noticed that the debate on LHC safety assurances has recently swerved here towards discussion on astronomical phenomenology — mainly the continued existence of white dwarfs and neutron stars.

The detailed G&M safety report naturally considers both of these, estimating hypothetical stable MBH capture rates from a weak CR background flux. It overlooks, however, better examples: white dwarfs that are part of a binary pair, such as Sirius B, the little companion to one of our closest and brightest stars, Sirius A.

One could argue that white dwarfs are not greatly understood, but the factors relevant to the safety debate are well understood: density, mass, escape velocity, and the approximate age of the observed objects. Only magnetic field effects are up for debate.

If Sirius B captured even one such MBH due to CR bombardment from its companion star in the first, say, 20 million years of its existence (and it would be difficult to argue that it would not), then that MBH would have been accreting for the last 100 million years, through far denser material, and most likely at a much higher velocity, than any MBH captured by the Earth due to LHC collisions. Given the continued existence of Sirius B, accretion rates there would have to be incredibly slow, so the much slower MBH accretion rate here would pose no significant threat to Earth.
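A minimal back-of-envelope sketch of that density comparison, assuming (as a crude, Bondi-like simplification) that the accretion rate of a trapped MBH scales roughly linearly with the ambient density of the host body, with differences in sound speed, temperature and MBH velocity ignored. The density figures are representative order-of-magnitude values, not numbers taken from the G&M paper.

```python
# Compare how fast a trapped MBH would accrete inside Sirius B versus inside
# the Earth, under the crude assumption that the rate scales linearly with
# the mean density of the surrounding material (all other factors held fixed).

rho_sirius_b = 2.5e9   # kg/m^3, approximate mean density of Sirius B (illustrative)
rho_earth    = 5.5e3   # kg/m^3, mean density of the Earth

ratio = rho_sirius_b / rho_earth
print(f"Accretion inside Sirius B would run roughly {ratio:.0e} times faster than inside Earth")
# => ~5e5: if ~1e8 years of accretion inside a white dwarf has caused no visible
#    harm, the corresponding process inside the Earth would be slower still by
#    several orders of magnitude.
```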

In this context, any argument promoting the oft-rubbished T-L-M-Ch theorem actually provides a safety assurance: given that accretion rates must be negligible, Telemach's refutation of HR would also remove any risk of a heating or micro-explosive effect due to Hawking Radiation. It is quite a paradox, then, that Prof. O.E. Rossler, who derived Telemach, has championed it as a safety concern…

In the volatile world we live in today, it is unfortunate that other issues are over-dominated by the debate on the safety of one particular industry here that may be no threat whatsoever. It was with pleasure that I recently read The Chaos Point: The World at the Crossroads by Ervin Laszlo. As far as I recall, our particle colliders hardly got a mention at all.

And finally, to share a very low-key 'Earth Day' gig in my local town last weekend that I was happy to attend: '(a) choose or create a pledge, (b) once committed you must try and stick to your pledge to the end, and (c) try to start an eco revolution'. It's a wonderful world.

Relating Black Holes to Old Faithful exploding into a huge volcano and other disasters

Some on this site think there is something unique about the Black Hole controversy. It does affect the whole planet. But most people don't consider humancide worse than genocide, and consider it not as bad as destroying all life. Americans and Canadians might suffer in an almost total way if the Old Faithful geyser and Yellowstone National Park become a newly active volcano. http://www.phillyimc.org/en/bee-colony-collapse-and-dealing-disaster

How hard is it to assess which risks to mitigate? It turns out to be pretty hard.

Let's start with a model of risk so simplified as to be completely unrealistic, yet one that still retains a key feature. Suppose that we managed to translate every risk into some single normalized unit of "cost of expected harm". Let us also suppose that we could bring together all of the payments that could be made to avoid risks. A mitigation policy given these simplifications would be pretty easy: just buy each of the "biggest for your dollar" risk reductions, in order, until the budget runs out.

Not so fast.

The problem with this is that many risk mitigation measures are discrete. Either you buy the air filter or you don't. Either your town filters its water a certain way or it doesn't. Either we have the infrastructure to divert the asteroid or we don't. When risk mitigation measures become discrete, allocating the budget becomes trickier. Given a budget of 80 "harms" to reduce and measures worth 50, 40, and 35, buying the 50 uses up enough of the budget that neither the 40 nor the 35 fits: you avoid only 50 "harms", whereas buying the 40 and the 35 together would have avoided 75 within the same budget, so 25 "harms" that you were willing to pay to avoid are left on the table.

Alright, so how hard can this be to sort out? After all, just because going big isn't always the best use of your budget doesn't mean the right combination is hard to find. Unfortunately, this problem is also known as the "0-1 knapsack problem", which computer scientists know to be NP-complete. This means that there is no known procedure for finding exact solutions in time polynomial in the size of the input, so in the worst case one must search through a good portion of the potential combinations, taking an exponential amount of time. The sketch below illustrates both the greedy shortfall and an exact solution for the toy numbers above.
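To make the point concrete, here is a small Python sketch using the toy numbers above. Purely for illustration it assumes each measure's cost equals the harm it avoids, and it brute-forces the exact answer, which is fine for three items; in general a pseudo-polynomial dynamic program exists when costs are small integers, but the worst case remains exponential in the input size.

```python
from itertools import combinations

budget = 80
measures = [50, 40, 35]   # harm avoided == cost, purely for this toy example

# Greedy: always buy the biggest measure that still fits the remaining budget.
remaining, greedy_total = budget, 0
for m in sorted(measures, reverse=True):
    if m <= remaining:
        remaining -= m
        greedy_total += m

# Exact: brute-force over all subsets of measures that fit within the budget.
best_total = max(
    sum(subset)
    for r in range(len(measures) + 1)
    for subset in combinations(measures, r)
    if sum(subset) <= budget
)

print(f"Greedy buys {greedy_total} of harm reduction")   # 50
print(f"Optimal buys {best_total} of harm reduction")    # 75 (the 40 and the 35)
```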

What does this tell us? First of all, it means that it isn’t appropriate to expect all individuals, organizations, or governments to make accurate comparative risk assessments for themselves, but neither should we discount the work that they have done. Accurate risk comparisons are hard won and many time-honed cautions are embedded in our insurance policies and laws.

However, as a result of this difficulty, we should expect that certain short-cuts are made, particularly cognitive short-cuts: sharp losses are felt more sharply, and have more clearly identifiable culprits, than slow shifts that erode our capacities. We therefore expect our laws and insurance policies to be biased towards sudden unusual losses, such as car accidents and burglaries, as opposed to a gradual increase in surrounding pollutants or a gradual decrease in salary as a profession becomes obsolete. Rare events may also fail to be captured by these processes of legal and financial adaptation. We should also expect people to pay more attention to issues they have no "control" over, even if the activities they do control are actually more dangerous. We should therefore be particularly careful of extreme risks that move slowly and depend upon our own activities, as we are naturally biased to ignore them compared to more flashy and sudden events. For this reason, models, games, and simulations are very important tools for risk policy. For one thing, they make these shifts perceivable by compressing them. Further, they can move longer-term events into the short-term view of our emotional responses. However, these tools are only as good as the information they include, so we also need design methodologies that aim to discover information broadly and help avoid these biases.

The discrete, “all or nothing” character of some mitigation measures has another implication. It also tells us that we wouldn’t be able to make implicit assessments of how much individuals of different income levels value their lives by the amount they are willing to pay to avoid risks. Suppose that we have some number of relatively rare risks, each having a prevention stage, in which the risks have not manifested in any way, and a treatment stage, in which they have started to manifest. Even if the expected value favors prevention over treatment in all cases, if one cannot pay for all such prevention, then the best course in some cases is to pay for very few of them, leaving a pool of available resources to treat what does manifest, which we do not know ahead of time.

The implication for existential and other extreme risks is we should be very careful to clearly articulate what the warning signs for each of them are, for when it is appropriate to shift from acts of prevention to acts of treatment. In particular, we should sharply proceed with mitigating the cases where the best available theories suggest there will be no further warning signs. With existential risks, the boundary between remaining flexible and needing to commit requires sharply different responses, but with unknown tipping points, the location of the boundary is fuzzy. As a lack of knowledge knows no prevention and will always manifest, only treatment is feasible, so acting sharply to build our theories is vital.

We can draw another conclusion by expanding on how the model given at the beginning is unrealistic. There is no such thing as a completely normalized harm, as there are tradeoffs between irreconcilable criteria, the evaluation of which changes with experience across and within individuals. Even temporarily limiting an analysis to standard physical criteria (say lives), rare events pose a problem for actuarial assessment, with few occurrences giving poor bounds on likelihood. Existential risks provide no direct frequencies, nor opportunity for an update in Bayesian belief, so we are left to an inductive assessment of the risk’s potential pathways.
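One standard way to see how little a short, clean record constrains a rare risk is the so-called rule of three: with zero events observed in n independent trials, the one-sided 95% upper confidence bound on the per-trial probability is 1 − 0.05^(1/n), roughly 3/n. A minimal sketch:

```python
# Upper 95% confidence bound on a per-trial probability when zero events have
# been observed in n independent trials: solve (1 - p)**n = 0.05 for p.
for n in (10, 100, 1000):
    upper = 1 - 0.05 ** (1 / n)
    print(f"n = {n:5d} trials, 0 events observed: p could still be as high as ~{upper:.4f}")
```

For existential risks there is not even a clean record of trials to count, which is why the assessment falls back on the inductive analysis of potential pathways described above.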

However, there is also no single pool for mitigation measures. People will form and dissolve different pools of resources for different purposes as they are persuaded and dissuaded. Therefore, those who take it upon themselves to investigate the theory leading to rare and one-pass harms, for whatever reason, provide a mitigation effort we might not rationally take for ourselves. It is my particular bias to think that information systems for aggregating these efforts and interrogating these findings, and methods for asking about further phenomena still, are worth the expenditure, and thus the loss in overall flexibility. This combination of our biases leads to a randomized strategy for investigating unknown risks.

In my view, the Lifeboat Foundation works from a similar strategy as an umbrella organization: one doesn’t have to yet agree that any particular risk, mitigation approach, or desired future is the one right thing to pursue, which of course can’t be known. It is merely the bet that pooling those pursuits will serve us. I have some hope this pooling will lead to efforts inductively combining the assessments of disparate risks and potential mitigation approaches.