
Some people say that a calorie restriction (CR) diet is difficult to follow. It used to be. But things have changed: Thanks to great work by leading scientists, current approaches to calorie restriction are just as much about cell signaling as about limiting calories.

It is known, for example, that serious long-term CR dramatically lowers insulin levels.1 Another hormone with a similar molecular structure, insulin-like growth factor one (IGF-I), shares the same pathway with insulin and is downregulated by CR in animal studies and in calorie-restricted humans who do not follow high-protein diets.2

And there’s the rub. If you hope to benefit from calorie restriction but do not pay attention to the special properties of macronutrient intake, individual foods, and food preparation, you may get an unpleasant surprise: excessive stimulation of the insulin/IGF-I pathway. For example, in a study of healthy volunteers, just 50 grams of carbohydrate from white potato sent glucose and insulin soaring3 to levels associated with increased risk of cancer, heart disease, and diabetes.4

Back in the 1930s, when the term calorie restriction was first applied to Dr. Clive McCay’s rat and mouse experiments,5 the name was entirely appropriate: the focus was on calories, because McCay was studying growth retardation. Little was known then about the signals involved in the life-extending effects of the diet. All that changed as scientists discovered the important cell-signaling patterns that produce its phenomenal life-transforming effects.6

In 2008, The CR Way took the latest CR science and crafted it into a holistic lifestyle that makes following a CR diet easier by transforming it into a happy, positive way of living that focuses on living better now and quite possibly living longer. Recipes, food choices, and lifestyle are deliciously and strategically planned to reduce insulin/IGF-I pathway activity, making disease risk plummet while increasing the probability of a longer life.
# # #
__________
1. Fontana L, Meyer TE, Klein S, Holloszy JO. Long-term calorie restriction is highly effective in reducing the risk for atherosclerosis in humans. Proceedings of the National Academy of Sciences USA. 2004;101(17):6659–6663.
2. Fontana L, Klein S, Holloszy JO. Long-term low-protein, low-calorie diet and endurance exercise modulate metabolic factors associated with cancer risk. American Journal of Clinical Nutrition. 2006;84:1456–1462.
3. Brand-Miller JC, et al. Mean changes in plasma glucose and insulin responses in 10 young adults after consumption of 50 g carbohydrate from potato (high-glycemic index; GI) or barley (low-GI) meals. American Journal of Clinical Nutrition. 2005;82(2):350–354.
4. Guideline for Management of Post-Meal Glucose. International Diabetes Federation; 2007. ISBN 2-930229-48-9.
5. McCay CM, Crowell MF, Maynard LA. Journal of Nutrition. 1935;10:63–79.
6. McGlothin PS, Averill MS. Advances in calorie restriction. Antiaging Medicine. 2009;4(4):440–441.

I got this tweet today as part of a larger conversation about whether technological breakthroughs could help predict disruptive economic times. Over the past 10 days or so, the US and global financial markets have taken a deep plunge as a result of, well, according to the CIOs (chief investment officers) and politicians, we don’t know. The news industry pins the almost unanimous decision to sell, sell, sell on the latest geopolitical interactions and/or finance-specific news.

We see headlines like “Downgrade Ignites a Global Selloff” at the Wall Street Journal, referring to Standard & Poor’s downgrade of the US Treasury credit rating. Treasuries, by the way, soared during the selloff of equities because of their relative strength.

More important than what to buy: none of the headlines, and none of the vague analysis, captures the actual root cause of this regular, or rather irregular, economic downturn recurring over the past decade. The general idea that one should be able to buy low and sell high, which held true in the 20th century, no longer applies. The root of the problem is our use of technologies to error-proof recurring problems in the modern work world. Further, we know that nearly all errors originate at the hands of humans. Thus error-proofing can be synonymous with human-proofing. We usually think of technologies that replace human activity as a device or software, “the robots”, and those do exist, but they are less of a threat than the methodological technologies.

We rarely think of a routine as a technology, but it is one. Benchmarking is a technology. With all of the methodological expertise poured into corporations over the past 30 years, we have finally gotten somewhere: efficient. How many times have you heard that word at the office? Since the late 1960s and the creation of Poka-Yoke by Dr. Shigeo Shingo, on through Lean Manufacturing, Six Sigma, and most recently the third version of the IT Infrastructure Library (ITIL v3), we have been actively depleting the workforce to ensure our qualitative (effectiveness) and quantitative (efficiency) superiority over the competition.

It’s a difficult dialogue to have, because a valid counter-argument is: what’s wrong with business being efficient? My answer would be: nothing at all. The problem comes into play when humankind has rendered its ability to distribute value obsolete. In the past we distributed value through a currency of some sort, and that currency (in primitive times and today) is backed by more than gold or bonds; it is also backed by faith in a philosophical system in which a woman or man gets paid for an “honest day’s work”, quite the primitive slogan. In a knowledge economy where people aren’t performing back-breaking work at the volumes they used to, and where the work of 10 knowledge workers of the 1980s can be performed by one project manager armed with 30 years of benchmarked data and software and hardware help, it is difficult to spread the wealth the way we once did.

When the markets sell off equities into cash, they are saying that the economy is inflated and weak. There are no buyers for the products being produced, because there are no jobs. There are no jobs because of all the error-proofing that preceded them; and finally, it is exceedingly difficult to quantify what people’s knowledge, experience, and existence are worth in the old paradigm. While it feels better to point the finger at the CEOs and politicians of today, and I’d likely get a finger or two pointed back, the problem is that we are trying to distribute the wealth that still exists using an antiquated model.

If one looks at the M1 and M2 numbers at the US Federal Reserve, one will notice that all the money we need to fix or build anything still exists. The same is true across the globe. When the news says that the money supply is lower, what it actually means is that money distribution is lower, because the money supply, as the data show, is rarely diminished. As an economy retracts, funds return to their originators. The wealthiest of our species cannot justify how to spread a trillion dollars around at the moment, because there are fewer and fewer tasks to assign a wage and a human resource to. I’ve got a few solutions to recommend in my next book project, Integrationalism: Essays on Ownership and Distributing Value in the 21st Century.

No other high-ranking personality on the planet is paying attention to the fact that a scientific proof of danger has remained unrefuted for three years: that the planet will be shrunk to 2 cm (a black hole) in about five years’ time, with a probability of 3 percent, if the currently running grandiose LHC experiment is not halted immediately.

Since your mind is a unique bridge between the Eastern and the Western world view, you are the only institution on the planet that can demand the necessary scientific safety conference with authority. Even though this particular now and this particular existence are not everything, the sparing of suffering is a holy vocation.

Allow me to convey to you cordial greetings from your friend John S. Bell.

In deep respect

Sincerely yours,

Otto E. Rossler, chaos researcher

Answer: the media. They have, much as in an authoritarian society, voluntarily decided to keep a lid on it all. That would be fine once nothing could be done about it anymore. But this “hurrying-on-ahead obedience” has the consequence that the experiment is presently running with a vengeance, raising the danger by a factor of three in the coming ten weeks. Imagine: 3 percent Armageddon.

After four years of waiting in vain, I still hope that someone will find a fault in my deductive chain, however unlikely that now seems. Therefore I still request nothing but the “scientific safety conference” asked for by the Cologne Administrative Court on January 27, 2011.

Someone outside the big CERN umbrella who reads this near-inaudible cry for help ought to be able to sneak through the media curtain and publicize the content of this samizdat. Anyone who has children anywhere on the planet. Or do you want to see the terror in their eyes in a few years’ time? We are all the Horn of Africa.

The request to have a second look at the European Nuclear Experiment is a most decent request – absolutely nonviolent. To deny it is a manifest crime. Even a court is on my side. Why not join the new Gandhi movement since this is what Gandhi would say today?

Despite some nominations I am just a stupid scientist who found evidence that the currently running LHC experiment in Geneva jeopardizes the planet with a probability of 3 percent, with the largest part of this number still avoidable if the LHC is stopped immediately.

No one in science or the media believes me; only a court in Cologne did, but they too have since become nonpersons. This appears to be a unique phenomenon in history, since not a single scientist has a counter-proof to offer. All I am and ever was asking for is a double-check: a scientific safety conference. The latter has become the best-heeded taboo in history.

Why is it a sin to see farther? The youngest sailor who can climb the crow’s nest possesses the right and the duty to tell the crew what no one else sees. No one is allowed to shout him down. The same holds true in science: The most reasonable consensus of yesterday is scrap paper in the face of a new finding. My finding bears the name of a young man, Telemach.

The T stands for time, l for length, m for mass, and ch for charge (the vowels are for better pronunciation). T, l, m, and ch all change by the same factor in gravity: the first two go up, the last two down. Einstein, 104 years ago, focused on the T, but the other three letters are implicit in his later equation. Nevertheless the young man was overlooked for nine decades. Now, in the absence of a counter-proof, the specialists are unable to rejoice. Maybe it is because it was not one of them who found the news?

If mass m decreases downstairs, many things are different from what was thought. The changed l also has a surprising effect: the famous speed of light c becomes a global constant again, Einstein’s most famous basic discovery. Why are they not smiling?

It is because at the same time, the distance to and from the surface of a black hole, which is known to take an infinite amount of time to be bridged by light, has now become infinite too. So, unfortunately, Hawking’s famous conjecture (the so-called Hawking radiation) evaporates. Hence the LHC cannot even detect the “mini black holes” it was built to create. And owing to ch, the undetectable minis are in addition virtually frictionless. Only a very slow one will stay inside the earth, to grow there exponentially by virtue of chaos theory. This is the risk the earth is currently taking: that one of the invisible ones takes up residence.

All your ship’s boy is asking for is to find a specialist who can prove Telemach wrong. Then I will retract all my warnings. But please, hurry up. For three years in a row, no colleague was strong enough.

But why do they not support the logically required safety conference? This is the quadrillion dollar question which no one can answer. Every person will have to suffer from this by not knowing whether and when CERN’s slow bomb becomes manifest. Every missed day increases the total risk by 3 percent.

The media are not allowed to report because the agenda of the UN Security Council is a secret. Take care, everyone – as long as saying “take care” is still a permitted phrase outside the context of planetary survival.

The Nature of Identity Part 3
(Drawings not reproduced here — contact the author for copies)
We have seen how the identity is defined by the 0,0 point – the centroid or locus of perception.

The main problem we have is finding out how neural signals translate into sensory signals, how neural information is translated into the language we understand, that of perception. How does one neural pattern become Red and another the Scent of coffee? Neurons do not emit any color or any scent.

As in physics, so in cognitive science, some long cherished theories and explanations are having to change.

Perception, and the concept of an Observer (the 0,0 point), are intimately related to the idea of Identity.

Many years ago I was a member of what was called the Artorga Research Group – a group including some of the early cyberneticists – who were focussed on Artificial Organisms.

One of the main areas of concern was, of course, Memory.

One of our group was a young German engineer who suggested that perhaps memories were in fact re-synthesised in accordance with remembered rules, as opposed to storing huge amounts of data.

Since then similar ideas have arisen in such areas as computer graphics.

Here is an example,

It shows a simple picture on a computer screen. We want to store (memorize) this information.

One way is to store the information about each pixel on the screen: is it white or is it black? With a typical screen resolution, that could mean over 2.5 million bits of information.

But there is another way….

In this process one simply specifies the start point (A) in terms of its co-ordinates (300 Vertically, 100 Horizontally); and its end point (B) (600 Vertically, 800 Horizontally); and simply instructs – “Draw a line of thickness w between them”.

The whole picture is specified in just a few bits.

The first method, specifying the picture bit by bit, is known as a bitmap (the .BMP file format) and uses up lots of memory space.

The other method, based on re-synthesising according to stored instructions, is used in some data reduction formats; and is, essentially, just what that young engineer suggested, many years before.
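The contrast between the two methods can be sketched in a few lines of code. This is a hypothetical illustration: the 1920×1080 resolution and the bit counts for the instruction are assumptions, while the line coordinates follow the example above.

```python
# Contrast the two ways of "memorizing" the line drawing described above.
# The 1920x1080 resolution is an assumption for illustration.

WIDTH, HEIGHT = 1920, 1080

# Method 1: bitmap - one bit per pixel (white or black).
bitmap_bits = WIDTH * HEIGHT

# Method 2: re-synthesis - store an instruction, not pixels.
# "Draw a line of thickness w from A (300, 100) to B (600, 800)."
instruction = ("line", (300, 100), (600, 800), 2)
# Four coordinates at 11 bits each (enough to address 1920 columns),
# plus a few bits assumed for the opcode and the thickness.
instruction_bits = 4 * 11 + 8

print(bitmap_bits)       # 2073600 bits for the bitmap
print(instruction_bits)  # 52 bits for the instruction
```

The exact bit budget is beside the point; what matters is the ratio, on the order of tens of thousands to one in favor of stored instructions.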

On your computer you will have a screen saver, almost certainly a colorful scene, and of course that is stored, so that if you are away from the computer for a time it can automatically come on to replace what was showing and in this way “save” your screen.

So – where are those colors in your screensaver stored, where are the shapes shown in it stored? Is there in the computer a Color Storage Place? Is there a Shape Storage Place?

Of course not.

Yet these are the sort of old, sodden concepts that are sometimes still applied in thinking about the brain and memories.

Patterned streams of binary bits, not unlike neural signals (but about 70 times larger), are fed to a computer screen. The screen then takes these patterns of bits as instructions to re-synthesise glowing colors and shapes.

We cannot actually perceive the binary signals, and so they are translated by the screen into a language that we can understand. The screen is a translator – that is its sole function.

This is exactly analogous to the point made earlier about perception and neural signals.

The main point here, though, is that what is stored in the computer memory are not colors and shapes but instructions.

And inherent in these instructions as a whole, there must exist a “map”.

Each instruction must not only tell its bit of the screen what color to glow – but it must also specify the co-ordinates of that bit. If the picture is the head of a black panther with green eyes, we don’t want to see a green head and black eyes. The map has to be right. It is important.

Looking at it in another way the map can be seen as a connectivity table – specifying what goes where. Just two different ways of describing the same thing.

As well as simple perception there are derivatives of what has been perceived that have to be taken into account, for example, the factor called movement.

Movement is not in itself perceptible (as we shall presently show); it is a computation.

Take for example, the following two pictures shown side-by-side.

I would like to suggest that one of these balls is moving. And to ask — which one is moving?

If movement had a visual attribute then one could see which one it was – but movement has no visual attributes – it is a computation.

To determine the speed of something, one has to observe its current position, compare that with the record (memory) of its previous position; check the clock to determine the interval between the two observations; and then divide the distance between the two positions, s; by the elapsed time, t; to determine the speed, v,

s/t = v.
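The computation described above can be written out directly; this is a minimal sketch, and the observation values are made up.

```python
def speed(prev_pos, curr_pos, prev_t, curr_t):
    """Compute v = s / t from two position observations and the clock."""
    s = curr_pos - prev_pos  # distance between the two observed positions
    t = curr_t - prev_t      # elapsed time between the two observations
    return s / t

# A ball seen at 2.0 m, and half a second later at 5.0 m:
v = speed(2.0, 5.0, 0.0, 0.5)
print(v)  # 6.0 (metres per second)
```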

This process is carried out automatically (subconsciously) in more elaborate organisms by having two eyes spaced apart by a known distance and light receptors, the retinas, each with a fast turn-on and a slow (about 40 ms) turn-off, all followed by a bit of straightforward neural circuitry.

Because of this system, one can look at a TV screen and see someone in a position A, near the left hand edge, and then very rapidly, a series of other still pictures in which the person is seen being closer and closer to B, at the right hand edge.

If the stills are shown fast enough – more than 25 a second — then we will see the person walking across the screen from left to right. What you see is movement – except you don’t actually see anything extra on the screen. Being aware of movement as an aid to survival is very old in evolutionary terms. Even the incredibly old fish, the coelacanth, has two eyes.

The information provided is a derivative of the information provided by the receptors.

And now we ought to look at information in a more mathematical way – as in the concept of Information Space (I-space).

For those who are familiar with the term, it is a Hilbert Space.

Information Space is not “real” space – it is not distance space – it is not measurable in metres and centimetres.

As an example, consider Temperature Space. Take the temperature of the air going in to an air-conditioning (a/c) system; the temperature of the air coming out of the a/c system; and the temperature of the room. These three provide the three dimensions of a Temperature Space. Every point in that space correlates to an outside air temperature, an a/c output temperature and the temperature of the room. No distances are involved – just temperatures.
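A point in this Temperature Space is just a triple of temperatures. The sketch below is hypothetical and the values are invented; it only shows that a system state is a coordinate in T-space, with no lengths involved.

```python
# Each state of the system is one point in 3-dimensional T-space:
# (outside air temp, a/c output temp, room temp), in degrees Celsius.
states = [
    (32.0, 14.0, 24.5),  # hot day, a/c blowing cold
    (18.0, 18.0, 19.0),  # mild day, a/c effectively idle
]

# No metres or centimetres anywhere: every axis is a temperature.
for outside, ac_out, room in states:
    print(outside, ac_out, room)
```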

This is an illustration of what it would look like if we re-mapped it into a drawing.

The drawing shows the concept of a 3-dimensional Temperature Space (T-space). The darkly outlined loop is shown here as a way of indicating the “mapping” of a part of T-space.

But what we are interested in here is I-space. And I-space will have many more dimensions than T-space.

In I-space each location is a different item of information, and the fundamental rule of I-space – indeed of any Hilbert space – is,

Similarity equals Proximity.

This would mean that the region concerned with Taste, for example, would be close to the area concerned with Smell, since the two are closely related.

Pale Red would be closer to Medium Red than to Dark Red.
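The rule can be illustrated numerically. The coordinates below are invented: a single hypothetical “lightness” axis of I-space for shades of red.

```python
# One dimension of I-space for shades of red: lightness, 0 (dark) to 1 (pale).
# The coordinate values are invented for illustration.
pale_red, medium_red, dark_red = 0.8, 0.5, 0.2

# Similarity equals proximity: the smaller the distance, the more similar.
d_pale_to_medium = abs(pale_red - medium_red)
d_pale_to_dark = abs(pale_red - dark_red)

print(d_pale_to_medium < d_pale_to_dark)  # True: pale red sits nearer medium red
```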

Perception then would be a matter of connectivity.

An interconnected group we could refer to as a Composition or Feature.

Connect four legs & fur & tail & bark & the word dog & the sound of the word dog, and we have a familiar feature.
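Such a feature can be sketched as a small, fully connected graph. This is a hypothetical representation; treating the feature as the set of all pairwise links is an assumption made for illustration.

```python
from itertools import combinations

# The items that, interconnected, make up the "dog" feature.
items = [
    "four legs", "fur", "tail", "bark",
    "the word dog", "the sound of the word dog",
]

# Represent the feature as the set of all pairwise connections.
feature = set(combinations(items, 2))

print(len(feature))  # 15 links among 6 items
```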

Features are patterns of interconnections; and it is these features that determine what a thing or person is seen as. What they are seen as is taken as their identity. It is the identity as seen from outside.

To oneself one is here and now, a 0,0 reference point. To someone else one is not the 0,0 point – one is there — not here, and to that person it is they who are the 0,0 point.

This 0,0 or reference point is crucially important. One could upload a huge mass of data, but if there was no 0,0 point that is all it would be – a huge mass of data.

The way forward towards this evolutionary goal is not to concentrate on being able to upload more and more data, faster and faster, but instead to concentrate on being able to identify the 0,0 point, and to be able to translate from neural code to the language of perception.

Germany refused the scientific safety conference asked-for by a Cologne court seven months ago. Now a country with veto power blocks the initiative put before the United Nations Security Council to impose a double-check before it is too late.

Germany also still refuses to take back its declaration, made 15 years ago, that the scientist now responsible for the warning was insane, a declaration issued after he withstood police in his lecture hall for months in a row after revealing a new obedience law without knowing it was a secret. This may contribute to the lacking world response. I therefore repeat my request for an apology on the part of Germany, and for an answer as to why she illogically refused the safety conference.

Please, dear citizens of the planet: do not let traditional European obedience kill you and your families and unborn descendants. Black holes are not a joke but the worst danger in history. German-led CERN continues its attempt at producing them even though it knows that its machines cannot detect them.

Why refuse to have a look at an un-disproved danger? I count on your love, mothers and fathers of all countries: ask the same question.