
Quoted: “If you understand the core innovations around the blockchain idea, you’ll realize that the technology concept behind it is similar to that of a database, except that the way you interact with that database is very different.”

The blockchain concept represents a paradigm shift in how software engineers will write software applications in the future, and it is one of the key concepts behind the Bitcoin revolution that needs to be well understood. In this post, I’d like to explain five of these concepts and how they interrelate in the context of this new computing paradigm that is unfolding in front of us. They are: the blockchain, decentralized consensus, trusted computing, smart contracts, and proof of work / stake. This computing paradigm is important because it is a catalyst for the creation of decentralized applications, a next-step evolution from distributed computing architectural constructs.
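
To make the database analogy concrete, here is a minimal sketch in Python (mine, not the article’s) of the data structure at the heart of the idea: records grouped into blocks, each block cryptographically linked to the one before it, with a toy proof-of-work standing in for decentralized consensus. All names and the difficulty setting are illustrative.

```python
import hashlib
import json
import time

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

class Block:
    def __init__(self, index, transactions, prev_hash):
        self.index = index
        self.timestamp = time.time()
        self.transactions = transactions  # the "rows" of this append-only database
        self.prev_hash = prev_hash        # cryptographic link to the previous block
        self.nonce = 0                    # adjusted until the proof-of-work is met

    def hash(self) -> str:
        payload = json.dumps(
            [self.index, self.timestamp, self.transactions, self.prev_hash, self.nonce]
        )
        return sha256(payload)

def mine(block: Block, difficulty: int = 4) -> Block:
    """Toy proof-of-work: search for a nonce whose hash starts with `difficulty` zeros."""
    while not block.hash().startswith("0" * difficulty):
        block.nonce += 1
    return block

# Two linked blocks: tampering with the first would change its hash and
# break the prev_hash reference stored in the second.
genesis = mine(Block(0, ["genesis"], "0" * 64))
block1 = mine(Block(1, [{"from": "alice", "to": "bob", "amount": 5}], genesis.hash()))
print(block1.prev_hash == genesis.hash())  # True until someone tampers with genesis
```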


Read the article here > http://startupmanagement.org/2014/12/27/the-blockchain-is-th…verything/

Corporate Reconnoitering?

ABSOLUTE END.

Authored By Copyright Mr. Andres Agostini

White Swan Book Author (Source of this Article)

http://www.LINKEDIN.com/in/andresagostini

http://www.AMAZON.com/author/agostini

https://www.FACEBOOK.com/heldenceo (Other Publications)

http://LIFEBOAT.com/ex/bios.andres.agostini

http://ThisSUCCESS.wordpress.com

https://www.FACEBOOK.com/agostiniandres

http://www.appearoo.com/aagostini

http://connect.FORWARDMETRICS.com/profile/1649/Andres-Agostini.html

https://www.FACEBOOK.com/amazonauthor

http://FUTURE-OBSERVATORY.blogspot.com

http://ANDRES-AGOSTINI-on.blogspot.com

http://AGOSTINI-SOLVES.blogspot.com

@AndresAgostini

@ThisSuccess

@SciCzar

Kaizen and Six Sigma Vs. White Swan “…Transformative and Integrative Risk Management …”


ABSOLUTE END.

Authored By Copyright Mr. Andres Agostini

White Swan Book Author (Source of this Article)


Would you have your brain preserved? Do you believe your brain is the essence of you?

To noted American neuroscientist and futurist Ken Hayworth, the answer is an emphatic “Yes.” He is currently developing machines and techniques to map brain tissue at the nanometer scale — the key to encoding our individual identities.

A self-described transhumanist and president of the Brain Preservation Foundation, Hayworth aims to perfect existing preservation techniques, like cryonics, as well as to explore and push evolving opportunities to change the status quo. Currently, no brain preservation option offers systematic, scientific evidence of how much human brain tissue is actually preserved by today’s experimental preservation methods. Such methods include vitrification, the procedure used in cryonics to try to prevent human organs from being destroyed by ice formation when tissue is cooled for cryopreservation.

Hayworth believes we can achieve his vision of preserving an entire human brain at an accepted and proven standard within the next decade. If Hayworth is right, is there a countdown to immortality?

To find out more, please take a look at the Galactic Public Archives’ newest video. We’d love to hear your thoughts.

Cheers!

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
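
Bostrom’s rationale is, at bottom, an expected-value argument. As a rough illustration with invented numbers (they are not Bostrom’s figures), a tiny probability attached to an astronomically large loss can still outweigh a likely but bounded disaster:

```python
# Illustrative arithmetic only; both scenarios and all numbers are made up.
p_existential = 1e-4     # small probability of an existential catastrophe
loss_existential = 1e10  # everyone alive, ignoring forgone future generations

p_ordinary = 0.5         # a likely but bounded disaster
loss_ordinary = 1e6

print(p_existential * loss_existential)  # 1,000,000 expected lives lost
print(p_ordinary * loss_ordinary)        # 500,000 expected lives lost
```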

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn (the model for Stanley Kubrick’s Dr Strangelove), routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: None of the worst case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’, the phrase that the British philosopher Dylan Evans has coined for the demonstrated capacity that people have to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Bostrom’s superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us than because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition, or who acquired their values before it, would draw a similar conclusion.

By Suzanne Jacobs — MIT Technology Review

Last week Google and Novartis announced that they’re teaming up to develop contact lenses that monitor glucose levels and automatically adjust their focus. But these could be just the start of a clever new product category. From cancer detection and drug delivery to reality augmentation and night vision, our eyes offer unique opportunities for both health monitoring and enhancement.

“Now is the time to put a little computer and a lot of miniaturized technologies in the contact lens,” says Franck Leveiller, head of research and development in the Novartis eye care division.

Read more

Computers will soon be able to simulate the functioning of a human brain. In the near future, artificial superintelligence could become vastly more intellectually capable and versatile than humans. But could machines ever truly experience the whole range of human feelings and emotions, or are there technical limitations?

In a few decades, intelligent and sentient humanoid robots will wander the streets alongside humans, work with humans, socialize with humans, and perhaps one day will be considered individuals in their own right. Research in artificial intelligence (AI) suggests that intelligent machines will eventually be able to see, hear, smell, sense, move, think, create and speak at least as well as humans. They will feel emotions of their own and probably one day also become self-aware.

There may not be any reason per se to want sentient robots to experience exactly all the emotions and feelings of a human being, but it may be interesting to explore the fundamental differences in the way humans and robots can sense, perceive and behave. Tiny genetic variations between people can result in major discrepancies in the way each of us thinks, feels and experiences the world. If we appear so diverse despite the fact that all humans are on average 99.5% identical genetically, even across racial groups, how could we possibly expect sentient robots to feel the exact same way as biological humans? There could be striking similarities between us and robots, but also drastic divergences on some levels. This is what we will investigate below.

MERE COMPUTER OR MULTI-SENSORY ROBOT?

Computers are undergoing a profound mutation at the moment. Neuromorphic chips have been designed around the way the human brain works, modelling its massively parallel neurological processes using artificial neural networks. This will enable computers to process sensory information like vision and audition much more like animals do. Considerable research is currently devoted to creating a functional computer simulation of the whole human brain. The Human Brain Project is aiming to achieve this by 2016. Does that mean that computers will finally experience feelings and emotions like us? Surely if an AI can simulate a whole human brain, then it becomes a sort of virtual human, doesn’t it? Not quite. Here is why.
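
As a very rough sketch of what an artificial neural network amounts to here (my illustration, not Human Brain Project code), sensory processing reduces to layers of weighted sums passed through nonlinearities, the kind of massively parallel arithmetic that neuromorphic hardware implements directly in silicon:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer: a weighted sum plus a nonlinearity,
    a crude stand-in for a population of neurons responding in parallel."""
    w = rng.normal(size=(x.size, n_out))  # untrained, random "synapses"
    return np.tanh(x @ w)

pixels = rng.random(64)     # a toy 8x8 "retina"
hidden = layer(pixels, 16)  # intermediate feature detectors
output = layer(hidden, 2)   # e.g. scores for "threat" vs "no threat"
print(output)
```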

There is an important distinction to be made from the outset between an AI residing solely inside a computer with no sensors at all, and an AI that is equipped with a robotic body and sensors. A computer alone would have a far more limited range of emotions, as it wouldn’t be able to physically interact with its environment. The more sensory feedback a machine can receive, the wider the range of feelings and emotions it will be able to experience. But, as we will see, there will always be fundamental differences between the type of sensory feedback that a biological body and a machine can receive.

Here is an illustration of how limited an AI is emotionally without a sensory body of its own. In animals, fear, anxiety and phobias are evolutionary defense mechanisms aimed at raising our vigilance in the face of danger. That is because our bodies work with biochemical signals involving hormones and neurotransmitters sent by the brain to prompt a physical action when our senses perceive danger. Computers don’t work that way. Without sensors feeding them information about their environment, computers wouldn’t be able to react emotionally.

Even if a computer could remotely control machines like robots (e.g. through the Internet) that are endowed with sensory perception, the computer itself wouldn’t necessarily care if the robot (a discrete entity) were harmed or destroyed, since that would have no physical consequence for the AI itself. An AI could fear for its own well-being and existence, but how is it supposed to know that it is in danger of being damaged or destroyed? It would be like a person who is blind, deaf and whose somatosensory cortex has been destroyed. Without feeling anything about the outside world, how could it perceive danger? That problem disappears once the AI is given at least one sense, like a camera to see what is happening around it. Now if someone comes toward the computer with a big hammer, it will be able to fear for its existence!
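
A minimal sketch of that appraisal loop, assuming a hypothetical detect_threat classifier in place of a real vision system (the original specifies none):

```python
from dataclasses import dataclass

@dataclass
class Percept:
    label: str        # what the camera's classifier reports
    distance_m: float

def detect_threat(percept: Percept) -> bool:
    # Placeholder for a learned classifier over camera frames (assumed, not real).
    return percept.label == "person_with_hammer" and percept.distance_m < 2.0

def appraise(percept: Percept) -> str:
    # Emotion as appraisal: fear only arises once a sense reports danger.
    return "fear" if detect_threat(percept) else "neutral"

print(appraise(Percept("person_with_hammer", 1.5)))   # fear
print(appraise(Percept("person_with_hammer", 10.0)))  # neutral
```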

WHAT CAN MACHINES FEEL?

In theory, any neural process can be reproduced digitally in a computer, even though the brain is mostly analog. This is hardly a concern, as Ray Kurzweil explained in his book How to Create a Mind. However, it does not always make sense to try to replicate everything a human being feels in a machine.

While sensory feelings like heat, cold or pain could easily be felt from the environment if the machine is equipped with the appropriate sensors, this is not the case for other physiological feelings like thirst, hunger, and sleepiness. These feelings alert us to the state of our body and are normally triggered by hormones such as vasopressin, ghrelin, or melatonin. Since machines have neither a digestive system nor hormones, it would be downright nonsensical to try to emulate such feelings.

Emotions do not arise for no reason. They are either a reaction to an external stimulus, or a spontaneous expression of an internal thought process. For example, we can be happy or joyful because we received a present, got a promotion or won the lottery. These are external causes that trigger the emotions inside our brain. The same emotion can be achieved as the result of an internal thought process. If I manage to find a solution to a complicated mathematical problem, that could make me happy too, even if nobody asked me to solve it and it does not have any concrete application in my life. It is a purely intellectual problem with no external cause, but solving it confers satisfaction. The emotion could be said to have arisen spontaneously from an internalized thought process in the neocortex. In other words, solving the problem in the neocortex causes the emotion in another part of the brain.

An intelligent computer could also produce some emotions based on its own thought processes, just like the joy or satisfaction experienced by solving a mathematical problem. In fact, as long as it is allowed to communicate with the outside world, there is no major obstacle to a computer feeling true emotions of its own, like joy, sadness, surprise, disappointment, fear, anger, or resentment, among others. These are all emotions that can be produced by interactions through language (e.g. reading, online chatting) with no need for physiological feedback.
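
The two routes to an emotion described above can be sketched in code (an illustration of the distinction, not a claim about how real affective architectures are built): the same mechanism fires whether the trigger arrives from outside through language or from the system’s own reasoning:

```python
emotions = []

def feel(emotion: str, cause: str):
    emotions.append((emotion, cause))

# Externally triggered: good news arrives over a chat channel.
incoming_message = "you got the promotion"
if "promotion" in incoming_message:
    feel("joy", "external: message received")

# Internally triggered: the system solves a problem it posed to itself.
solution = sum(range(1, 101))  # a self-posed puzzle, no external cause
if solution == 5050:
    feel("satisfaction", "internal: self-posed problem solved")

print(emotions)
```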

Now let’s think about how and why humans experience a sense of well-being and peace of mind, two emotions far more complex than joy or anger. Both occur when our physiological needs are met: when we are well fed, rested, feel safe, don’t feel sick, and are on the right track to pass on our genes and keep our offspring secure. These are compound emotions that require other basic emotions as well as physiological factors. A machine that has no physiological needs, cannot get sick, and does not need to worry about passing on its genes to posterity will have no reason to feel the complex emotion of ‘well-being’ the way humans do. For a machine, well-being may exist, but in a much more simplified form.

Just as machines cannot reasonably feel hunger because they do not eat, other emotions are tricky to replicate in machines with no biological body, no hormones, and no physiological needs. This is the case with social emotions like attachment, sexual emotions like love, and emotions originating from evolutionary mechanisms set in the (epi)genome. This is what we will explore in more detail below.

FEELINGS ROOTED IN THE SENSES AND THE VAGUS NERVE

What really distinguishes intelligent machines from humans and animals is that the former do not have a biological body. This is essentially why they could not experience the same range of feelings and emotions as we do, since many of them inform us about the state of our biological body.

An intelligent robot with sensors could easily see, hear, detect smells, feel an object’s texture, shape and consistency, feel pleasure and pain, heat and cold, and the like. But what about the sense of taste? Or the effects of alcohol on the mind? Since machines do not eat, drink, or digest, they wouldn’t be able to experience these things. A robot designed to socialize with humans would be unable to understand and share the feelings of gastronomical pleasure or inebriation with humans. It could have a theoretical knowledge of them, but not first-hand knowledge from an actually felt experience.

But the biggest obstacle to simulating physical feelings in a machine comes from the vagus nerve, which controls such varied things as digestion, ‘gut feelings’, heart rate and sweating. When we are scared or disgusted, we feel it in our guts. When we are in love we feel butterflies in our stomach. That’s because of the way our nervous system is designed. Quite a few emotions are felt through the vagus nerve connecting the brain to the heart and digestive system, so that our body can prepare to court a mate, fight an enemy or escape in the face of danger, by shutting down digestion, raising adrenaline and increasing heart rate. Feeling disgusted can help us vomit something that we have swallowed and shouldn’t have.

Strong emotions can affect our microbiome, the trillions of gut bacteria that help us digest food and that secrete 90% of the serotonin and 50% of the dopamine used by our brain. The thousands of species of bacteria living in our intestines can vary quickly based on our diet, but it has been demonstrated that even emotions like stress, anxiety, depression and love can strongly affect the composition of our microbiome. This is very important because of the essential role that gut bacteria play in maintaining our brain functions. The relationship between gut and brain works both ways. The presence or absence of some gut bacteria has been linked to autism, obsessive-compulsive disorder and several other psychological conditions. What we eat actually influences the way we think too, by changing our gut flora, and therefore also the production of neurotransmitters. Even our intuition is linked to the vagus nerve, hence the expression ‘gut feeling’.

Without a digestive system, a vagus nerve and a microbiome, robots would miss a big part of our emotional and psychological experience. Our nutrition and microbiome influence our brain far more than most people suspect. They are one of the reasons why our emotions and behaviour are so variable over time (in addition to maturity; see below).

SICKNESS, FATIGUE, SLEEP AND DREAMS

Another key difference between machines and humans (or animals) is that our emotions and thoughts can be severely affected by our health, physical condition and fatigue. Irritability is often an expression of mental or physical exhaustion caused by a lack of sleep or nutrients, or by a situation that puts excessive stress on mental faculties and increases our need for sleep and nutrients. We could argue that computers may overheat if used too intensively, and may also need to rest. That is not entirely true if the hardware is properly designed with a super-efficient cooling system and a steady power supply. New types of nanochips may not produce enough heat to have any overheating problem at all.

Most importantly, machines don’t feel sick. I don’t mean just being weakened by a disease or feeling pain, but actually feeling sick, such as indigestion, nausea (motion sickness, sea sickness), or feeling under the weather before tangible symptoms appear. These aren’t enviable feelings of course, but the point is that machines cannot experience them without a biological body and an immune system.

When tired or sick, not only do we need to rest to recover our mental faculties and stabilize our emotions, we also need to dream. Dreams are used to clear our short-term memory cache (in the hippocampus), to replenish neurotransmitters, to consolidate memories (by myelinating synapses during REM sleep), and to let go of the day’s emotions by letting our neurons fire freely. Dreams also allow a different kind of thinking, free of cultural or professional taboos, that increases our creativity. This is why we often come up with great ideas or solutions to our problems during our sleep, and notably during the lucid dreaming phase.

Computers cannot dream and wouldn’t need to, because they aren’t biological brains with neurotransmitters, stressed-out neurons and synapses that need to get myelinated. Without dreams, however, an AI would lose an essential component of feeling like a biological human.

EMOTIONS ROOTED IN SEXUALITY

Being in love is an emotion that brings a male and a female individual (save for some exceptions) of the same species together in order to reproduce and raise one’s offspring until they grow up. Sexual love is caused by hormones, but is not merely the product of hormonal changes in our brain. It involves changes in the biochemistry of our whole body and can even lead to important physiological effects (e.g. on morphology) and long-term behavioural changes. Clearly sexual love is not ‘just an emotion’, and it is not purely a neurological process either. Replicating the neurological expression of love in an AI would not simulate the whole emotion of love, but only one of its facets.

Apart from the issue of reproducing the physiological expression of love in a machine, there is also the question of causation. There is a huge difference between an artificially implanted or simulated emotion and one that is capable of arising by itself from environmental causes. People can fall in love for a number of reasons, such as physical attraction and mental attraction (shared interests, values, tastes, etc.), but one of the most important in the animal world is genetic compatibility with the prospective mate. Individuals who possess very different immune systems (HLA genes), for instance, tend to be more strongly attracted to each other and feel more ‘chemistry’. We could imagine that a robot with a sense of beauty and values could appreciate the looks and morals of another robot or a human being and even feel attracted (platonically). Yet a machine couldn’t experience the ‘chemistry’ of sexual love because it lacks the hormones, genes and other biochemical markers required for sexual reproduction. In other words, robots could have friends but not lovers, and that makes sense.
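
To illustrate why that ‘chemistry’ is out of reach, here is a toy model of my own (not an actual biological formula) in which attraction tracks immune-gene dissimilarity; a machine simply has no alleles to feed into the comparison:

```python
def hla_dissimilarity(genes_a, genes_b):
    """Toy proxy for 'chemistry': the fraction of immune-gene variants
    (stand-ins for HLA alleles) that differ between two individuals."""
    assert len(genes_a) == len(genes_b)
    return sum(a != b for a, b in zip(genes_a, genes_b)) / len(genes_a)

alice = ["A*01", "B*07", "C*04", "DR*15"]  # invented allele labels
bob = ["A*02", "B*08", "C*04", "DR*04"]

print(hla_dissimilarity(alice, bob))  # 0.75 -> relatively strong 'chemistry'
# A robot has no genome at all, so this model has no input to evaluate for it.
```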

A substantial part of the range of human emotions and behaviours is anchored in sexuality. Jealousy is another good example. Jealousy is intricately linked to love. It is the fear of losing one’s loved one to a sexual rival. It is an innate emotion whose only purpose is to maximize our chances of passing on our genes through sexual reproduction by warding off competitors. Why would a machine, which does not need to reproduce sexually, need to feel that?

One could wonder what difference it makes whether a robot can feel love or not. They don’t need to reproduce sexually, so who cares? If we need intelligent robots to work with humans in society, for example by helping to take care of the young, the sick and the elderly, they could still function as social individuals without feeling sexual love, couldn’t they? In fact, you may not want a humanoid robot to become a sexual predator, especially if it works with kids! Not so fast. Without a basic human emotion like love, an AI simply cannot think, plan, prioritize and behave the same way humans do. Its way of thinking, planning and prioritizing would rely on completely different motivations. For example, young human adults spend considerable time and energy searching for a suitable mate in order to reproduce.

A robot endowed with an AI of human-equivalent or greater intelligence, but lacking the need for sexual reproduction, would behave, plan and prioritize its existence very differently from humans. That is not necessarily a bad thing, for a lot of conflicts in human society are caused by sex. But it also means that it could become harder for humans to predict the behaviour and motivations of autonomous robots, which could be a problem once they become more intelligent than us in a few decades. The bottom line is that by lacking just one essential human emotion (let alone many), intelligent robots could have very divergent behaviours, priorities and morals from humans. The difference could be for the better, but we can’t know that for sure at present since such robots haven’t been built yet.

TEMPERAMENT AND SOCIABILITY

Humans are social animals. They typically, though not always (e.g. some types of autism), seek to belong to a group, make friends, share feelings and experiences with others, gossip, seek approval or respect from others, and so on. Interestingly, a person’s sociability depends on a variety of factors not found in machines, including gender, age, level of confidence, health, well being, genetic predispositions, and hormonal variations.

We could program an AI to mimic a certain type of human sociability, but it wouldn’t naturally evolve over time with experience and environmental factors (food, heat, diseases, endocrine disruptors, microbiome). Knowledge can be learned, but spontaneous reactions to environmental factors cannot.

Humans tend to be more sociable when the weather is hot and sunny, when they drink alcohol and when they are in good health. A machine has no need to react like that, unless once again we intentionally program it to resemble humans. But even then it couldn’t feel everything we feel as it doesn’t eat, doesn’t have gut bacteria, doesn’t get sick, and doesn’t have sex.

MATERNAL WARMTH AND FEELING OF SAFETY IN MAMMALS

Humans, like all mammals, have an innate need for maternal warmth in childhood. In a classic experiment, newborn rhesus monkeys were taken away from their biological mothers and placed in a cage with two dummy mothers. One of them was warm, fluffy and cosy, but did not provide milk. The other was hard, cold and uncosy, but provided milk. The infant monkeys consistently chose the cosy one, demonstrating that the need for comfort and safety trumps nutrition in infant mammals. Likewise, humans deprived of maternal (or paternal) warmth and care as babies almost always experience psychological problems growing up.

In addition to childhood care, humans also need the feeling of safety and cosiness provided by the shelter of one’s home throughout life. Not all animals are like that. Even as hunter-gatherers or pastoralist nomads, all Homo sapiens need a shelter, be it a tent, a hut or a cave.

How could we expect that kind of reaction and behaviour in a machine that does not need to grow from babyhood to adulthood, cannot know what it is to have parents or siblings, does not need to feel reassured by maternal warmth, and has no biological compulsion to seek shelter? Without those feelings, it is extremely doubtful that a machine could ever truly understand and empathize completely with humans.

These limitations mean that it may be useless to try to create intelligent, sentient and self-aware robots that truly think, feel and behave like humans. Reproducing our intellect, language, and senses (except taste) is the easy part. Then comes consciousness, which is harder but still feasible. But since our emotions and feelings are so deeply rooted in our biological body and its interaction with its environment, the only way to reproduce them would be to reproduce a biological body for the AI. In other words, we are not talking about creating a machine anymore, but about genetically engineering a new living being, or using neural implants in existing humans.

MACHINES DON’T MATURE

The way humans experience emotions evolves dramatically from birth to adulthood. Children are typically hyperactive and excitable and are prone to making rash decisions on impulse. They cry easily and have difficulty containing and controlling their emotions and feelings. As we mature, we learn, more or less successfully, to master our emotions. Controlling one’s emotions actually gets easier over time because with age the number of neurons in the brain decreases, emotions get blunter, and vital impulses grow weaker.

The expression of one’s emotions is heavily regulated by culture and taboos. That’s why speakers of Romance languages will generally express their feelings and affection more freely than, say, Japanese or Finnish people. Would intelligent robots also follow one specific human culture, or create a culture on their own ?

Sex hormones also influence the way we feel and express emotions. Male testosterone makes people less prone to emotional display, more rational and cold, but also more aggressive. Female estrogens increase empathy, affection and maternal instincts of protection and care. A good example of the role of biology in emotions is the way women’s hormonal cycles (and the resulting menstruations) affect their emotions. One of the reasons that children process emotions differently than adults is that they have lower levels of sex hormones. As people age, hormonal levels decrease (not just sex hormones), making us more mellow.

Machines don’t mature emotionally: they do not go through puberty, do not have hormonal cycles, and do not undergo hormonal changes based on age, diet and environment. Artificial intelligence could learn from experience and mature intellectually, but not mature emotionally like a child becoming an adult. This is a vital difference that shouldn’t be underestimated. Program an AI to have the emotional maturity of a 5-year-old and it will never grow up. Children (especially boys) cannot really understand the reason for their parents’ anxiety toward them until they grow up and have children of their own, because they lack the maturity and sex hormones associated with parenthood.

We could always run software emulating changes in AI maturity over time, but those changes would not be the result of experiences and interactions with the environment. It may not be useful to create robots that mature like us, but the question debated here is whether machines could ever feel exactly like us or not. This question is not purely rhetorical. Some transhumanists wish to be able one day to upload their minds onto a computer and transfer their consciousness (which may not be possible for a number of reasons). Assuming that it becomes possible, what if a child or teenager decides to upload his or her mind and lead a new robotic existence? One obvious problem is that this person would never fulfill his or her potential for emotional maturity.

The loss of our biological body would also deprive us of our capacity to experience feelings and emotions bound to our physiology. We may be able to keep those already stored in our memory, but we may never dream, enjoy food, or fall in love again.

SUMMARY & CONCLUSION

What emotions could machines experience?

Even though many human emotions are beyond the range of machines due to their non-biological nature, some emotions could very well be felt by an artificial intelligence. These include, among others:

  • Joy, satisfaction, contentment
  • Disappointment, sadness
  • Surprise
  • Fear, anger, resentment
  • Friendship
  • Appreciation for beauty, art, values, morals, etc.

What emotions and feelings would machines not be able to experience?

The following emotions and feelings could not be wholly or faithfully experienced by an AI, even with a sensing robotic body, beyond mere implanted simulation.

  • Hunger, thirst, drunkenness, gastronomical enjoyment
  • Various feelings of sickness, such as nausea, indigestion, motion sickness, sea sickness, etc.
  • Sexual love, attachment, jealousy
  • Maternal/paternal instincts towards one’s own offspring
  • Fatigue, sleepiness, irritability
  • Dreams and associated creativity

In addition, machine emotions would run up against the following issues that would prevent them from feeling and experiencing the world truly like humans.

  • Machines wouldn’t mature emotionally with age.
  • Machines don’t grow up and don’t go through puberty to pass from a relatively asexual childhood stage to a sexual adult stage.
  • Machines cannot fall in love (+ associated emotions, behaviours and motivations) as they aren’t sexual beings.
  • Being asexual, machines are genderless and therefore lack associated behaviour and emotions caused by male and female hormones.
  • Machines wouldn’t experience gut feelings (fear, love, intuition).
  • Machine emotions, intellect, psychology and sociability couldn’t vary with nutrition and microbiome, hormonal changes, or environmental factors like the weather.

It is not completely impossible to bypass these obstacles, but doing so would require creating a humanoid machine that not only possesses human-like intellectual faculties, but also has an artificial body that can eat and digest, with a digestive system connected to the central microprocessor in the same way our vagus nerve is connected to our brain. That robot would also need a gender and a capacity to have sex and to feel attracted to other humanoid robots or humans, based on predefined programming that serves as an alternative to a biological genome, creating a sense of ‘sexual chemistry’ when it is matched with an individual with a compatible “genome”. It would necessitate artificial hormones to regulate its hunger, thirst, sexual appetite, homeostasis, and so on.
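
A highly simplified sketch of such an artificial-hormone loop (names, set points and decay rates are all invented for illustration): internal state variables decay over time and raise drive signals to the central processor, playing the role the paragraph assigns to a synthetic vagus nerve:

```python
class ArtificialEndocrine:
    """Toy homeostasis: each 'hormone' level decays per tick and raises
    a drive signal once it falls below its set point."""

    def __init__(self):
        self.levels = {"energy": 1.0, "hydration": 1.0}
        self.set_points = {"energy": 0.3, "hydration": 0.4}
        self.decay = {"energy": 0.05, "hydration": 0.08}

    def tick(self):
        drives = []
        for name in self.levels:
            self.levels[name] = max(0.0, self.levels[name] - self.decay[name])
            if self.levels[name] < self.set_points[name]:
                drives.append(name + "_drive")  # analogue of hunger or thirst
        return drives

body = ArtificialEndocrine()
for t in range(16):
    active = body.tick()
    if active:
        print(t, active)  # hydration runs low first, then energy
```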

Although we lack the technology and the in-depth knowledge of the human body to consider such an ambitious project any time soon, it could eventually become possible one day. One could wonder whether such a magnificent machine could still be called a machine, or rather an artificially made living being. I personally don’t think it should be called a machine at that point.

———

This article was originally published on Life 2.0.

TransEvolution: The Coming Age of Human Deconstruction (2014) is an alarmist book by Daniel Estulin, a commentator on the secretive Bilderberg Group who is well-liked by many – in particular on conspiracy theorist forums. Essentially, this should be regarded as conspiracy theory material. My refutations of it are too many to cram into this review, so I will mainly focus on what the book itself says.

Daniel Estulin connects disparate events and sources to depict an elaborate conspiracy. The main starting claim of the book is a link between the 2005 Bilderberg Conference and the 2006 document Strategic Trends 2007–2036 prepared by the British government (p. 1–12). Estulin claims that the latter report’s predictions betray “Promethean” plans that represent “designs by the Bilderberg Group”.
The book makes the allegation that the economic pressure on the world today “is being done on purpose, absolutely on purpose. The reason is because our current corporate empire knows that “progress of humanity” means their imminent demise”. The “powers-that-be” destroy nation-states to maintain power, and “this is by design” (p. 13). Estulin decries international money flows and globalization, and promotes “physical economy” instead. To make a long story short, he describes the apparatus of globalization, integration, etc. as a clash between the nation-state and global oligarchy and frames this as a classic battle between good and evil respectively (p. 13–35). “The ideas of a nation-state republic and progress” are intrinsically connected (p. 34), Estulin argues, putting forward his preference for the old Jacobin ideological script of the Nineteenth Century rather than modern discourses on integration and communication.
In his preference for the nation-state, Estulin attacks the WTO’s record on free trade, and makes criticisms that are provisionally valid. However, he confuses the tendency for weaker nations to be exploited through free trade with a conspiracy against the nation-state. The WTO’s commitment to what it calls free trade, a commitment to “One World, One Market”, reflects “anti-nation-state intent”, Estulin argues (p. 37–38).
Although it attaches too much agency to global “elites”, Estulin’s description of the way international trade in agriculture has been manipulated to disadvantage poor nations and advantage rich ones (p. 38–49) agrees with already influential sociological theories of “free trade imperialism” and with the larger humanitarian message of the alter-globalization movement. Estulin quotes William Engdahl’s The Seeds of Destruction at length to document the destructive local impacts of global agribusiness (p. 47–53).
Estulin interprets the spread of the pharmaceuticals industry as evidence of the elite seeking a docile and controlled population, “massive drugging of the population”, “controlled chaos”, and even goes as far as to say that GMOs will poison everyone on the planet and finally kill 3 billion people indiscriminately (p. 63–68). More puzzling still, Estulin blames the Club of Rome thesis itself (which predicted the depletion of resources leading to economic collapse) for making an enemy of humanity and submitting a plan for no less than the deliberate depopulation of the Earth (p. 17–20).
Synthetic biology is not spared from criticism by Estulin. He immediately labels it as “founded on the ambition that one day it will be possible to design and manufacture a human being” (p. 69). For the record, nowhere in the field of synthetic biology has anyone actually advocated manufacturing human beings, and nor does such an ambition coincide with the conspiracy theory about depopulating the Earth. Estulin further confuses science with pseudoscience, stating “genetics, as defined by the Rockefeller Foundation, would constitute the new face of eugenics” (p. 71). “Ultimately,” Estulin writes, “this is about taking control of nature, redesigning it and rebuilding it to serve the whims of the controlling elite” (p. 72).
In further arguments against the perceived “elite”, Estulin demonizes space exploration, saying “the elite are planning, at least, a limited exodus from the Planet Earth. Why? What do they know that we don’t? Nuclear wars? Nanowars? Bacteriological wars?” (p. 123) Chapter 4, although titled “space exploration”, is dedicated to explaining the deadly potential of future security and defense technologies when used by regimes against their own people (p. 115–156).
Then, we get to transhumanism (only in the last chapter). The chapter alleges that the US government thought up a transhumanist agenda in 2001 as a strategic military contingency – in particular the Russian 2045 Movement. According to Estulin, the transhumanist conspiracy in its present form comes from a conference, “The Age of Transitions” (p. 159–161). Using little more than the few links between political or business figures and transhumanism as evidence, he alleges that transhumanism is “steered by the elite” and that “we, the people, have not been invited” (p. 161–162).
The movie Avatar (2009) by James Cameron (mistakenly named as David Cameron in Estulin’s book) is connected by Estulin with the 2045 movement’s enthusiasm for humans becoming “avatars” by means of being uploaded as digital beings (p. 162–164). Further, the movie Prometheus (2012) by Ridley Scott reflects the “future plans of the elite”, according to Estulin (p. 165–170). However, he does not analyze either movie, and fails to note that Peter Weyland (the “elite”) in Prometheus is actually a vile character whose search for life extension is a product of his greed and vanity (hardly a glamorization of the search for life extension). If anything, Prometheus joins a long tradition of literature and film that encourages people not to trust transhumanism and life extension and to fear where such movements could lead.
Exaggerated connections and resemblances between disparate conferences and groups, such as the US government and Russian longevity enthusiasts, are put forward as evidence of a conspiracy (p. 170). Then we get to Estulin’s real complaint against transhumanism:

“Many people have trouble understanding what the true transhumanism movement is about, and why it’s so evil. After all, it’s just about improving our quality of life, right? Or is transhumanism about social control on a gigantic scale?” (p. 172–173)

Estulin also asserts:

“Transhumanism fills people’s hopes and minds with dreams of becoming superhuman, but the fact of the matter is that the true goal is the removal of that pesky, human free will itself.” (p. 186)

Estulin’s (and Engdahl’s) belief in a eugenic “depopulation” agenda (p. 57) behind Monsanto’s work, as hideous as the crimes of Nazism, is an example of a conspiracy theory appealing to irrational fears. Both of these writers confuse corporate greed and monopolistic priorities with actual wicked and genocidal intent, assigning motives that do not exist. They confuse structural evils in the world system with actions by evil men gathered in dark rooms. Estulin also conveniently misses the fact that the indiscriminate poisoning of all life by changing the DNA of every living thing would also threaten the conspirators and their own families. I guess we must assume that the conspirators are also a suicide cult, of the same breed as Jim Jones’ “People’s Temple”.
At the end of the book’s tirade about synthetic biology being a ticket for the elite to control all life, Estulin reverts to a question very prominent in mainstream fora: “can we trust the major corporations to do the right thing?” (p. 74). The answer from almost everyone would be No, but not for any of the reasons Estulin has put forward. We can’t trust the major corporations, because their only interest is endless profit in the near term, and such profit is maximized by their ability to monopolize and hold back real progress. Monsanto and other agri-giants are only vainly forestalling and trying to contain the real technium for their own greed – there is nothing radical about them.
One thing I find endlessly entertaining about conspiracy theories is their tendency to take their ideas from Hollywood movies, while at the same time denouncing those movies as examples of brainwashing and propaganda. Apparently, despite all their warnings to people not to be influenced by the media, conspiracy theorists are incapable of noticing how impressionable and easily pressured they themselves are.
The book even attacks Darwinian evolution and natural selection, seeing a sinister agenda in them (p. 179–180), which adds to the book’s already deep anti-science message. He connects the theory of evolution with the destructive idea of social Darwinism, and with transhumanism in turn (p. 190–191). The elite plan to “bring society down to the level of beast” by encouraging such social Darwinism, Estulin alleges (p. 211–219).
Bizarre speculated connections between Malthusian theories, Darwin, the British Empire, eugenics and ultimately transhumanism (p. 174–178) do not take note of the fact that transhumanists and technoprogressives are the one camp in the world most opposed to Malthusianism. Technoprogressives are the camp with the most faith in the idea that the entire world can be fed and sustained. No-one has more faith in the infinite resources of humanity and the ability to meet everyone’s needs than the technoprogressives.
Perhaps reflecting the book’s confusion, Chapter 1 is dedicated to asserting that the “elite” will reduce everyone to a primitive and chaotic setting, whereas Chapter 2 onwards alleges that the plan is a high-tech dystopia. These two polar-opposite conspiracies do not coincide in any way, and neither do the paradoxical claims that transhuman technologies will never be seen by the world’s poor, yet will also be forced on the whole of humanity.
The coverage of transhumanism and understanding of it in this book is not positive (to put it politely). It fails to take account of transhumanism’s real basis as a movement exploring emerging trends to change humanity for the better. Instead, it simply exaggerates marginal influences by futurism, popular science and technology enthusiasm on governments and business elites as representing a global conspiracy.
A more informative theory about the relationship of the “elite” towards transhumanism would instead explore the habit of ignorant opposition by Neoconservatives, warmongers, and the mainstream media towards international peace, development, science, education, web freedom, and ultimately transhumanism.

By Harry J. Bentham

Originally published on 20 May 2014 at h+ Magazine

We have nothing to fear from exponential technological change, which will deliver humanity from statism and oligarchy.

I recently saw the film Transcendence with a close friend. If you can get beyond Johnny Depp’s siliconised mugging of Marlon Brando and Rebecca Hall’s waddling through corridors of quantum computers, Transcendence provides much to think about. Even though Christopher Nolan of Inception fame was involved in the film’s production, the pyrotechnics are relatively subdued – at least by today’s standards. While this fact alone seems to have disappointed some viewers, it nevertheless enables you to focus on the dialogue and plot. The film is never boring, even though nothing about it is particularly brilliant. However, the film stays with you, and that’s a good sign. Mark Kermode at the Guardian was one of the few reviewers who did the film justice.

The main character, played by Depp, is ‘Will Caster’ (aka Ray Kurzweil, but perhaps also an allusion to Hans Castorp in Thomas Mann’s The Magic Mountain). Caster is an artificial intelligence researcher based at Berkeley who, with his wife Evelyn Caster (played by Hall), is trying to devise an algorithm capable of integrating all of earth’s knowledge to solve all of its problems. (Caster calls this ‘transcendence’ but admits in the film that he means ‘singularity’.) They are part of a network of researchers doing similar things. Although British actors like Hall and the key colleague Paul Bettany (sporting a strange Euro-English accent) are main players in this film, the film itself appears to transpire entirely within the borders of the United States. This is a bit curious, since a running assumption of the film is that if you suspect a malevolent consciousness uploaded to the internet, then you should shut the whole thing down. But in this film at least, ‘the whole thing’ is limited to American cyberspace.

Before turning to two more general issues concerning the film, which I believe may have led both critics and viewers to leave unsatisfied, let me draw attention to a couple of nice touches. First, the leader of ‘Revolutionary Independence from Technology’ (RIFT), whose actions propel the film’s plot, explains that she used to be an advanced AI researcher who defected upon witnessing the endless screams of a Rhesus monkey while its entire brain was being digitally uploaded. Once I suspended my disbelief in the occurrence of such an event, I appreciated it as a clever plot device for showing how one might quickly convert from being radically pro- to anti-AI, perhaps presaging future real-world targets for animal rights activists. Second, I liked the way in which quantum computing was highlighted and represented in the film. Again, what we see is entirely speculative, yet it highlights the promise that one day it may be possible to read nature as pure information that can be assembled according to need to produce what one wants, thereby rendering our nanotechnology capacities virtually limitless. 3D printing may be seen as a toy version of this dream.

Now on to the two more general issues, which viewers might regard as faults, but which I think are better treated as what the Greeks called aporias (i.e. open questions):

(1) I think this film is best understood as taking place in an alternative future projected from when, say, Ray Kurzweil first proposed ‘the age of spiritual machines’ (i.e. 1999). This is not the future as projected in, say, Spielberg’s Minority Report, in which the world has become so ‘Jobs-ified’ that everything is touch screen-based. In fact, the one moment where a screen is very openly touched proves inconclusive (i.e. when, just after the upload, Evelyn impulsively responds to Will being on the other side of the interface). This is still a world very much governed by keyboards (hence the symbolic opening shot where a keyboard is used as a doorstop in the cyber-meltdown world). Even the World Wide Web doesn’t seem to have the prominence one might expect in a film where computer screens are featured so heavily. Why is this the case? Perhaps because the script had been kicking around for a while (which is true). This may also explain why Evelyn’s pep talk to funders includes a line about Einstein saying something ‘nearly fifty years ago’. (Einstein died in 1955.) Or, for that matter, why the FBI agent (played by Irish actor Cillian Murphy) looks like something out of a 1970s TV detective series, the on-site military commander looks like George C. Scott, and the great quantum computing mecca is located in a town that looks frozen in the 1950s. Perhaps we are seeing here the dawn of ‘steampunk’ for the late 20th century.

(2) The film contains heavy Christian motifs, mainly surrounding Paul Bettany’s character, Max Waters, who turns out to be the only survivor of the core research team involved in uploading consciousness. He wears a cross around his neck, which pops up at several points in the film. Moreover, once Max is abducted by RIFT, he learns that his writings querying whether digital uploading enhances or obliterates humanity have been unwittingly inspirational. Max and Will can be contrasted in terms of where they stand in relation to the classic Faustian bargain: Max refuses what Will accepts (quite explicitly, in response to the person who turns out to be his assassin). At stake is whether our biblically privileged status as creatures entitles us to take the next step to outright deification, which in this case means merging with the source of all knowledge on the internet. To underscore the biblical dimension of the dilemma, toward the end of the film, Max confronts Evelyn (Eve?) with the realization that she was the one who nudged Will toward this crisis. Yet the film’s overall verdict on his Faustian fall is decidedly mixed. Once uploaded, Will does no permanent damage, despite the viewer’s expectations. On the contrary, like Jesus, he manages to cure the ill, and even when battling the amassed powers of the US government and RIFT, he ends up not killing anyone. However, the viewer is led to think that Will 2.0 may have overstepped the line when he revealed his ability to monitor Evelyn’s thoughts. So the real transgression appears to lie in the violation of privacy. (The Snowdenistas would be pleased!) But the film leaves the future quite open, as what the viewer sees in the opening and final scenes looks more like the result of an extended blackout (and hints are given that some places have already begun to restore their ICT infrastructure) than anything resembling irreversible damage to life as we know it. One can read this either as a warning of greater damage ahead if we go down the ‘transcendence’ route, or as a suggestion that such a route might be worth pursuing if we manage to sort out the ‘people issues’. Given that Max ends the film by eulogising Will and Evelyn’s attempts to benefit humanity, I read the film as cautiously optimistic about the prospects for ‘transcendence’, with the film’s plot taken as offering a simulated trial run.

My own final judgement is that this film would be very good for classroom use to raise the entire range of issues surrounding what I have called ‘Humanity 2.0’.

White Swan Update by Andres Agostini at https://lifeboat.com/blog/2014/04/white-swan


This House’s “Bioconcrete” Turns Every Drop Of Rain Into Drinking Water http://www.fastcoexist.com/3030070/this-house-uses-bioconcre…king-water

Google Skunk Works May Tackle Energy and Agriculture http://www.21stcentech.com/google-skunk-works-tackle-agriculture/

Semi-synthetic bug extends ‘life’s alphabet’ http://www.bbc.com/news/science-environment-27329583

But What Would the End of Humanity Mean for Me? http://www.theatlantic.com/health/archive/2014/05/but-what-d…me/361931/

Molecular high-speed origami: Researchers elucidate important mechanism of protein folding http://phys.org/news/2014-05-molecular-high-speed-origam…rtant.html

Only 2% Of People Can Actually Multitask — This Test Will Tell You If You Are One Of Them http://www.businessinsider.com/multitasker-test-tells-you-if…z31I9DitM6

As AI Advances into ‘Deep Learning,’ are Robot Butlers on the Horizon? http://www.livescience.com/45482-robot-butlers-deep-learning.html

Scientists create new lifeform with added DNA base pair http://www.kurzweilai.net/scientists-create-new-lifeform-wit…-base-pair

GaitTrack app makes cellphone a medical monitor for heart and lung patients http://www.kurzweilai.net/gaittrack-app-makes-cellphone-a-me…g-patients

The White Swan Treatise at https://lifeboat.com/blog/2014/04/white-swan