
In the era of information overload, it is difficult to find a study that presents in a nutshell the global situation as a whole along with potential future perspectives. This is exactly what the 2013–14 State of the Future, a new report by The Millennium Project, tries to do in a comprehensive and readable way. Launched at the Woodrow Wilson International Center for Scholars in Washington, DC, this report about the future of humanity is a distillation of the work of over 2,000 international experts contributing through the 50 Nodes of The Millennium Project around the world, from Argentina to Azerbaijan, from China to Colombia, from South Africa to South Korea, from the UK to the USA. It is “an informative publication that gives invaluable insights into the future for the United Nations, its Member States, and civil society,” said UN Secretary-General Ban Ki-moon, and “the most influential annual report on what we know about the future of humanity,” noted Paul Werbos of the National Science Foundation.

Half of the report covers the 15 Global Challenges that were defined by The Millennium Project in 1998, after an international Delphi expert survey, and were used as additional input for the Millennium Development Goals in 2000. Since then, The Millennium Project has been assessing the yearly evolution of these challenges with quantitative indicators and comprehensive qualitative analysis. But why are these global challenges so important? Well, as my friend Peter Diamandis, CEO of the X Prize Foundation and co-founder of Singularity University, likes to say: “the greatest challenges are also the greatest opportunities”. Indeed, with every challenge there is a huge opportunity to improve the human condition, as well as create new businesses, jobs and economic activity.

Let’s quickly consider these 15 global challenges, not in any specific order, since they are all equally important and fundamental to the long-term development and survival of humanity:

1. How can sustainable development be achieved for all while addressing global climate change?

2. How can everyone have sufficient clean water without conflict?

3. How can population growth and resources be brought into balance?

4. How can genuine democracy emerge from authoritarian regimes?

5. How can decision-making be enhanced by integrating improved global foresight during unprecedented accelerating change?

6. How can the global convergence of information and communications technologies work for everyone?

7. How can ethical market economies be encouraged to help reduce the gap between rich and poor?

8. How can the threat of new and reemerging diseases and immune micro-organisms be reduced?

9. How can education make humanity more intelligent, knowledgeable, and wise enough to address its global challenges?

10. How can shared values and new security strategies reduce ethnic conflicts, terrorism, and the use of weapons of mass destruction?

11. How can the changing status of women help improve the human condition?

12. How can transnational organized crime networks be stopped from becoming more powerful and sophisticated global enterprises?

13. How can growing energy demands be met safely and efficiently?

14. How can scientific and technological breakthroughs be accelerated to improve the human condition?

15. How can ethical considerations become more routinely incorporated into global decisions?

If there are global challenges, let’s solve them, and let’s make money in the process, while saving humanity along the way. This is part of the Silicon Valley mentality, where every problem is also an incredible opportunity for new ideas and solutions. Think of Google and its technology “moonshots”, or the new X Prizes, among several such initiatives around the world. Exponential technological advances are giving us incredible tools to solve many of these challenges, if not all of them.

For example, let’s consider the energy challenge (global challenge 13 in The Millennium Project list) and the critical condition of the 1.3 billion people around the world who still have no access to electricity. Without any doubt, this is a major global challenge, but it is also a major global opportunity. The energy industry is worth about eight trillion dollars every year, and it will change radically in the coming years, moving from fossil fuels to renewables and from centralized to distributed systems. These are totally disruptive changes, similar to what happened in telecommunications during the transition from fixed-line to mobile telephones. For the first time in history, it is possible to think that in less than 20 years every human being on the planet will have access to electricity. The energy industry is just beginning a technological disruption similar to the one that put cell phones in every corner of the planet over the last 20 years.

The opportunities for both developed and developing countries are enormous. My friend Vivek Wadhwa, an Indian-American technology entrepreneur and academic, has written extensively about the incredible opportunities that technology will bring to solve the global grand challenges of humanity. He talks not only about the positive prospects for the USA, but also about those around the world, including his native India. Wadhwa believes that “technology can unleash India’s full potential” through smartphones, Internet transparency, a health care revolution, cheap tablets for education, new water sanitation, agricultural automation, and harnessing the impressive talents of the young. Such ideas are valid not just for India, but all over the developing world, and even in some parts of the developed world.

We are truly living through incredible times, and thanks to technology, we will probably see more changes in the next 20 years than in the previous 200 years. Now is really the time to “make poverty history,” as the United Nations and other international organizations are trying to do through the global campaign to eradicate poverty over the next two decades. In fact, even Bill Gates wrote in his 2014 annual letter that eliminating poverty is finally within our grasp by 2035. This time it is for real, and these global challenges are also the greatest opportunities for humanity.

José Cordeiro, MBA, PhD (www.cordeiro.org)

The 2013–14 State of the Future is available at http://millennium-project.org/millennium/201314SOF.html and real-time updates of this work are in the Global Futures Intelligence System at GFIS.

Computers will soon be able to simulate the functioning of a human brain. In the near future, artificial superintelligence could become vastly more intellectually capable and versatile than humans. But could machines ever truly experience the whole range of human feelings and emotions, or are there technical limitations?

In a few decades, intelligent and sentient humanoid robots will wander the streets alongside humans, work with humans, socialize with humans, and perhaps one day be considered individuals in their own right. Research in artificial intelligence (AI) suggests that intelligent machines will eventually be able to see, hear, smell, sense, move, think, create and speak at least as well as humans. They will feel emotions of their own, and probably one day also become self-aware.

There may not be any reason per se to want sentient robots to experience exactly all the emotions and feelings of a human being, but it may be interesting to explore the fundamental differences in the way humans and robots can sense, perceive and behave. Tiny genetic variations between people can result in major discrepancies in the way each of us thinks, feels and experiences the world. If we appear so diverse despite the fact that all humans are on average 99.5% genetically identical, even across racial groups, how could we possibly expect sentient robots to feel exactly the same way as biological humans? There could be striking similarities between us and robots, but also drastic divergences on some levels. This is what we will investigate below.

MERE COMPUTER OR MULTI-SENSORY ROBOT?

Computers are undergoing a profound mutation at the moment. Neuromorphic chips have been designed based on the way the human brain works, modelling its massively parallel neurological processes with artificial neural networks. This will enable computers to process sensory information such as vision and hearing much more like animals do. Considerable research is currently devoted to creating a functional computer simulation of the whole human brain. The Human Brain Project aims to achieve this by 2016. Does that mean that computers will finally experience feelings and emotions like us? Surely if an AI can simulate a whole human brain, then it becomes a sort of virtual human, doesn’t it? Not quite. Here is why.
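As a very loose illustration of what “massively parallel processing with artificial neural networks” means in practice, here is a minimal Python sketch. The layer sizes, random weights and use of NumPy are arbitrary assumptions for illustration; this is not how any neuromorphic chip or the Human Brain Project actually models the brain:

```python
# Illustrative toy only: a two-layer artificial neural network processing a
# fake "sensory" input vector, loosely analogous to parallel neural processing.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of artificial 'neurons': weighted sums followed by a nonlinearity."""
    return np.tanh(weights @ inputs + biases)

sensory_input = rng.random(16)                   # e.g. 16 pixel intensities from a camera patch

w1, b1 = rng.normal(size=(8, 16)), np.zeros(8)   # randomly initialised "synaptic" weights
w2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

hidden = layer(sensory_input, w1, b1)            # every "neuron" weighs all inputs in parallel
response = layer(hidden, w2, b2)                 # a small output "response" vector
print(response)
```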

There is an important distinction to be made from the outset between an AI residing solely inside a computer with no sensors at all, and an AI that is equipped with a robotic body and sensors. A computer alone would have a far more limited range of emotions, as it wouldn’t be able to physically interact with its environment. The more sensory feedback a machine can receive, the wider the range of feelings and emotions it will be able to experience. But, as we will see, there will always be fundamental differences between the kinds of sensory feedback that a biological body and a machine can receive.

Here is an illustration of how limited an AI is emotionally without a sensory body of its own. In animals, fear, anxiety and phobias are evolutionary defense mechanisms aimed at raising vigilance in the face of danger. Our bodies work with biochemical signals involving hormones and neurotransmitters sent by the brain to prompt a physical action when our senses perceive danger. Computers don’t work that way. Without sensors feeding them information about their environment, computers wouldn’t be able to react emotionally.

Even if a computer could remotely control machines like robots (e.g. through the Internet) that are endowed with sensory perception, the computer itself wouldn’t necessarily care if the robot (a discrete entity) were harmed or destroyed, since that would have no physical consequence for the AI itself. An AI could fear for its own well-being and existence, but how is it supposed to know that it is in danger of being damaged or destroyed? It would be like a person who is blind and deaf and whose somatosensory cortex has been destroyed. Without feeling anything about the outside world, how could it perceive danger? That problem disappears once the AI is given at least one sense, like a camera to see what is happening around it. Now if someone comes toward the computer with a big hammer, it will be able to fear for its existence!
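To make that argument concrete, here is a deliberately simplified sketch. The class and method names are hypothetical, not any existing AI framework: an agent with no sensor has no channel through which danger can register, while the same agent given a single “camera” reading can at least react to what it sees.

```python
# Toy illustration with invented names: without at least one sensor, the outside
# world simply does not exist for the AI, so there is nothing to be afraid of.
from typing import Callable, Optional

class ToyAgent:
    def __init__(self, camera: Optional[Callable[[], str]] = None):
        self.camera = camera  # a single sense, or None for a sensorless computer

    def assess_threat(self) -> str:
        if self.camera is None:
            return "no perception, no fear"   # no sensory channel at all
        scene = self.camera()                 # one reading from its only sense
        return "fear: possible damage" if "hammer" in scene else "calm"

print(ToyAgent().assess_threat())                                            # no perception, no fear
print(ToyAgent(camera=lambda: "person with a big hammer").assess_threat())   # fear: possible damage
```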

WHAT CAN MACHINES FEEL?

In theory, any neural process can be reproduced digitally in a computer, even though the brain is mostly analog. This is hardly a concern, as Ray Kurzweil explained in his book How to Create a Mind. However, it does not always make sense to try to replicate in a machine everything a human being feels.

While sensory feelings like heat, cold or pain could easily be felt from the environment if the machine is equipped with the appropriate sensors, this is not the case for other physiological feelings like thirst, hunger and sleepiness. These feelings alert us to the state of our body and are normally triggered by hormones such as vasopressin, ghrelin or melatonin. Since machines have neither a digestive system nor hormones, it would be downright nonsensical to try to emulate such feelings.

Emotions do not arise for no reason. They are either a reaction to an external stimulus, or a spontaneous expression of an internal thought process. For example, we can be happy or joyful because we received a present, got a promotion or won the lottery. These are external causes that trigger the emotions inside our brain. The same emotion can be achieved as the result of an internal thought process. If I manage to find a solution to a complicated mathematical problem, that could make me happy too, even if nobody asked me to solve it and it does not have any concrete application in my life. It is a purely intellectual problem with no external cause, but solving it confers satisfaction. The emotion could be said to have arisen spontaneously from an internalized thought process in the neocortex. In other words, solving the problem in the neocortex causes the emotion in another part of the brain.

An intelligent computer could also generate emotions based on its own thought processes, just like the joy or satisfaction experienced by solving a mathematical problem. In fact, as long as it is allowed to communicate with the outside world, there is no major obstacle to a computer feeling true emotions of its own, such as joy, sadness, surprise, disappointment, fear, anger or resentment, among others. These are all emotions that can be produced by interactions through language (e.g. reading, online chatting) with no need for physiological feedback.

Now let’s think about how and why humans experience a sense of well-being and peace of mind, two emotions far more complex than joy or anger. Both occur when our physiological needs are met: when we are well fed and rested, feel safe, don’t feel sick, and are on the right track to pass on our genes and keep our offspring secure. These are compound emotions that require other basic emotions as well as physiological factors. A machine without physiological needs, which cannot get sick and does not need to worry about passing on its genes to posterity, would have no reason to feel that complex emotion of well-being the way humans do. For a machine, well-being may exist, but in a much more simplified form.

Just as machines cannot reasonably feel hunger because they do not eat, replicating emotions in machines with no biological body, no hormones and no physiological needs can be tricky. This is the case with social emotions like attachment, sexual emotions like love, and emotions originating from evolutionary mechanisms set in the (epi)genome. This is what we will explore in more detail below.

FEELINGS ROOTED IN THE SENSES AND THE VAGUS NERVE

What really distinguishes intelligent machines from humans and animals is that machines do not have a biological body. This is essentially why they could not experience the same range of feelings and emotions as we do, since many of those feelings inform us about the state of our biological body.

An intelligent robot with sensors could easily see, hear, detect smells, feel an object’s texture, shape and consistency, feel pleasure and pain, heat and cold, and the like. But what about the sense of taste? Or the effects of alcohol on the mind? Since machines do not eat, drink or digest, they wouldn’t be able to experience these things. A robot designed to socialize with humans would be unable to understand and share the feelings of gastronomical pleasure or inebriation with humans. It could have a theoretical knowledge of them, but not first-hand knowledge from an actually felt experience.

But the biggest obstacle to simulating physical feelings in a machine comes from the vagus nerve, which controls such varied things as digestion, ‘gut feelings’, heart rate and sweating. When we are scared or disgusted, we feel it in our guts. When we are in love we feel butterflies in our stomach. That’s because of the way our nervous system is designed. Quite a few emotions are felt through the vagus nerve connecting the brain to the heart and digestive system, so that our body can prepare to court a mate, fight an enemy or escape in the face of danger, by shutting down digestion, raising adrenaline and increasing heart rate. Feeling disgusted can help us vomit something that we have swallowed and shouldn’t have.

Strong emotions can affect our microbiome, the trillions of gut bacteria that help us digest food and that secrete 90% of the serotonin and 50% of the dopamine used by our brain. The thousands of species of bacteria living in our intestines can vary quickly based on our diet, but it has been demonstrated that emotional states like stress, anxiety, depression and love can also strongly affect the composition of our microbiome. This is very important because of the essential role that gut bacteria play in maintaining our brain functions. The relationship between gut and brain works both ways: the presence or absence of some gut bacteria has been linked to autism, obsessive-compulsive disorder and several other psychological conditions. What we eat actually influences the way we think too, by changing our gut flora and therefore the production of neurotransmitters. Even our intuition is linked to the vagus nerve, hence the expression ‘gut feeling’.

Without a digestive system, a vagus nerve and a microbiome, robots would miss a big part of our emotional and psychological experience. Our nutrition and microbiome influence our brain far more than most people suspect. They are one of the reasons why our emotions and behaviour are so variable over time (in addition to maturity; see below).

SICKNESS, FATIGUE, SLEEP AND DREAMS

Another key difference between machines and humans (or animals) is that our emotions and thoughts can be severely affected by our health, physical condition and fatigue. Irritability is often an expression of mental or physical exhaustion caused by a lack of sleep or nutrients, or by a situation that puts excessive stress on our mental faculties and increases our need for sleep and nutrients. We could argue that computers may overheat if used too intensively, and may also need to rest. That is not entirely true if the hardware is properly designed with a super-efficient cooling system and a steady power supply. New types of nanochips may not produce enough heat to have any overheating problem at all.

Most importantly, machines don’t feel sick. I don’t mean just being weakened by a disease or feeling pain, but actually feeling sick, such as indigestion, nausea (motion sickness, sea sickness), or feeling under the weather before tangible symptoms appear. These aren’t enviable feelings, of course, but the point is that machines cannot experience them without a biological body and an immune system.

When tired or sick, not only do we need to rest to recover our mental faculties and stabilize our emotions, we also need to dream. Dreams are used to clear our short-term memory cache (in the hippocampus), to replenish neurotransmitters, to consolidate memories (by myelinating synapses during REM sleep), and to let go of the day’s emotions by letting our neurons fire freely. Dreams also allow a different kind of thinking, free of cultural or professional taboos, that increases our creativity. This is why we often come up with great ideas or solutions to our problems during our sleep, notably during the lucid dreaming phase.

Computers cannot dream and wouldn’t need to, because they aren’t biological brains with neurotransmitters, stressed-out neurons and synapses that need to get myelinated. Yet without dreams, an AI would lack an essential component of what it is to feel like a biological human.

EMOTIONS ROOTED IN SEXUALITY

Being in love is an emotion that brings a male and a female individual (save for some exceptions) of the same species together in order to reproduce and raise their offspring until they grow up. Sexual love is caused by hormones, but it is not merely the product of hormonal changes in the brain. It involves changes in the biochemistry of our whole body and can even lead to important physiological effects (e.g. on morphology) and long-term behavioural changes. Clearly, sexual love is not ‘just an emotion’, nor is it a purely neurological process. Replicating the neurological expression of love in an AI would not simulate the whole emotion of love, but only one of its facets.

Apart from the issue of reproducing the physiological expression of love in a machine, there is also the question of causation. There is a huge difference between an artificially implanted or simulated emotion and one that is capable of arising by itself from environmental causes. People can fall in love for a number of reasons, such as physical attraction and mental attraction (shared interests, values, tastes, etc.), but one of the most important in the animal world is genetic compatibility with the prospective mate. Individuals who possess very different immune systems (HLA genes), for instance, tend to be more strongly attracted to each other and feel more ‘chemistry’. We could imagine that a robot with a sense of beauty and values could appreciate the looks and morals of another robot or a human being and even feel attracted (platonically). Yet a machine couldn’t experience the ‘chemistry’ of sexual love because it lacks the hormones, genes and other biochemical markers required for sexual reproduction. In other words, robots could have friends but not lovers, and that makes sense.

A substantial part of the range of human emotions and behaviours is anchored in sexuality. Jealousy is another good example. Jealousy is intricately linked to love: it is the fear of losing one’s loved one to a sexual rival. It is an innate emotion whose only purpose is to maximize our chances of passing on our genes through sexual reproduction by warding off competitors. Why would a machine, which does not need to reproduce sexually, need to feel that?

One could wonder what difference it makes whether a robot can feel love or not. Robots don’t need to reproduce sexually, so who cares? If we need intelligent robots to work with humans in society, for example by helping to take care of the young, the sick and the elderly, couldn’t they still function as social individuals without feeling sexual love? In fact, you may not want a humanoid robot to become a sexual predator, especially if it works with kids! Not so fast. Without a basic human emotion like love, an AI simply cannot think, plan, prioritize and behave the same way humans do. Its way of thinking, planning and prioritizing would rely on completely different motivations. Young human adults, for example, spend considerable time and energy searching for a suitable mate in order to reproduce.

A robot endowed with an AI of human or greater intelligence, but lacking the need for sexual reproduction, would behave, plan and prioritize its existence very differently than humans. That is not necessarily a bad thing, for a lot of conflicts in human society are caused by sex. But it also means that it could become harder for humans to predict the behaviour and motivations of autonomous robots, which could be a problem once they become more intelligent than us in a few decades. The bottom line is that by lacking just one essential human emotion (let alone many), intelligent robots could have very divergent behaviours, priorities and morals from humans. They could be different in a good way, but we can’t know that for sure at present, since they haven’t been built yet.

TEMPERAMENT AND SOCIABILITY

Humans are social animals. They typically, though not always (e.g. in some types of autism), seek to belong to a group, make friends, share feelings and experiences with others, gossip, seek approval or respect from others, and so on. Interestingly, a person’s sociability depends on a variety of factors not found in machines, including gender, age, level of confidence, health, well-being, genetic predispositions, and hormonal variations.

We could program an AI to mimic a certain type of human sociability, but it wouldn’t naturally evolve over time with experience and environmental factors (food, heat, diseases, endocrine disruptors, the microbiome). Knowledge can be learned, but spontaneous reactions to environmental factors cannot.

Humans tend to be more sociable when the weather is hot and sunny, when they drink alcohol and when they are in good health. A machine has no need to react like that, unless once again we intentionally program it to resemble humans. But even then it couldn’t feel everything we feel as it doesn’t eat, doesn’t have gut bacteria, doesn’t get sick, and doesn’t have sex.

MATERNAL WARMTH AND FEELING OF SAFETY IN MAMMALS

Humans, like all mammals, have an innate need for maternal warmth in childhood. In Harry Harlow’s famous experiments, newborn rhesus monkeys were taken away from their biological mother and placed in a cage with two dummy mothers. One of them was warm, fluffy and cosy, but did not provide milk. The other was hard, cold and uncosy, but provided milk. The infant monkeys consistently chose the cosy one, demonstrating that the need for comfort and safety trumps nutrition in infant mammals. Likewise, humans deprived of maternal (or paternal) warmth and care as babies almost always experience psychological problems growing up.

In addition to childhood care, humans also need the feeling of safety and cosiness provided by the shelter of one’s home throughout life. Not all animals are like that. Even as hunter-gatherers or pastoralist nomads, all Homo sapiens need a shelter, be it a tent, a hut or a cave.

How could we expect that kind of reaction and behaviour in a machine that does not need to grow from babyhood to adulthood, cannot know what it is to have parents or siblings, does not need to feel reassured by maternal warmth, and has no biological compulsion to seek shelter? Without those feelings, it is extremely doubtful that a machine could ever truly understand and empathize completely with humans.

These limitations mean that it may be useless to try to create intelligent, sentient and self-aware robots that truly think, feel and behave like humans. Reproducing our intellect, language and senses (except taste) is the easy part. Then comes consciousness, which is harder but still feasible. But since our emotions and feelings are so deeply rooted in our biological body and its interaction with its environment, the only way to reproduce them would be to reproduce a biological body for the AI. In other words, we are not talking about creating a machine anymore, but about genetically engineering a new life form, or using neural implants for existing humans.

MACHINES DON’T MATURE

The way humans experience emotions evolves dramatically from birth to adulthood. Children are typically hyperactive and excitable and are prone to making rash decisions on impulse. They cry easily and have difficulty containing and controlling their emotions and feelings. As we mature, we learn, more or less successfully, to master our emotions. Controlling one’s emotions actually gets easier over time because with age the number of neurons in the brain decreases, emotions get blunter and vital impulses weaker.

The expression of one’s emotions is heavily regulated by culture and taboos. That’s why speakers of Romance languages will generally express their feelings and affection more freely than, say, Japanese or Finnish people. Would intelligent robots also follow one specific human culture, or create a culture on their own ?

Sex hormones also influence the way we feel and express emotions. Male testosterone makes people less prone to emotional display, more rational and cold, but also more aggressive. Female estrogens increase empathy, affection and maternal instincts of protection and care. A good example of the role of biology in emotions is the way women’s hormonal cycles (and the resulting menstruations) affect their emotions. One of the reasons that children process emotions differently than adults is that they have lower levels of sex hormones. As people age, hormonal levels decrease (not just sex hormones), making us more mellow.

Machines don’t mature emotionally, do not go through puberty, do not have hormonal cycles, and do not undergo hormonal changes based on their age, diet and environment. An artificial intelligence could learn from experience and mature intellectually, but it could not mature emotionally like a child becoming an adult. This is a vital difference that shouldn’t be underestimated. Program an AI to have the emotional maturity of a 5-year-old and it will never grow up. Children (especially boys) cannot really understand the reasons for their parents’ anxiety toward them until they grow up and have children of their own, because they lack the maturity and sex hormones associated with parenthood.

We could always run software emulating changes in an AI’s maturity over time, but those changes would not be the result of experiences and interactions with the environment. It may not be useful to create robots that mature like us, but the argument debated here is whether machines could ever feel exactly like us or not. This argument is not purely rhetorical. Some transhumanists wish to be able one day to upload their minds onto a computer and transfer their consciousness (which may not be possible for a number of reasons). Assuming that it becomes possible, what if a child or teenager decides to upload his or her mind and lead a new robotic existence? One obvious problem is that this person would never fulfill his or her potential for emotional maturity.

The loss of our biological body would also deprive us of our capacity to experience feelings and emotions bound to our physiology. We may be able to keep those already stored in our memory, but we may never dream, enjoy food, or fall in love again.

SUMMARY & CONCLUSION

What emotions could machines experience?

Even though many human emotions are beyond the range of machines due to their non-biological nature, some emotions could very well be felt by an artificial intelligence. These include, among others:

  • Joy, satisfaction, contentment
  • Disappointment, sadness
  • Surprise
  • Fear, anger, resentment
  • Friendship
  • Appreciation for beauty, art, values, morals, etc.

What emotions and feelings would machines not be able to experience?

The following emotions and feelings could not be wholly or faithfully experienced by an AI, even with a sensing robotic body, beyond mere implanted simulation.

  • Hunger, thirst, drunkenness, gastronomical enjoyment
  • Various feelings of sickness, such as nausea, indigestion, motion sickness, sea sickness, etc.
  • Sexual love, attachment, jealousy
  • Maternal/paternal instincts towards one’s own offspring
  • Fatigue, sleepiness, irritability
  • Dreams and associated creativity

In addition, machine emotions would run up against the following issues, which would prevent them from feeling and experiencing the world truly like humans.

  • Machines wouldn’t mature emotionally with age.
  • Machines don’t grow up and don’t go through puberty to pass from a relatively asexual childhood stage to a sexual adult stage.
  • Machines cannot fall in love (along with the associated emotions, behaviours and motivations) as they aren’t sexual beings.
  • Being asexual, machines are genderless and therefore lack associated behaviour and emotions caused by male and female hormones.
  • Machines wouldn’t experience gut feelings (fear, love, intuition).
  • Machine emotions, intellect, psychology and sociability couldn’t vary with nutrition and microbiome, hormonal changes, or environmental factors like the weather.

It is not completely impossible to bypass these obstacles, but that would require creating a humanoid machine that not only possesses human-like intellectual faculties, but also has an artificial body that can eat and digest, with a digestive system connected to the central microprocessor in the same way that our vagus nerve is connected to our brain. That robot would also need a gender and the capacity to have sex and feel attracted to other humanoid robots or humans, based on predefined programming that serves as an alternative to a biological genome and creates a sense of ‘sexual chemistry’ when it is matched with an individual with a compatible “genome”. It would also necessitate artificial hormones to regulate its hunger, thirst, sexual appetite, homeostasis, and so on.
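Purely as a thought experiment, the ‘artificial hormones’ imagined above could be sketched as a simple internal-state model that a central controller polls, somewhat as our brain monitors signals carried by the vagus nerve. Every name, level and threshold below is invented for illustration; nothing like this exists today:

```python
# Thought-experiment sketch: invented "artificial hormone" levels that decay over
# time and trigger drives, loosely mimicking how hunger or thirst signals arise.
from dataclasses import dataclass, field

@dataclass
class HormoneState:
    levels: dict = field(default_factory=lambda: {"energy": 1.0, "hydration": 1.0})
    thresholds: dict = field(default_factory=lambda: {"energy": 0.3, "hydration": 0.4})

    def tick(self, hours: float = 1.0) -> None:
        """Levels decay with time, like blood sugar or hydration in a body."""
        for key in self.levels:
            self.levels[key] = max(0.0, self.levels[key] - 0.1 * hours)

    def drives(self) -> list:
        """Signals a central controller could treat as 'hunger' or 'thirst'."""
        return [k for k, v in self.levels.items() if v < self.thresholds[k]]

state = HormoneState()
for _ in range(8):          # simulate eight hours without "eating" or "drinking"
    state.tick()
print(state.drives())       # -> ['energy', 'hydration']
```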

Although we lack the technology and the in-depth knowledge of the human body needed to consider such an ambitious project any time soon, it could eventually become possible one day. One could wonder whether such a magnificent machine could still be called a machine, or simply an artificially made life form. I personally don’t think it should be called a machine at that point.

———

This article was originally published on Life 2.0.

#Exclusive: @HJBentham @ClubOfINFO responds to @Hetero_Sapien @IEET
After the reprint at the ClubOfINFO webzine of Franco Cortese’s excellent IEET (Institute for Ethics and Emerging Technologies) article about how advanced technology clashes with the Second Amendment of the US Constitution, I am interested enough that I have decided to put together this response. Changes in technology do eventually force changes in the law, and some laws ultimately have to be scrapped. However, there is an argument to be made that the Second Amendment’s deterrent against tyranny should not be dismissed too easily.
Franco points out that the Second Amendment’s “most prominent justification” is that citizens require a form of self-defense against a potentially corrupt government. In such a case, they may need to take back the state by force through a “citizen militia”.

Technology and “stateness”

The argument given by Franco against the idea of citizens engaging their government in battle leads to a conclusion that “technological growth has made the Second Amendment redundant”. Arms in the Eighteenth Century were “roughly equal” for the citizenry and the military. According to Franco’s article, “in 1791, the only thing that distinguished the defensive or offensive capability of military from citizenry was quantity. Now it’s quality.”
I believe the above point about the state monopoly on force going from being based on quantity to quality can be disputed. Franco’s analysis seems to be that the norms of warfare and the internal effectiveness of state power are set by the level of technology available to the state. Although there is of course a strong technological element in these manifestations of state power, it is more accurate to say that “stateness” – of which military power is only the international reflection – is due to a combination of having more legitimacy, resources and organization. The effectiveness of this kind of “stateness”, including the ability of the most powerful states to overcome challenges of internecine warfare, has not changed very decisively since the Nineteenth Century.
In fact, stateness is said by many analysts to have declined worldwide since the fall of the Berlin Wall. Since that event and the subsequent dissolution of the USSR, the number of states facing internal crisis seems to have only risen, which suggests stateness is being weakened globally due to many complex pressures. Advanced technology is itself even credited with eroding stateness, as transport and the Internet only give citizens ever more abilities to get around, provoke, rebel and ultimately erode the strength and legitimacy of the state. In most arenas of social change, states face unprecedented challenges from their own citizens because of the unexpected changes in advanced technology that have taken place over the last few decades. Concerning the future of this trend, Franco aptly anticipates in his article that “post-scarcity” technologies would make things even more uncomfortable for the state, pushing it to rely on secrecy and suppression of knowledge to avoid proliferation of devastating weapons.
Much of this commentary on the loss of stateness may seem irrelevant to the right to bear arms in the United States, but it is relevant for reasons that will become clear in this article. We cannot say that the US government has a true monopoly on force due to its technology, and that the potential of a citizen uprising is gone. We have seen too many other “modern” states such as Yugoslavia, Somalia, Lebanon, Libya, Syria and Mali quickly deteriorate into full scale civil war just because groups of determined citizens took up light weapons (many of those rebels have far less skill and technology at their disposal than the average US gun owner).

Internecine warfare in the United States

From what we have seen of civil war in other countries, we cannot know that simple rifles and handguns really are a useless path of resistance against a modern state tyranny, just because the tyrants will have more lethal options such as cluster bombs and nerve gas. Even the most crudely armed insurrectionists are capable of overthrowing their governments, if they are determined and numerous enough. Having a lightly armed population from the outset, like the US population, only makes it more likely that such a war against tyranny would be ubiquitous and would succeed swiftly.
If we do take the unlikely position of supposing that the United States will degenerate into a true tyranny in the Aristotelian definition, then US citizens certainly need their right to bear arms. More than that, a path of armed resistance using those light weapons could still realistically win. If their cause were just, we can suppose that the citizens would be battling in self-defense against a tyrannical regime with plummeting legitimacy, or buying time for contingents of the military to break off and join the rebellion. In such a situation, the sheer number of citizens taking up arms would do more than just demoralize government troops and lead to indecision among them.
The fact of a generally well-armed population would, if they took up arms against their regime, guarantee the existence of a widespread insurgency to such an extent that the rulers would face many years of internecine resistance and live under the constant specter of assassination. Add the internal economic devastation caused by citizens committing acts of sabotage and civil disobedience, foreign sanctions by other states, and even international aid to the insurgents by external actors, and the tyrants could be ousted even by the most lightly armed militia units.
Explaining the imbalance that has prevailed between the military might of states and the internal ability of citizens to resist their ruling regimes with arms, Franco notes that the “overwhelming majority of new technological advances are able to be leveraged by the military before they trickle down to the average citizen through industry.” This is certainly true. However, the summation that resistance is futile would not take into account the treacherous opportunities that exist in every internecine war.
When the state projects force internally, it prefers to call that “law enforcement” for as long as it remains in control of the situation. Even if the violence gets more widespread and becomes civil war, the state denies such a fact until the very last moment. Even then, it prefers to minimize the damage on its own territory, because the damage would ultimately have to be repaired and paid for by the state itself. Even in a civil war situation, the technology brought to bear against citizens by the government would never be as heavy or destructive as the kind of equipment brought to bear against foreign states or non-state actors. This is for the simple reason that the state, in a civil war, has to try to avoid obliterating its own constituents and infrastructure for political reasons. If it is caught committing such a desperate and disproportionate act, it will only undermine itself and give a propaganda coup to its lightly equipped opponents by committing a heavy-handed atrocity.
The imbalance of the superior technology of the United States government in contrast to the basic handguns and rifles of its citizenry is real, but it would have zero significance if a real internecine war took place in the United States. The deadliest weapons in the arsenal of the United States, such as nuclear or biological weapons, would never be used to confront internecine threats, so they are not relevant enough to enter the debate on the Second Amendment.
The concept of taking back government via a citizen militia is not about defeating a whole nation in the conventional sense through raw military strength, but rather about a multifaceted political struggle in which the nation is able to confront and defeat the ruling regime via some form of internecine combat. The US would tend to prefer handling militant and “terrorist” adversaries on its own territory with the bare minimum of heavy equipment and ordnance at all times. Given this, the real technological contest would only be between opposing marksmen and their rifles (any advanced firearms would soon be seized by guerrillas and used back against the state). No ridiculously unbalanced battle with tanks, nukes and generals on one side and “simple folks” with shotguns on the other side would take place. In most civil wars, the use of tanks and warplanes (never mind nukes) only tends to make matters worse for the ruling government by hitting bystanders and further alienating the people on the ground. The US military leadership should know this better than anyone else, having condemned regime after regime for making that same mistake of heavy-handed escalation.
Anti-tyranny insurgency using only light (and easily hidden) armaments is as viable in 2014 as it was in the Eighteenth Century: any sufficiently unpopular regime can be delegitimized and ultimately removed from power by the armed resistance of lightly equipped militia forces.
Franco’s conclusion that the US should neither extend the Second Amendment to cover giving everyone access to ridiculously devastating weapons, nor scrap the Second Amendment altogether, is wise and relevant to helping US society make some difficult decisions. Law (and by extension stateness) is “uncertain in the face of technologies’ upward growth.” States that want to remain popular should try to be as adaptive as possible to new (and old) technologies and ideas, and not be swayed by any single narrow-minded idea or program for society. If the American people distrust their system of government enough to keep their right to bear arms, for fear of tyranny, then the Second Amendment ought to remain.

By Harry J. Bentham
More articles by Harry J. Bentham

This article originally appeared at the techno-politics magazine, ClubOfINFO

By Teppei Kasai and Yoshiyasu Shida — Reuters
SoftBank Corp. unveils human-like robots named ‘Pepper’ at the company’s news conference in Urayasu, east of Tokyo, June 5, 2014. REUTERS/Issei Kato
(Reuters) — Japan’s SoftBank Corp said on Thursday it will start selling human-like robots for personal use by February, expanding into a sector seen as key to addressing labour shortages in one of the world’s fastest-ageing societies.

The robots, which the mobile phone and Internet conglomerate envisions serving as baby-sitters, nurses, emergency medical workers or even party companions, will sell for 198,000 yen ($1,900) and are capable of learning and expressing emotions, Softbank CEO Masayoshi Son told a news conference.

A prototype will be deployed this week, serving customers at SoftBank mobile phone stores in Japan, he added. The sleek, waist-high robot, named Pepper, accompanied Son to the briefing, speaking to reporters in a high-pitched, boyish voice.

“People describe others as being robots because they have no emotions, no heart. For the first time in human history, we’re giving a robot a heart, emotions,” Son said.

Read More

By Dante D’Orazio — The Verge

Eugene Goostman seems like a typical 13-year-old Ukrainian boy — at least, that’s what a third of judges at a Turing Test competition this Saturday thought. Goostman says that he likes hamburgers and candy and that his father is a gynecologist, but it’s all a lie. This boy is a program created by computer engineers led by Russian Vladimir Veselov and Ukrainian Eugene Demchenko.

That a third of judges were convinced that Goostman was a human is significant — at least 30 percent of judges must be swayed for a computer to pass the famous Turing Test. The test, created by legendary computer scientist Alan Turing in 1950, was designed to answer the question “Can machines think?” and is a well-known staple of artificial intelligence studies.
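For clarity, the pass criterion described above is just a fraction check; the snippet below uses hypothetical judge counts (only the 30 percent threshold comes from the article):

```python
# Hypothetical numbers: if roughly a third of judges are convinced, the 30% bar is met.
convinced, total_judges = 10, 30
fraction = convinced / total_judges
print(f"{fraction:.0%} of judges convinced -> {'passes' if fraction >= 0.30 else 'fails'}")
# -> 33% of judges convinced -> passes
```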

Read More

Lifeboat Foundation Worldwide Ambassador White Swan Update and Published Amazon Author by Andres Agostini at www.amazon.com/author/agostini


In the black holes’ pull. Astronomers discover that magnetic fields in the vicinity of supermassive black holes can equal the force of gravity http://www.mpg.de/8256277/magnetic-fields_supermassive-black-holes

In the black holes’ pull Astronomers discover that magnetic fields in the vicinity of supermassive black holes http://www.linkedin.com/today/post/article/20140609021814&#4…of-gravity

At the heart of the antimatter mystery http://www.mpg.de/8242689/proton_magnetic_moment

“At the heart of the antimatter mystery” by @SciCzar on @LinkedIn http://www.linkedin.com/today/post/article/20140609022248&#4…er-mystery

Change of perspective in the electronic landscape http://www.mpg.de/8242301/bismuth_energy-distribution

“The conventional electronic model for metals is not valid for bismuth” by @SciCzar on @LinkedIn http://www.linkedin.com/today/post/article/20140609022555&#4…or-bismuth

Environmental change leaves its footprint in the epigenome http://www.mpg.de/8238267/environment-epigenome

Environmental change leaves its footprint in the epigenome http://www.linkedin.com/today/post/article/20140609022848&#4…-wild-mice

Extortioners are only temporarily successful http://www.mpg.de/8234305/extortioners-social-conduct

A game theory experiment demonstrates that people only let themselves be extorted to http://www.linkedin.com/today/post/article/20140609023239&#4…d-extent-t

The pirate in the microbe. Bacteria do not move entirely randomly as they scout the surface of host cells http://www.mpg.de/8233992/motion-bacteria-tug-of-war

The pirate in the microbe Bacteria do not move entirely randomly as they scout the surface of host cells http://www.linkedin.com/today/post/article/20140609023539&#4…host-cells

Outgrowing emotional egocentricity. Max Planck researchers discover a region of the brain that enables children to overcome emotional self-centeredness as they mature

http://www.mpg.de/8229113/children-emotional-egocentricity

Gamma rays from the core of active galaxies. Observations show that outbursts occur in the nuclear regions of active galaxies http://www.mpg.de/8225053/blazar-gamma-rays

Gamma rays from the core of active galaxies www.linkedin.com/today/post/article/20140609024028–34427457-gamma-rays-from-the-core-of-active-galaxies-observations-show-that-outbursts-occur-in-the-nuclear-regions-of-active-galaxies

Rosetta’s target comet is becoming active. The scientific imaging system OSIRIS on board ESA’s Rosetta spacecraft witnesses the awakening of the mission’s target comet http://www.mpg.de/8212484/churyumov_gerasimenko_dust-coma

“Rosetta’s target comet is becoming active The scientific imaging system OSIRIS ” by @SciCzar on @LinkedIn http://www.linkedin.com/today/post/article/20140609024612&#4…-s-rosetta

Birth of a star in double-quick time. Scientists observe the nurseries of massive stars in our galaxy http://www.mpg.de/8206852/star-birth-milkyway

Birth of a star in double-quick time Scientists observe the nurseries of massive stars in our galaxy http://www.linkedin.com/today/post/article/20140609024910&#4…our-galaxy

A new quantum memory on the horizon http://www.mpg.de/8202685/quantum-ion-crystal

A new quantum memory on the horizon http://www.linkedin.com/today/post/article/20140609025137&#4…he-horizon

Endocrine disruptors impair human sperm function http://www.mpg.de/8201201/chemicals-fertility

Endocrine disruptors impair human sperm function Ultraviolet filters, preservatives, and …http://www.linkedin.com/today/post/article/20140609025350&#4…y-problems

There is nothing about the Science of Complexity that you can teach the Germans. Max Planck Institute for Dynamics of Complex Technical Systems http://www.mpg.de/154455/dyn_komplex_techn_systeme

Nanosats are go! Small satellites: Tiny satellites are changing the space business http://www.linkedin.com/today/post/article/20140607205338&#4…e-business

Harvard researchers find switch that causes mature liver cells to revert back to stem cell-like state http://www.linkedin.com/today/post/article/20140607210737&#4…like-state

These Are the Mining Robots That Will Colonize the Solar System http://io9.com/these-are-the-mining-robots-that-will-coloniz…aleeNewitz

“Astronomers find new type of planet: The “Mega-Earth”” by @SciCzar on @LinkedIn http://www.linkedin.com/today/post/article/20140608005255&#4…mega-earth

Heartbleed Redux: Another Gaping Wound in Web Encryption Uncovered http://www.wired.com/2014/06/heartbleed-redux-another-gaping…uncovered/

Twitter’s in Trouble. Here’s How It Can Avoid Becoming the Next AOL http://www.wired.com/2014/06/is-twitter-morphing-into-the-next-aol/

Obama: Japanese Robots ‘Were a Little Scary’ http://www.roboticstrends.com/service_healthcare/article/oba…ttle_scary

Sixth sense device allows gestures to control technology http://www.engineering.com/DesignerEdge/DesignerEdgeArticles…ology.aspx

I am preparing a one-day The Gravity Modification Workshop (more details here) and expect to conduct this workshop in the August–September 2014 time frame. I would like to gauge interest, so if you are interested in attending, please complete this short QuickSurveys survey https://www.quicksurveys.com/s/p6K3J to inform me of your interest. This survey ends June 22, 2014.

Workshop details are as follows:

Title: The Gravity Modification Workshop

Presenter: Benjamin T Solomon

Duration: 1 day

Location: Denver, CO, USA

Materials Provided: DVD of PowerPoint slides & Excel models used to discover the new physics.

Meals: Dinner & networking the previous evening; breakfast & lunch provided on the workshop day.

PC Requirements: Windows 7 or later notebooks/laptops. Note: Android, iPad & MS tablets are not suitable as they won’t execute the Excel Add-In.

Fee: Approximately $1,000. Workshop fee to be finalized closer to date.

Brochure Link: http://www.iseti.us/pdf/(00)TheGravityModificationWorkshop(2…45;07).pdf

Lifeboat Foundation Worldwide Ambassador White Swan Update and Published Amazon Author by Andres Agostini at www.amazon.com/author/agostini


The Black Bible to Extreme Omniscient Womb-to-Tomb Risk Management as per the White Swan Treatise Synthesis!: The White Swan’s Beyond Eureka and Sputnik Moments: How To Fundamentally Cope With Risks
ASIN: B00KTCVFVW http://www.amazon.com/dp/B00KTCVFVW?tag=lifeboatfound-20

Improved supercapacitors for better batteries, electric vehicles http://www.kurzweilai.net/improved-supercapacitors-for-super…c-vehicles

Wires that can store energy like batteries http://www.kurzweilai.net/wires-that-can-store-energy-like-batteries

“Improved supercapacitors for better batteries, electric vehicles” by @SciCzar on @LinkedIn http://www.linkedin.com/today/post/article/20140606210602&#4…c-vehicles

Wires that can store energy like batteries http://www.linkedin.com/today/post/article/20140606210759&#4…-batteries

NASA’s OPALS Beams Video from Space https://www.youtube.com/watch?v=1efsA8PQmDA&feature=share

N.I.H. Seeks $4.5 Billion to Try to Crack How Brains Function http://www.linkedin.com/today/post/article/20140606211355&#4…s-function

Ideal Invisibility Cloaks for Visible Light in Diffusive Media / Simple Setup Produces Surprising Results http://www.kit.edu/kit/english/pi_2014_15233.php

Space Robot Performs Surgery on Itself http://www.linkedin.com/today/post/article/20140606212724&#4…-on-itself

MIT Develops Wearable Robotic Arms Robotic arms could open a door while .… http://www.linkedin.com/today/post/article/20140606212916&#4…-real-arms

Can virtual reality therapy help alleviate chronic pain? http://www.linkedin.com/today/post/article/20140606215441&#4…ronic-pain

Earth Has a New Class of Rocks – Plastiglomerates http://www.21stcentech.com/earth-class-rocks-plastiglomerates/

This ‘connected car’ comes with gesture control and Wi-Fi — and it knows if you’re drunk http://venturebeat.com/2014/06/04/this-connected-car-comes-w…ure-drunk/

The Hidden Winners Behind Apple Inc’s HomeKit http://www.linkedin.com/today/post/article/20140606215142&#4…-s-homekit

The Chief Of Microsoft Research On Big Ideas, Failure, And Its New Skunkworks Group http://www.linkedin.com/today/post/article/20140607004133&#4…orks-group

The Chief Of Microsoft Research On Big Ideas, Failure, And Its New Skunkworks Group www.fastcoexist.com/3030164/the-chief-of-microsoft-research-…orks-group

How to be agile with your big data http://ht.ly/xv8wb

Data science vs the hunch: What happens when the figures contradict your gut instinct? www.zdnet.com/data-science-vs-the-hunch-what-happens-when-th…000030289/

How to be agile with your big data Agile methodology brings flexibility to the EDW and .…. http://www.linkedin.com/today/post/article/20140607004703&#4…th-existin

Data science vs the hunch: What happens when the figures contradict your gut instinct? http://www.linkedin.com/today/post/article/20140607004831&#4…t-instinct

Mathematicians Urge Colleagues To Refuse To Work For The NSA http://www.linkedin.com/today/post/article/20140607005210&#4…or-the-nsa

SEO And Other Web Marketing Techniques: Tools Or Tricks? http://www.linkedin.com/today/post/article/20140607005404&#4…-or-tricks

FORBES IS EXTREMELY WRONG: Five Reasons China Won’t Be A Big Threat To America’s Global Power http://www.linkedin.com/today/post/article/20140607010051&#4…obal-power