
Meet Zoltan, the presidential candidate who drives a coffin

Excited to have a full feature on the BBC homepage about transhumanism and my growing presidential campaign. The piece discusses the Transhumanist Party, my speech at the World Bank, the Immortality Bus, and universal basic income:


Not many politicians running for the White House promise to end death. But not many politicians are Zoltan Istvan. Tim Maughan meets a man travelling America in a giant coffin-shaped bus to make his point.

Read more

Finland prepares universal basic income experiment

Pack a heavy coat, folks, we’re going to Finland. The Finnish Social Insurance Institution, also known as Kela, has begun work on a proposal that would guarantee a basic income to every citizen of the small Nordic nation. This system of a universal state-facilitated payment delivered to every Finnish person would transform the state’s welfare system and potentially provide a blueprint for other countries looking to build a different kind of economy.

Read more

Humanity on a Budget, or the Value-Added of Being ‘Human’

This piece is dedicated to Stefan Stern, who picked up on – and ran with – a remark I made at this year’s Brain Bar Budapest, concerning the need for a ‘value-added’ account of being ‘human’ in a world in which there are many drivers towards replacing human labour with ever smarter technologies.

In what follows, I assume that ‘human’ can no longer be taken for granted as something that adds value to being-in-the-world. The value needs to be earned; it can’t simply be inherited. For example, according to animal rights activists, ‘value-added’ claims to brand ‘humanity’ amount to an unjustified privileging of the human life-form, whereas artificial intelligence enthusiasts argue that computers will soon exceed humans at the (‘rational’) tasks that we have historically invoked to create distance from animals. I shall be more concerned with the latter threat, as it comes from a more recognizable form of ‘economistic’ logic.

Economics makes an interesting but subtle distinction between ‘price’ and ‘cost’. Price is what you pay upfront through mutual agreement to the person selling you something. In contrast, cost consists in the resources that you forfeit by virtue of possessing the thing. Of course, the cost of something includes its price, but typically much more – and much of it experienced only once you’ve come into possession. Thus, we say ‘hidden cost’ but not ‘hidden price’. The difference between price and cost is perhaps most vivid when considering large life-defining purchases, such as a house or a car. In these cases, any hidden costs are presumably offset by ‘benefits’, the things that you originally wanted — or at least approve after the fact — that follow from possession.

Now, think about the difference between saying, ‘Humanity comes at a price’ and ‘Humanity comes at a cost’. The first phrase suggests what you need to pay your master to acquire freedom, while the second suggests what you need to suffer as you exercise your freedom. The first position has you standing outside the category of ‘human’ but wishing to get in – say, as a prospective resident of a gated community. The second position already identifies you as ‘human’ but perhaps without having fully realized what you had bargained for. The philosophical movement of Existentialism was launched in the mid-20th century by playing with the irony implied in the idea of ‘human emancipation’ – the ease with which the Hell we wish to leave (and hence pay the price) morphs into the Hell we agree to enter (and hence suffer the cost). Thus, our humanity reduces to the leap out of the frying pan of slavery and into the fire of freedom.

In the 21st century, the difference between the price and cost of humanity is being reinvented in a new key, mainly in response to developments – real and anticipated – in artificial intelligence. Today ‘humanity’ is increasingly a boutique item, a ‘value-added’ to products and services that would otherwise be rendered, if not by actual machines, then by humans trying to match machine-based performance standards. Here optimists see ‘efficiency gains’ and pessimists ‘alienated labour’. In either case, ‘humanity comes at a price’ refers to the relative scarcity of what in the past would have been called ‘craftsmanship’. As for ‘humanity comes at a cost’, this alludes to the difficulty of continuing to maintain the relevant markers of the ‘human’, given both changes to humans themselves and improvements in the mechanical reproduction of those changes.

Two prospects are in the offing for the value-added of being human: either (1) to be human is to be the original with which no copy can ever be confused, or (2) to be human is to be the fugitive who is always already planning their escape as other beings catch up. In a religious vein, we might speak of these two prospects as constituting an ‘apophatic anthropology’, that is, a sense of the ‘human’ whose biggest threat is that it might be nailed down. This image was originally invoked in medieval Abrahamic theology to characterize the unbounded nature of divine being: God as the namer who cannot be named.

But in a more secular vein, we can envisage on the horizon two legal regimes, which would allow for the routine demonstration of the ‘value added’ of being human. In the case of (1), the definition of ‘human’ might come to be reduced to intellectual property-style priority disputes, whereby value accrues simply by virtue of showing that one is the originator of something of already proven value. In the case of (2), the ‘human’ might come to define a competitive field in which people routinely try to do something that exceeds the performance standards of non-human entities – and added value attaches to that achievement.

Either – or some combination – of these legal regimes might work to the satisfaction of those fated to live under them. However, what is long gone is any idea that there is an intrinsic ‘value-added’ to being human. Whatever added value there is, it will need to be fought for tooth and nail.

Artificial Intelligence Is A Big Part Of Your Life, Just Don’t Buy The Hollywood Hype

Ask just about anyone on the street to describe artificial intelligence and odds are they’ll describe something resembling the futuristic science fiction robots they’ve seen in movies and television shows. However, according to mathematician, linguist, and artificial intelligence researcher Dr. András Kornai, artificial intelligence is a reality right now, and its impact can be seen every day.

“I’d say 35 percent of the total commerce taking place on Wall Street (right now) is driven by algorithms and it’s no longer driven by humans,” Kornai said. “This is not science fiction. (Artificial intelligence) is with us today.”

What we’ve seen so far in the application of algorithm-based artificial intelligence in the financial sector is just the tip of the iceberg, Kornai said. In fact, you don’t even have to own stock to be affected by it.

“I have designed algorithms that will (determine) your creditworthiness, meaning your creditworthiness is now determined by an algorithm,” he said. “We have substituted human decision-making capabilities in favor of better algorithms to pursue this, and we have given up a huge area of human competence, and money is just one aspect of it.”
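
To make the quoted point concrete, here is a minimal, purely hypothetical sketch of the kind of statistical credit-scoring algorithm Kornai describes. The features, weights, and approval threshold below are invented for illustration and do not represent any real lender’s model.

```python
# Hypothetical sketch of algorithmic credit scoring (illustrative only).
# Features, weights, and threshold are invented; no real lender's model is shown.
import numpy as np

# Invented applicant features: income (thousands), debt-to-income ratio, years of credit history
applicant = np.array([55.0, 0.35, 7.0])

# Invented weights, assumed to have been fit to historical repayment data
weights = np.array([0.02, -3.0, 0.15])
bias = -0.5

# Logistic score in [0, 1]: the algorithm's estimate of creditworthiness
score = 1.0 / (1.0 + np.exp(-(weights @ applicant + bias)))
approved = score > 0.5  # decision threshold chosen by the lender

print(f"credit score: {score:.2f}, approved: {approved}")
```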

Kornai points to advances in algorithm-based medical diagnostics, autonomous cars and military technology as some other areas where artificial intelligence is already at work and poised for further growth. While that growth is presented as a good thing, he believes the subtle infiltration of AI has many people missing the larger picture.

“We are seeing an uptick in medical decisions by algorithms and I’m not opposed to this, as it’s important to have the best possible information in the medical world. And in 10 or 15 years autonomous vehicles will be a big deal,” Kornai said. “In military technology, drones are generally human controlled, but there is intense research toward autonomous ground or air vehicles that will work even if someone is trying to cut off their communication. This is not the future, this is here now.”

According to Kornai, since algorithms are based on statistics, the problem with algorithm-based advances in those areas is the level of error that is inherent to the system. That built-in error may not cause bodily harm directly, he said, but it can still wreak havoc on humanity as a whole.

“A certain amount of error is built into the system at every level of AI. Things work on a statistical basis and they have errors but, on the whole, that’s innocent,” he said. “Algorithms are not capable of hurting people directly. But once it comes to money or it comes to your health or your legal standing, (the potential for errors) is becoming increasingly serious.”
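
A back-of-the-envelope calculation, with figures assumed purely for illustration, shows why even a small statistical error rate matters once decisions about money, health, or legal standing are automated at scale:

```python
# Illustrative arithmetic only: the volume and accuracy figures are assumed.
decisions_per_year = 10_000_000  # assumed number of automated credit/medical/legal decisions
accuracy = 0.99                  # assumed per-decision accuracy of the algorithm

expected_errors = decisions_per_year * (1 - accuracy)
print(f"expected erroneous decisions per year: {expected_errors:,.0f}")  # about 100,000
```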

Despite most people’s image of the future of artificial intelligence, that danger is significantly different from the perils depicted on the big screen, Kornai said. To illustrate the point, he highlighted the gap between algorithmic AI and the state of robotics: while researchers have already developed a chess algorithm that can beat the best chess players in the world, a ping-pong-playing robot that can beat the world’s best table tennis player has yet to materialize.

“The primary worry is everyday, ubiquitous algorithms, the kind of algorithms that are already around us, doing huge damage,” Kornai said. “This isn’t the Terminator coming along and killing humans. That’s more science fictional.”

Looking to the future, Kornai sees AI making the biggest inroads in the business world. Again, he noted that use of those everyday algorithms may not be widely noticed, but their impact will be significant.

“In the business world today, it’s much easier to start a company and those companies will increasingly be driven by AI,” he said. “Eventually, AI will play a bigger role in the boardroom. It may not be visible to the man on the street, but it will be very visible to the Fortune 500.”

That said, there are still broader risks ahead as AI advances, and Kornai said he generally agrees with the concerns voiced of late by Hawking, Gates, Musk and others. Those perils might not jibe with Hollywood’s idea of them, but the effects will still be notable.

“These guys see what’s going on and are doing some far-sighted (thinking). Far-sighted is not science fictional,” Kornai said. “Far-sighted is thinking maybe 10, 15 or 25 years ahead. We’re not talking about affecting our grandchildren, but things that will affect us and increasingly affect our children and grandchildren.”

Global Scenarios and National Workshops to Address Future Work/Technology Dynamics are being scheduled by The Millennium Project | PRWeb

“The nature of work, employment, jobs, and economics will have to change over the next 35 years, or the world will face massive unemployment by 2050. This was a key conclusion of the Future Work/Technology 2050 study published in the ‘2015–16 State of the Future.’”

Read more

Does The Potential of Automation Outweigh The Perils?

These days, it’s not hard to find someone predicting that robots will take over the world and that automation could one day render human workers obsolete. The real debate is over whether the benefits outweigh the risks. Automation expert and author Dr. Daniel Berleant is one person who more often comes down on the side of automation.

There are many industries poised to be affected by the oncoming automation boom (in fact, it’s a challenge to think of one that won’t be affected in at least some small way). “The government is actually putting quite a bit of money into robotic research for what they call ‘cooperative robotics,’” Berleant said. “Currently, you can’t work near a typical industrial robot without putting yourself in danger. As the research goes forward, the idea is (to develop) robots that become able to work with people rather than putting them in danger.”

While many view industrial robotic development as a menace to humanity, Berleant tends to focus on the areas where automation can be a benefit to society. “The civilized world is getting older and there are going to be more old people,” he said. “The thing I see happening in the next 10 or 20 years is robotic assistance to the elderly. They’re going to need help, and we can help them live vigorous lives and robotics can be a part of that.”

Berleant also believes that food production, particularly in agriculture, could benefit tremendously from automation. And that, he says, could have a positive effect on humanity on a global scale. “I think, as soon as we get robots that can take care of plants and produce food autonomously, that will really be a liberating moment for the human race,” Berleant said. “Ten years might be a little soon (for that to happen), maybe 20 years. There’s not much more than food that you need to survive and that might be a liberating moment for many poor countries.”

Berleant also cites the automation that’s already present in cars, such as anti-lock brakes, self-parking ability and the nascent self-driving car industry, as just the tip of the iceberg for the future of automobiles. “We’ve got the technology now. Once that hits, and it will probably be in the next 10 years, we’ll definitely see an increase in the autonomous capabilities of these cars,” he said. “The intelligence in these cars is going to keep increasing and my hope is that fully autonomous cars will be commonplace within 10 years.”

Berleant says he can envision a time when the availability of fleets of on-demand, self-driving cars reduces the need for automobile ownership. He’s also aware of what that reduced demand could mean for the automobile manufacturing industry; even so, he views the downside of a shift to self-driving cars as outweighed by the time-saving benefits and the potential improvements in safety.

“There is so much release of human potential that could occur if you don’t have to be behind the wheel for the 45 minutes or hour a day it takes people to commute,” Berleant said. “I think that would be a big benefit to society!”

His view of the potential upsides of automation doesn’t mean that Berleant is blind to the perils. The risks of greater productivity from automation, he believes, also carry plenty of weight. “Advances in software will make human workers more productive and powerful. The flipside of that is when they actually improve the productivity to the point that fewer people need to be employed,” he said. “That’s where the government would have to decide what to do about all these people that aren’t working.”

Caution must also be taken with military AI and automation, where we have already made major progress. “The biggest jump I’ve seen (in the last 10 years) is robotic weaponry. I think military applications will continue to increase,” Berleant said. “Drones are really not that intelligent right now, but they’re very effective and any intelligence we can add to them will make them more effective.”

As we move forward into a future increasingly driven by automation, it would seem wise to invest in technologies that provide clear benefits to society (increased wealth, individual potential, and access to basic necessities), and to develop slowly and cautiously, or not at all, those automated technologies that pose the greatest threat to large swaths of humanity. Berleant and other like-minded researchers seem to be calling for progressive common sense over the desire simply to prove that any given form of automation (autonomous weapons being the current hot controversy) can be achieved.