
A Lifeboat guest editorial

Richelle Ross is a sophomore at the University of Florida, focusing on statistics and data science. As a crypto consultant, she educates far beyond the campus. Her insight on the evolution and future of Bitcoin has been featured in national publications. Richelle writes for CoinDesk, LinkedIn, and Quora, providing analysis on Bitcoin’s evolving economy.


In 2003, I remember going to see my first IMAX 3D film, Space Station. My family was touring NASA at Cape Canaveral, Florida. The film was an inside view into life as an astronaut enters space. As the astronauts tossed M&Ms to each other in their new gravity-free domain, the other children and I gleefully reached our hands out to try and touch the candy as it floated towards us. I had never experienced anything so mind-blowing in my seven years of life. The first 3D film was released in 1922. Yet, surprisingly, flat entertainment has dominated screens in the 9½ decades that followed. Only a handful of films have been released in 3D—most of them animated. But now, we are gradually seeing a shift in how people experience entertainment. As methods evolve and as market momentum builds, it promises to be one of the most groundbreaking technologies of the decade. I foresee Virtual Reality reaching a point where our perception of virtual and real-life experiences becomes blurred—and eventually—the two become integrated.

Ever since pen was put to paper, and camera to screen, audiences have enjoyed being swept into other worlds. For those of us “dreamers,” being able to escape into these stories is one way we live through and expand our understanding of other times and places—even places that may not be accessible in our lifetimes. Virtual reality is the logical progression and natural evolution of these experiences.

I caught the VR bug after one of my Facebook contacts began posting about it and sharing 360-degree videos that were of no use to me unless I too had the headset. Having been a Samsung user for the last several years, I purchased the Samsung VR headset to understand what all the hype was about. Just as with my childhood experience visiting the space station, the VR introduction video sent me floating across the universe. But this time, it was much more compelling. I could turn my head in any direction and experience a vast heavenly realm in 3D vision tied to my own movements. Behind me was a large planet and in front were dozens of asteroids slowly moving by.

Similar to visiting the Grand Canyon, this is one of those novel things you really have to experience to appreciate. Within about ten seconds of trying it out, I was hooked. I realized that I was experiencing something with far greater potential than an amusement park roller coaster, yet I also recognized that any applications I might imagine barely scratch the surface. This unexpected adrenaline rush is what leads tinkerers to the imaginative leaps that push new technologies into the decades ahead.

Video games are probably the first industry everyone thinks of as being affected by this new paradigm. I immediately thought about the Star Wars franchise with its ever-expanding universe. It will be a pretty exciting day when you can hold a lightsaber hilt that comes to life when you wear a headset and allows you to experience that universe from your living room. You could even wear a sensor-equipped body suit that lets you feel little zaps or vibrations during gameplay. With more connected devices, the possibility of Li-Fi replacing Wi-Fi and so on, video games are just scratching the surface.

I discussed what the future of VR could offer with Collective Learning founder Dan Barenboym. We explored various difficulties that impede market adoption. Barenboym was an early enthusiast of virtual reality, having worked with a startup that plans to deploy full-body scanners that give online life to gamers. The project began long before the film Avatar. Barenboym suggests ways that this would improve online shopping by allowing people to see their avatar with their own personal measurements in various outfits. This doesn’t have to be limited to at-home experiences though. Dan suggests that instead of walking into the boutique changing room, you walk into one with mirrors connected to VR software. Your reflection ‘tries on’ different virtual outfits before you pull your favorite one off the store rack.

We also discussed the current obstacles to VR, like the headset itself, which is a hindrance in some respects as it is a bit uncomfortable to wear for prolonged use. The other looming issue is money. There are many ideas similar to the ones we brainstormed, but startups may struggle to get off the ground without sufficient funding. The Oculus Rift is one great example of how crowdfunding can help entrepreneurs launch their ideas. It is easier than ever before to share and fund great ideas through social networking.

Facebook creator Mark Zuckerberg shared his own vision in 2014 after acquiring Oculus VR. Zuckerberg eloquently summarized where we’re headed:

“Virtual reality was once the dream of science fiction. But the internet was also once a dream, and so were computers and smartphones. The future is coming and we have a chance to build it together.”

What could this mean for the social networking that Zuckerberg pioneered? I’d venture to say the void of a long-distance relationship may be eased with VR immersion that allows you to be with your family at the click of a button. You could be sitting down in your apartment in the U.S. but, with the help of a 360-degree camera, look around at the garden that your mother is tending in the U.K. The same scenario could be applied to a classroom or business meeting. We already have global and instant communication, so VR will serve to add an enriched layer to these interactions.

The concept of reality itself is probably the biggest factor that makes virtual reality so captivating. Reality is not an objective experience. Each of us has a perspective of the world that is colored by our childhood experiences, personality, and culture. Our inner dialogues, fantasies of who we want to become, and areas of intelligence determine so much of what we’re able to accomplish and choose to commit to outside of ourselves. Michael Abrash describes how VR works with our unconscious brain perceptions to make us believe we’re standing on the edge of a building that isn’t really there. At a conscious level, we accept that we are staring at a screen, but our hearts still race—based on an unconscious perception of what is happening. Tapping into this perception-changing part of our brain allows us to experience reality in new ways.

As VR becomes more mainstream and is incorporated into all areas of our lives, such as online shopping, socializing, education, and recreation, the degrees of separation from the real world that society applies to it will lessen. Long-term, the goal for VR is to let us engage any of our senses and body parts. We should see continued improvements in the graphics and interaction capabilities of VR, allowing these experiences to feel as real as they possibly can.

One can only imagine the new vistas this powerful technology will open—not just for entertainment, but for education, medicine, working in hazardous environments, and controlling machines at a distance. Is every industry planning to incorporate the positive potential of virtual reality? If not, it certainly should consider it. As long as we pay attention to present-day needs and issues, engineering virtual reality into the Internet of Things promises to be a fantastic venture.

Author’s Note:

Feedback from Lifeboat is important. I’ll be back from time to time. Drop me a note on the comment form, or better yet, add your comment below. Until then, perhaps we will meet in the virtual world.
— RR

As recently as 50 years ago, psychiatry lacked a scientific foundation, the medical community considered mental illness a disorder of the mind, and mental patients were literally written off as “sick in the head.” A fortunate turn in progress has yielded today’s modern imaging devices, which allow neuroscientists and psychiatrists to examine the brain of an individual suffering from a mental disorder and provide the best treatment options. In a recent interview, Columbia University Psychiatry Chair Dr. Jeffrey Lieberman stated that new research into understanding the mind is growing at an accelerated pace.


Lieberman noted that, just as Galileo couldn’t prove heliocentrism until he had a telescope, psychiatry lacked the technological sophistication, tools, and instruments necessary to get an understanding of the brain until the 1950s. It wasn’t until the advent of psychopharmacology and neuroimaging, he said, that researchers could look inside the so-called black box that is the brain.

“(It began with) the CAT scan, magnetic resonance imaging (MRI) systems, positron emission tomography (PET scans) and then molecular genetics. Most recently, the burgeoning discipline of neuroscience and all of the methods within, beginning with molecular biology and progressing to optogenetics, this capacity has given researchers the ability to deconstruct the brain, understand its integral components, its mechanisms of action and how they underpin mental function and behavior,” Lieberman said. “The momentum that has built is almost like Moore’s law with computer chips, (and) you see this increasing power occurring with exponential sort of growth.”

Specifically, the use of MRIs and PET scans has allowed researchers to study the actual functional activity of different circuits and regions of the brain, Lieberman noted. Further, PET scans provided a look at the chemistry of the brain, which has allowed for the development of more sophisticated pathological theories. These measures, he said, were used to develop treatments while also allowing measurement of the effectiveness of both medication-based therapies and psychotherapies.

As an example, Lieberman cited the use of imaging in the treatment of post-traumatic stress disorder (PTSD). The disorder, a hyperarousal that chronically persists even in the absence of threatening stimulation, is treated through a method called desensitization. Over time, researchers have been able to fine-tune the desensitization therapies and treatments by accessing electronic images of the brain, which can show if there’s been a reduction in the activation of the affected amygdala.

Lieberman noted that despite progress in this area, technology has not replaced interaction with the individual patient; however, as technology continues to evolve, he expects the diagnoses of mental disorders to be refined.

“By the use of different technologies including genetics (and) imaging, including electrophysiological assessments, which are kind of EEG based, what we’ll have is one test that can confirm conditions that were previously defined by clinical description of systems,” Lieberman said. “I think, of all the disciplines that will do this, genetics will be the most informative.”

Just as genetics is currently used to diagnose cancer using anatomy and histology, Lieberman said the expanding field is helping researchers distinguish mental illness in individuals with certain genetic mutations. He expects that in the future, doctors will use “biochips” to routinely screen patients and provide a targeted therapy against the gene or gene product. These chips will have panels of genes known to be potentially associated with the risk for mental illness.

“Someone used the analogy of saying the way we treat depression now is as if you needed to put coolant into your car. Instead of putting it into the radiator, you just dump it on the engine,” he said. “So genetics will probably be the most powerful method to really tailor to the individual and use this technique of precision and personalized medicine.”

Lieberman also sees additional promise in magnetic stimulation, deep brain stimulation through the surgical implanting of electrodes, and optogenetics. Though he has plenty of optimism for these and other potential treatments for mental illness, much of their continued growth may hinge on government policy and budgets. Recent coverage of gun violence in the United States, and a public call for better means of screening individuals for mental health afflictions, may be an unfortunate catalyst in moving funding forward in this research arena. A recent article from the UK’s Telegraph discusses Google’s newfound interest in this research, with the former head of the US National Institute of Mental Health now in a position at Google Life Sciences.

“Science, technology and healthcare are doing very well, but when it comes to the governmental process, I think we’re in trouble,” he said. “A welcome development in this regard is President Obama’s Human Brain Initiative, which if you look at the description of it, (is) basically to develop new tools in neurotechnology that can really move forward in a powerful way of being able to measure the function of the brain. Not by single cells or single circuits, but by thousands or tens of thousands of cells and multiple circuits simultaneously. That’s what we need.”

Ask the average passerby on the street to describe artificial intelligence and you’re apt to get answers like C-3PO and Apple’s Siri. But for those who follow AI developments on a regular basis and swim just below the surface of the broad field, the idea that the foreseeable AI future might be driven more by Big Data than by big discoveries is probably not a huge surprise. In a recent interview with Data Scientist and Entrepreneur Eyal Amir, we discussed how companies are using AI to connect the dots between data and innovation.

Image credit: Startup Leadership Program Chicago

According to Amir, the ability to connect pieces of big data has quietly become a strong force in a number of industries. In advertising, for example, companies can now tease apart data to discern the basics of who you are, what you’re doing, and where you’re going, and tailor ads to you based on that information.

“What we need to understand is that, most of the time, the data is not actually available out there in the way we think that it is. So, for example I don’t know if a user is a man or woman. I don’t know what amounts of money she’s making every year. I don’t know where she’s working,” said Eyal. “There are a bunch of pieces of data out there, but they are all suggestive. (But) we can connect the dots and say, ‘she’s likely working in banking based on her contacts and friends.’ It’s big machines that are crunching this.”
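To make the idea of “connecting the dots” a bit more concrete, here is a minimal Python sketch of one common way weak, suggestive signals can be combined into a single estimate: a simple log-odds tally. The signal names, weights, and prior below are invented for illustration; real ad-tech systems learn far richer models from data, and nothing here is drawn from Amir’s own work.

```python
# Toy "connect the dots" example: combine several weak, suggestive signals
# into one probability that a user works in banking. The signals, weights,
# and prior are made up for illustration only.
import math

# Log-odds contribution of each hypothetical signal toward "works in banking".
SIGNAL_WEIGHTS = {
    "many_contacts_at_banks": 1.2,
    "follows_financial_news": 0.6,
    "lives_in_financial_district": 0.4,
}

def probability_banking(observed, prior=0.05):
    """Start from a prior probability, then add log-odds for each observed signal."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed)
    return 1 / (1 + math.exp(-log_odds))

# No single clue is conclusive, but together they move the estimate.
print(round(probability_banking(["many_contacts_at_banks",
                                 "follows_financial_news"]), 2))   # ~0.24
```

No individual fragment proves anything on its own; the point is that machines crunching many such fragments can still arrive at a usable guess, which is the kind of large-scale inference Amir describes.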

Amir used the example of image recognition to illustrate how AI is connecting the dots to make inferences and facilitate commerce. Many computer programs can now detect the image of a man on a horse in a photograph. Yet many of them miss the fact that, rather than an actual man on a horse, the image is actually a statue of a man on a horse. This lack of precision in the analysis of broad data is part of what’s keeping autonomous cars on the curb until the use of AI in commerce advances.

“You can connect the dots enough that you can create new applications, such as knowing where there is a parking spot available in the street. It doesn’t make financial sense to put sensors everywhere, so making those connections between a bunch of data sources leads to precise enough information that people are actually able to use,” Amir said. “Think about, ‘How long is the line at my coffee place down the street right now?’ or ‘Does this store have the shirt that I’m looking for?’ The information is not out there, but most companies don’t have a lot of incentive to put it out there for third parties. But there will be the ability to…infer a lot of that information.”

This greater ability to connect information and deliver more precise information through applications will come when everybody chooses to pool their information, said Amir. While he expects a fair bit of resistance to that concept, Amir predicts that there will ultimately be enough players working together to infer and share information; this approach may provide more benefits on an aggregate level, as compared to an individual company that might not have the same incentives to share.

As more data is collected and analyzed, another trend that Amir sees on the horizon is more autonomy being given to computers. Far from the dire predictions of runaway computers ruling the world, he sees a ‘supervised’ autonomy in which computers have the ability to perform tasks using knowledge that is out of reach for humans. Of course, this means developing a sense of trust and allowing the computer to make more choices for us.

“The same way that we would let our TiVo record things that are of interest to us, it would still record what we want, but maybe it would record some extras. The same goes with (re-stocking) my groceries every week,” he said. “There is this trend of ‘Internet of Things,’ which brings together information about the contents of your refrigerator, for example. Then your favorite grocery store would deliver what you need without you having to spend an extra hour (shopping) every week.”

On the other hand, Amir does have some potential concerns about the future of artificial intelligence, comparable to what’s been voiced by Elon Musk and others. Yet he emphasizes that it’s not just the technology we should be concerned about.

“At the end, this will be AI controlled by market forces. I think the real risk is not the technology, but the combination of technology and market forces. That, together, poses some threats,” Amir said. “I don’t think that the computers themselves, in the foreseeable future, will terminate us because they want to. But they may terminate us because the hackers wanted to.”

I administer the Bitcoin P2P discussion group at LinkedIn, a social media network for professionals. A frequent question posed by newcomers and even seasoned venture investors is: “How can I understand Bitcoin in its simplest terms?”

Engineers and coders offer answers that are anything but simple. Most focus on mining and the blockchain. In this primer, I will take an approach that is both familiar and accurate…

Terms/Concepts: Miners, Blockchain, Double-Spend

First, forget about everything you have heard about ‘mining’ Bitcoin. That’s just a temporary mechanism to smooth out the initial distribution and make it fair, while also playing a critical role in validating the transactions between individuals. Starting with this mechanism is a bad way to understand Bitcoin, because its role in establishing value, influencing trust or stabilizing value is greatly overrated.

The other two terms are important to a basic understanding of Bitcoin and why it is different, but let’s put aside jargon and begin with the familiar. Here are three common analogies for Bitcoin. #1 is the most typical impression pushed by the media, but it is least accurate. Analogy #3 is surprisingly on target.

1. Bitcoin as Gold

You can think of Bitcoin as a natural asset, but with a firm, capped supply. Like gold, the asset is a limited commodity that a great many people covet. But unlike gold, the supply is completely understood and no one organization or country has the potential to suddenly discover a rich vein and extract it from the ground.

2. Bitcoin as a Debit or Gift Card

Bitcoin is also a little like a prepaid debit card: you can exchange cash for it and then use it to buy things—either locally (subject to growing recognition and acceptance) or across the Internet. But here, too, there is a difference. A debit card must be loaded with a prepaid balance. That is, it must be backed by something else, whereas Bitcoin has an intrinsic value based on pure market supply and demand. A debit card is a vehicle to transmit or pay money—but Bitcoin is the money itself.

3. Bitcoin as a Foreign Currency

Perhaps the most accurate analogy for Bitcoin (or at least where it is headed), is as a fungible, convertible, bankable foreign currency.

Like a foreign currency, Bitcoin…

  • Can be easily exchanged for cash
  • Can be easily transmitted for purchases, sales, loans or gifts
  • Can be stored & saved in an online account or in your mattress (Advantage: it can also be stored in a smartphone or in the cloud—and it can be backed up!)
  • Has a value that floats with market conditions
  • Is backed by something even more trustworthy than a national government

Unlike the cash in your pocket or bank account, your Bitcoin wallet can be backed up with a mouse click. And, with proper attention to best practices, it will survive the failure of any exchange, bank or custodian. That is, with proper key management and the use of multisig, no one need lose money when a Bitcoin exchange fails. The trauma of past failures was exacerbated by a lack of tools, practices and user understanding. These things are all improving with each month.
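For readers curious how multisig protects a balance, here is a small, purely conceptual Python sketch of the 2-of-3 arrangement often recommended: a withdrawal is honored only when at least two of three independent key holders approve, so the failure or disappearance of any single party, including an exchange, need not cost you your funds. The key-holder names and the authorized function are invented for this illustration; real multisig is enforced by Bitcoin script and digital signatures, not by application code like this.

```python
# Conceptual 2-of-3 "multisig" rule: spending requires approval from at least
# two of the three designated key holders. This only illustrates the threshold
# logic; real Bitcoin multisig uses script and cryptographic signatures.
REQUIRED = 2
KEY_HOLDERS = {"you", "your_backup_device", "exchange"}   # hypothetical holders

def authorized(approvals, holders=KEY_HOLDERS, required=REQUIRED):
    """True when enough distinct, recognized key holders have approved."""
    return len(set(approvals) & holders) >= required

print(authorized({"you", "your_backup_device"}))   # True: exchange not needed
print(authorized({"exchange"}))                    # False: one key alone fails
```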

So, What’s the Big Deal?

So, Bitcoin is a lot like cash or a debit card. Why is this news? Bitcoin is a significant development, because the creator has devised a way to account for moving money between buyer and seller (or any two parties) that does not require any central bank, bookkeeper or authority to keep tabs. Instead, the bookkeeping is crowdsourced.

For example, let’s say that Alice wants to purchase a $4 item from Bob, an Internet merchant in another country.

a) Purchase and settlement with a credit card

With a credit card, wire transfer or check, Alice can pay $4 easily. But many things occur in the background and they represent an enormous transaction overhead. Alice must have an account at an internationally recognized bank. The bank must vouch for Alice’s balance or credit in real time and it must then substitute its own credit for hers. After the transaction, two separate banks at opposite ends of the world must not only adjust their client account balances, they must also settle their own affairs through an interbank-settlement process.

The two banks use different national currencies and are subject to different laws, oversight and reporting requirements. Over the course of the next few days, the ownership of gold, oil or reserve currencies is transferred between large institutions to complete the affairs of Alice’s $4 purchase.

b) Now, consider the same transaction with Bitcoin

Suppose that Alice has a Bitcoin wallet with a balance equal to $10. Let’s say that these characters represent $10 in value: 5E 7A 44 1B. (Bitcoin value is expressed as a much longer character string, but for this illustration we are keeping it short). Alice wants to buy a $4 item from Bob. Since she has only this one string representing $10, she must somehow get $6 in change.

Bitcoin Transaction

With Bitcoin, there is no bank or broker at the center of a transaction. The transaction is effected directly between Alice and Bob. But there is a massive, distributed, global network of bookkeepers standing ready to help Alice and Bob to complete the transaction. They don’t even know the identities of Alice or Bob, but they are like a bank and independent auditor at the same time…

If Alice were to give Bob her secret string (worth $10), and if Bob gives her a string of characters worth $6 as change, one might wonder what prevents Alice from double-spending her original $10 secret. But this can’t happen, because the miners and their distributed blockchain are the background fabric of the ecosystem. In the Bitcoin world, Alice is left with a brand new secret string that represents her new bank balance. It can be easily tested by anyone, anywhere. It is worth exactly $6.

This example is simplified and without underlying detail. But the facts, as stated, are accurate.
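For readers who want the bookkeeping spelled out, here is a minimal Python sketch of the Alice-and-Bob example above. It is deliberately not the real Bitcoin protocol: there are no signatures, scripts, miners or blocks, and the ledger dictionary, the spend function, and the short value strings simply stand in for the crowd-sourced record described in this article.

```python
# Toy ledger for the Alice/Bob story: one unspent "value string" worth $10 is
# consumed and replaced by two new strings, $4 for Bob and $6 change for Alice.
# A second attempt to spend the old string is rejected (no double spend).
import secrets

# The shared record of unspent strings: string -> (owner, dollar value).
unspent = {"5E7A441B": ("Alice", 10.00)}

def new_string():
    """Mint a fresh random string, standing in for a new transaction output."""
    return secrets.token_hex(4).upper()

def spend(ledger, old_string, payer, payee, amount):
    """Consume one unspent string; create a payee output and a change output."""
    if old_string not in ledger:
        raise ValueError("rejected: string already spent or unknown")
    owner, value = ledger[old_string]
    if owner != payer or amount > value:
        raise ValueError("rejected: payer does not own enough value")
    del ledger[old_string]                          # the old $10 string is gone
    pay_out, change_out = new_string(), new_string()
    ledger[pay_out] = (payee, amount)               # Bob's new $4 string
    ledger[change_out] = (payer, value - amount)    # Alice's new $6 string
    return pay_out, change_out

spend(unspent, "5E7A441B", "Alice", "Bob", 4.00)
print(unspent)                                      # two new strings: $4 and $6

try:
    spend(unspent, "5E7A441B", "Alice", "Bob", 4.00)   # reuse the old string
except ValueError as err:
    print(err)                                      # double spend is refused
```

In the real network, this check is performed not by one program but by thousands of independent nodes comparing notes on the blockchain, which is why neither Alice nor Bob has to trust a bank in the middle.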

Conclusion

For geeks, Bitcoin is the original implementation of a blockchain distributed ledger. Miners uncover a finite reserve of hidden coins while validating the transactions of strangers. As such, Bitcoin solves the double-spend problem and enables person-to-person transactions without the possibility of seizure or choke points.

But for the rest of us, Bitcoin offers a very low cost transaction network that will quickly replace checks and debit cards and may eventually replace cash, central banks, and regional monetary authorities. The safeties, laws and refund mechanisms offered by banks and governments can still be applied to Bitcoin for selected transactions (whenever both parties agree to oversight), but the actual movement of value will be easier, less expensive and less susceptible to 3rd party meddling.

  • Bitcoin is a distributed, decentralized and low cost payment network
  • It is adapted to a digital economy in a connected world: fluid & low friction, trusted, secure
  • More zealous proponents (like me) believe that it is gradually becoming the value itself (i.e., it needn’t be backed by assets, a promise of redemption, or a national government). In this sense, it is like a very stable foreign currency


Philip Raymond sits on Lifeboat’s New Money Systems Board and administers Bitcoin P2P, a LinkedIn community. He is co-chair of CRYPSA and host of The Bitcoin Event. He writes for Lifeboat, Quora, Sophos and Wild Duck.

Recently, I was named Most Viewed Writer on Bitcoin and cryptocurrency at Quora.com (writing under the pen name “Ellery”). I don’t typically mirror posts at Lifeboat, but a question posed today is relevant to my role on the New Money Systems board at Lifeboat. Here, then, is my reply to: “How can governments ban Bitcoin?”


Governments can enact legislation that applies to any behavior or activity. That’s what governments do—at least the legislative arm of a government. Such edicts distinguish activities that are legal from those that are banned or regulated.

You asked: “How can governments ban Bitcoin?” But you didn’t really mean to ask in this way. After all, legislators ban whatever they wish by meeting in a congress or committee and promoting a bill into law. In the case of a monarchy or dictatorship, the leader simply issues an edict.

So perhaps, the real question is “Can a government ban on Bitcoin be effective?”

Some people will follow the law, no matter how nonsensical, irrelevant, or contrary to the human condition. These are good people who have respect for authority and a drive toward obedience. Others will follow laws, because they fear the cost of breaking the rules and getting caught. I suppose that these are good people too. But, overall, for a law to be effective, it must address a genuine public need (something that cries out for regulation), it must not contradict human nature, and it must address an activity that is reasonably open to observation, audit or measurement.

Banning Bitcoin fails all three tests of a rational and enforceable law.

Most governments, including China and Italy, realize that a government ban on the possession of bits and bytes can be no more effective than banning feral cats from mating in the wild or legislating that basements shall remain dry by banning ground water from seeking its level.

So, the answer to the implied question is: A ban on Bitcoin could never be effective.

For this reason, astute governments avoid the folly of enacting legislation to ban Bitcoin. Instead, if they perceive a threat to domestic policy, tax compliance, monetary supply controls or special interests, they discourage trading by discrediting Bitcoin or raising concerns over safety, security, and criminal activity. In effect, a little education, misinformation or FUD (fear, uncertainty and doubt) can sometimes achieve what legislation cannot.

Reasons to Ban Bitcoin … a perceived threat to either:

  • domestic policy
  • tax compliance
  • monetary supply controls
  • special interests

Methods to Discourage Trading (rather than a ban)

  • Discredit Bitcoin (It’s not real money)
  • Raise concerns over safety & security
  • Tie its use to criminal activity

Avoiding both a ban—and even official discouragement

There is good news on the horizon. In a few countries—including the USA—central bankers, monetary czars and individual legislators are beginning to view Bitcoin as an opportunity rather than a threat. Prescient legislators are coming to the conclusion that a distributed, decentralized trading platform, like Bitcoin, does not threaten domestic policy and tax compliance—even if citizens begin to treat it as cash rather than a payment instrument. While a cash-like transition might ultimately undermine the federal reserve monetary regime and some special interests, this is not necessarily a bad thing—not even for the affected “interests”.

If Bitcoin graduates from a debit/transmission vehicle (backed by cash) to the cash itself, citizens will develop more trust and respect for their governments. Why? Because their governments will no longer be able to water down citizen wealth by running the printing press, nor borrow against unborn generations. Instead, they will need to collect every dollar that they spend or convince bond holders that they can repay their debts. They will need to balance their checkbooks, spend more transparently and wear their books on their sleeves. All good things.

Naturally, this type of change frightens entrenched lawmakers. The idea of separating a government from its monetary policy seems—well—radical! But this is only because we have not previously encountered a technology that placed government accountability and transparency on par with the private sector requirement to keep records and balance the books.

What backs your currency? Is it immune from hyperinflation?


Seven sovereign countries use the US Dollar as their main currency. Why? Because the governments of these countries were addicted to spending, which led to out-of-control inflation. They could not convince citizens that they could wean themselves of the urge to print bank notes with ever-increasing zeros. And so, by switching to the world’s reserve currency, they demonstrate a willingness to settle debts with an instrument that cannot be inflated by edict, graft or sloppy bookkeeping.

But here’s the problem: Although the US dollar is more stable than the Zimbabwe dollar, this is a contest in relative trust and beating the clock. The US has a staggering debt that is sustained only by our creditors’ willingness to bear the float. Like Zimbabwe, Argentina, Greece and Germany between the wars, our lawmakers raise the debt ceiling with a lot of bluster, but nary a thought.

Is there a way to instill confidence in a way that is both trustworthy and durable? Yes! —And it is increasingly likely that Bitcoin is the way to the trust and confidence that is so sorely needed.

Philip Raymond sits on the New Money Systems board. He is also co-chair of Cryptocurrency Standards Association and editor at A Wild Duck.

Ask just about anyone on the street to describe artificial intelligence and odds are, they’ll describe something resembling the futuristic science fiction robot they’ve seen in movies and television shows. However, according to Mathematician, Linguist and Artificial Intelligence Researcher Dr. András Kornai, artificial intelligence is a reality right now, and its impact can be seen every day.

“I’d say 35 percent of the total commerce taking place on Wall Street (right now) is driven by algorithms and it’s no longer driven by humans,” Kornai said. “This is not science fiction. (Artificial intelligence) is with us today.”

What we’ve seen so far in the application of algorithm-based artificial intelligence in the financial sector is just the tip of the iceberg, Kornai said. In fact, you don’t even have to own stock to be affected by it.

“I have designed algorithms that will (determine) your creditworthiness, meaning your creditworthiness is now determined by an algorithm,” he said. “We have substituted human decision-making capabilities in favor of better algorithms to pursue this, and we have given up a huge area of human competence, and money is just one aspect of it.”

Kornai points to advances in algorithm-based medical diagnostics, autonomous cars and military technology as some other areas where artificial intelligence is already at work and poised for further growth. While that growth is presented as a good thing, he believes the subtle infiltration of AI has many people missing the larger picture.

“We are seeing an uptick in medical decisions by algorithms and I’m not opposed to this, as it’s important to have the best possible information in the medical world. And in 10 or 15 years autonomous vehicles will be a big deal,” Kornai said. “In military technology, drones are generally human controlled, but there is intense research toward autonomous ground or air vehicles that will work even if someone is trying to cut off their communication. This is not the future, this is here now.”

According to Kornai, since algorithms are based on statistics, the problem with algorithm-based advances in those areas is the level of error that is inherent to the system. That built-in error may not be able to cause bodily harm, he said, but it can still cause havoc to humanity as a whole.

“A certain amount of error is built into the system in every level of AI. Things work on a statistical basis and they have errors but, on the whole, that’s innocent,” he said. “Algorithms are not capable of hurting people directly. But once it comes to money or it comes to your health or your legal standing, (the potential for errors) is becoming increasingly serious.”

In spite of most people’s image of the future of artificial intelligence, that danger is significantly different than the perils depicted on the big screen, Kornai said. To illustrate that point, he highlighted the gap between algorithmic AI and the state of robotics. While technology has already developed a chess algorithm that can beat the best chess players in the world, a ping-pong playing robot that can beat the world’s best table tennis player has yet to materialize.

“The primary worry is everyday, ubiquitous algorithms, the kind of algorithms that are already around us, posing huge damage,” Kornai said. “This isn’t the Terminator coming along and killing humans. That’s more science fictional.”

Looking to the future, Kornai sees AI making the biggest inroads in the business world. Again, he noted that use of those everyday algorithms may not be widely noticed, but their impact will be significant.

“In the business world today, it’s much easier to start a company and those companies will increasingly be driven by AI,” he said. “Eventually, AI will play a bigger role in the boardroom. It may not be visible to the man on the street, but it will be very visible to the Fortune 500.”

That said, however, there are still broader risks ahead as AI advances, and Kornai said he generally agrees with the concerns that have been voiced of late by Hawking, Gates, Musk and others. Those perils might not jibe with Hollywood’s idea of them, but the effects will still be notable.

“These guys see what’s going on and are doing some far-sighted (thinking). Far-sighted is not science fictional,” Kornai said. “Far-sighted is thinking ahead maybe 10, 15 or 25 years ahead. We’re not talking about affecting our grandchildren, but things that will affect us and increasingly affect our children and grandchildren.”


“The nature of work, employment, jobs, and economics will have to change over the next 35 years, or the world will face massive unemployment by 2050. This was a key conclusion of the Future Work/Technology 2050 study published in the “2015−16 State of the Future.”


These days, it’s not hard to find someone predicting that robots will take over the world and that automation could one day render human workers obsolete. The real debate is over whether the benefits outweigh the risks. Automation Expert and Author Dr. Daniel Berleant is one person who more often comes down on the side of automation.

There are many industries that are poised to be affected by the oncoming automation boom (in fact, it’s a challenge to think of one arena that will not in some minimal way be affected). “The government is actually putting quite a bit of money into robotic research for what they call ‘cooperative robotics,’” Berleant said. “Currently, you can’t work near a typical industrial robot without putting yourself in danger. As the research goes forward, the idea is (to develop) robots that become able to work with people rather than putting them in danger.”

While many view industrial robotic development as a menace to humanity, Berleant tends to focus on the areas where automation can be a benefit to society. “The civilized world is getting older and there are going to be more old people,” he said. “The thing I see happening in the next 10 or 20 years is robotic assistance to the elderly. They’re going to need help, and we can help them live vigorous lives and robotics can be a part of that.”

Berleant also believes that food production, particularly in agriculture, could benefit tremendously from automation. And that, he says, could have a positive effect on humanity on a global scale. “I think, as soon as we get robots that can take care of plants and produce food autonomously, that will really be a liberating moment for the human race,” Berleant said. “Ten years might be a little soon (for that to happen), maybe 20 years. There’s not much more than food that you need to survive and that might be a liberating moment for many poor countries.”

Berleant also cites the automation that’s present in cars, such as anti-lock brakes, self-parking ability and the nascent self-driving car industry, as just the tip of the iceberg for the future of automobiles. “We’ve got the technology now. Once that hits, and it will probably be in the next 10 years, we’ll definitely see an increase in the autonomous capabilities of these cars,” he said. “The gradual increase in intelligence in the cars is going to keep increasing and my hope is that fully autonomous cars will be commonplace within 10 years.”

Berleant says he can envision a time when the availability of fleets of on-demand, self-driving cars reduces the need for automobile ownership. Yet he’s also aware of the potential effects of that reduced car demand on the automobile manufacturing industry; however, he views the negative effect created by an increase in self-driving cars as outweighed by the potential time-saving benefits and potential improvements in safety.

“There is so much release of human potential that could occur if you don’t have to be behind the wheel for the 45 minutes or hour a day it takes people to commute,” Berleant said. “I think that would be a big benefit to society!”

His view of the potential upsides of automation doesn’t mean that Berleant is blind to the perils. The risks of greater productivity from automation, he believes, also carry plenty of weight. “Advances in software will make human workers more productive and powerful. The flipside of that is when they actually improve the productivity to the point that fewer people need to be employed,” he said. “That’s where the government would have to decide what to do about all these people that aren’t working.”

Caution must also be taken in military AI and automation, where we have already made major progress. “The biggest jump I’ve seen (in the last 10 years) is robotic weaponry. I think military applications will continue to increase,” Berleant said. “Drones are really not that intelligent right now, but they’re very effective and any intelligence we can add to them will make them more effective.”

As we move forward into a future increasingly driven by automation, it would seem wise to invest in technologies that provide more benefits to society (i.e., increased wealth, individual potential, and access to the basic necessities), and to slowly and cautiously (or not at all) develop those automated technologies that pose the greatest threat to large swaths of humanity. Berleant and other like-minded researchers seem to be calling for progressive common sense over a desire to simply prove that any automation (autonomous weapons being the current hot controversy) can be achieved.


Fear of scientists “playing god” is at the centre of many a plot line in science fiction stories. Perhaps the latest popular iteration of the story we all love is Jurassic World (2015), a film I find interesting only for the tribute it paid to the original Michael Crichton novel and movie Jurassic Park.

Full op-ed from h+ Magazine on 7 October 2015 http://hplusmagazine.com/2015/10/07/opinion-synthetic-biolog…f-mankind/

In Jurassic Park, a novel devoted to the scare of genetic engineering when biotech was new in the 1990s, the character of John Hammond says:

“Would you make products to help mankind, to fight illness and disease? Dear me, no. That’s a terrible idea. A very poor use of new technology. Personally, I would never help mankind.”

What the character is referring to is the lack of profit in actually curing diseases and solving human needs, and the controversy courted just by trying to get involved in such development. The goal to eradicate poverty or close the wealth gap between rich and poor nations offers no incentive for a commercial company.

Instead, businesses occupy themselves with creating entertainment, glamour products and perfume, new pets, and other superfluities that biotech can inevitably offer. This way, the companies escape not only moral chastisement for failing to share their technology adequately or make it freely available, but they can also attach whatever price tag they want without fear of controversy.

It is difficult for a well-meaning scientist or engineer to push society towards greater freedom and equality in a single country. It is even harder for such a professional to effect a great change over the whole world or improve the human condition the way transhumanists, for example, have intended.

Although discovery and invention continue to stun us all on an almost daily basis, such things do not happen as quickly or in as utilitarian a way as they should. And this lack of progress is deliberate. As the agenda is driven by businessmen who adhere to the times they live in, driven more by the desire for wealth and status than helping mankind, the goal of endless profit directly blocks the path to abolish scarcity, illness and death.

Today, J. Craig Venter’s great discoveries of how to sequence or synthesize entire genomes of living biological specimens in the field of synthetic biology (synthbio) represent a greater power than the hydrogen bomb. It is a power we must embrace. In my opinion, these discoveries are certainly more capable of transforming civilization and the globe for the better. In Life at the Speed of Light (2013), that is essentially Venter’s own thesis.

And contrary to science fiction films, the only threat from biotech is that humans will not use it adequately and quickly. Business leaders are far more interested in profiting from people’s desire for petty products, entertainment and glamour than in curing cancer or creating unlimited resources to feed civilization. But who can blame them? It is far too risky for someone in their position to commit to philanthropy rather than stay a step ahead of their competitors.

Even businessmen who later go into philanthropy do very little other than court attention in the press and polish the progressive image of the company. Of course, transitory deeds like giving food or clean water to Africans will never actually count as developing civilization and improving life on Earth, when there are far greater actions that can be taken instead.

It is conspicuous that so little has been done to develop the industrial might of poor countries, where schoolchildren must still live and study without even a roof over their heads. For all the unimaginable destruction that our governments and their corporate sponsors unleash on poor countries with bombs or sanctions when they are deemed to be threatening, we see almost no good being done with the same scientific muscle in poor countries. Philanthropists are friendly to the cause of handing out food or money to a few hungry people, but say nothing of giving the world’s poor the ability to possess their own natural resources and their own industries.

Like our bodies, our planet is no longer a sufficient vehicle for human dreams and aspirations. The biology of the planet is too inefficient to support the current growth of the human population. We face the prospect of eventually perishing as a species if we cannot repair our species’ oft-omitted disagreements with nature over issues of sustainability, congenital illness and our refusal to submit to the cruelties of natural selection from which we evolved.

Once we recognize that the current species are flawed, we will see that only by designing and introducing new species can suffering, poverty and the depletion of natural resources be stopped. Once we look at this option, we find already a perfect and ultimately moral solution to the threats of climate change, disease, overpopulation and the terrible scarcity giving rise to endless injustice and retaliatory terrorism.

The perfect solution could only be brought to the world by a heroic worker in the fields of biotech and synthetic biology. Indeed, this revolution may already be possible today, but fear is sadly holding back the one who could make it happen.

Someone who believes in changing the human animal with technology must believe in eradicating poverty, sickness and injustice with technology. For all our talk of equality and human rights, the West seems determined to prevent poorer countries from possessing their own natural resources, a right guaranteed by the principles of modernization and industrialization that appears to have been forgotten. Instead, we prefer to watch them being nursed by the richer countries’ monopolies, technology, and workers who are there cultivating, extracting, refining, or buying all their resources for them.

So, quite contrary to the promises of modernity, we have replaced the ideal of the industrialization of poor states with instead the vision of refugee camps, crude water wells, and food aid delivered by humanitarian workers to provide only temporary relief. In place of a model of development that was altruistic and morally correct, we instead glorify the image of non-Westerners as primitives who are impossible to help yet still we try.

The world’s poor have become not the focus of attention aimed at helping humanity, but props for philanthropists to make themselves look noble while doing nothing to truly help them. What we should turn to is not a return to the flawed UN development agendas of the 1970s, but a new model entirely, driven by people instead of governments and UN agencies.

It is high time that we act to help mankind altruistically, rather than a select few customers. The engineers and scientists of the world need to abandon the search for profit, if only for a moment. We should call on them to turn their extraordinary talent to the absolute good of abolishing poverty and scarcity. If they do not do this, we will talk about direct action to break free the scientific gifts they refused to share.

We live in courageous times. These are times of whistle-blowers, lone activists for the truth, and lone scientist-entrepreneurs who must be praised even if our profit-driven culture stifles their great works. And although we live in courageous times, we seem not yet brave enough to take real action to overcome the human disaster.

###

Synthetic biology image from https://www.equipes.lps.u-psud.fr/TRESSET/research8.html

(A) Enclosure of three red-fluorescent 200-nm spheres inside a “giant” liposome labeled with DiO. A wideband ultraviolet excitation filter was used for the simultaneous observation of these two differently stained species. Images were digitally postprocessed to balance the colors and to adjust their brightness to an equal level. (B) Trajectories of the particles. They were free to move but did not pass through the membrane. (C) GFP entrapped by a “giant” liposome. To get rid of noncaptured proteins, the solution was filtered by dialysis in such a way that the fluorescence background level became negligible with respect to the liposome interior. (D) Fluorescence photographs of a λ-DNA-loaded liposome. λ-DNA was stained with SYBR Green, while DiI (red emission) was incorporated into the liposome membrane. The liposome was observed through a narrow-band blue excitation filter (suitable for SYBR Green). (E) Same as previously with a wideband green excitation filter (suitable for DiI). Because of a low fluorescence response, part D was digitally enhanced in terms of brightness and contrast. In comparison, part E was darkened to present a level similar to part D. These pictures were taken at an interval of ~1 s, just the time to switch the filters. (F) Fluorescence picture of λ-DNA-loaded liposomes. Green dots stand for λ-DNA molecules, and lipids are labeled in red. A wideband blue excitation filter was used for this bicolor imaging, and a high-sensitivity color CCD camera captured it. [Anal. Chem. 77 (2005) 2795]