Over the last few years, Google and Coursera have regularly teamed up to launch a number of online courses for developers and IT pros. Among those was the Machine Learning Crash Course, which provides developers with an introduction to machine learning. Now, building on that, the two companies are launching a machine learning specialization on Coursera. This new specialization, which consists of five courses, has an even more practical focus.

The new specialization, called “Machine Learning with TensorFlow on Google Cloud Platform,” has students build real-world machine learning models. It takes them from setting up their environment to learning how to create and sanitize datasets to writing distributed models in TensorFlow, improving the accuracy of those models and tuning them to find the right parameters.

As Google’s Big Data and Machine Learning Tech Lead Lak Lakshmanan told me, his team heard that students and companies really liked the original machine learning course but wanted an option to dig deeper into the material. Students wanted to know not just how to build a basic model but also how to then use it in production in the cloud, for example, or how to build the data pipeline for it and figure out how to tune the parameters to get better results.
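The parameter-tuning step Lakshmanan mentions can be sketched in miniature. The snippet below is a hypothetical illustration, not course material: it grid-searches a single hyperparameter (the learning rate) for a toy linear model in plain numpy, keeping the value with the lowest validation loss. The course applies the same pattern to TensorFlow models at much larger scale.

```python
import numpy as np

# Hypothetical tuning sketch (not from the course): fit y = w*x + b by
# gradient descent and pick the learning rate via a validation set.
rng = np.random.default_rng(42)

# Synthetic dataset: y = 3x + 1 plus noise, split into train/validation.
x = rng.uniform(-1, 1, size=200)
y = 3 * x + 1 + rng.normal(0, 0.1, size=200)
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

def train(lr, steps=200):
    """Fit w, b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * x_tr + b - y_tr
        w -= lr * 2 * np.mean(err * x_tr)
        b -= lr * 2 * np.mean(err)
    return w, b

def val_loss(w, b):
    return float(np.mean((w * x_va + b - y_va) ** 2))

# Grid search: try several learning rates, keep the one that generalizes best.
results = {lr: val_loss(*train(lr)) for lr in (0.001, 0.01, 0.1, 0.5)}
best_lr = min(results, key=results.get)
print("best learning rate:", best_lr)
```

Too small a learning rate leaves the model undertrained in the step budget; the validation loss, not the training loss, is what the search should compare.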

Read more

Professor Daniel Kahneman was awarded a Nobel Prize for his work on the psychology of judgment and decision-making, as well as behavioral economics. In this age of human/machine collaboration and shared learning, IDE Director, Erik Brynjolfsson, asked Kahneman about the perils, as well as the potential, of machine-based decision-making. The conversation took place at a recent conference, The Future of Work: Capital Markets, Digital Assets, and the Disruption of Labor, in New York City. Some key highlights follow.



Erik Brynjolfsson: We heard today about algorithmic bias and about human biases. You are one of the world’s experts on human biases, and you’re writing a new book on the topic. What are the bigger risks — human or the algorithmic biases?

Daniel Kahneman: It’s pretty obvious that it would be human biases, because you can trace and analyze algorithms.

The technical skills of programmer John Carmack helped create the 3D world of Doom, the first-person shooter that took over the world 25 years ago. But it was level designers like John Romero and American McGee who made the game fun to play. And it is level designers who, today, might find their jobs threatened by the ever-growing capabilities of artificial intelligence.

One of the many reasons Doom became so incredibly popular was that id Software made tools available that let anyone create their own levels for the game, resulting in thousands of free ways to add to its replay value. First-person 3D games and their level design have advanced by leaps and bounds since the original Doom’s release, but the sheer volume of user-created content made it the ideal game for training an AI to create its own levels.

Researchers at the Politecnico di Milano university in Italy created a generative adversarial network for the task, which pits two neural networks against each other: a generator that produces candidate levels and a discriminator that tries to tell them from real ones, each improving the other. The system was fed thousands of Doom levels, which it analysed for criteria like overall size, enemy placement, and the number of rooms. It then used what it learned to generate its own original Doom levels.
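The adversarial loop at the heart of the technique can be shown in a toy form. The researchers trained deep networks on images of Doom levels; the sketch below is only a 1-D numpy stand-in for that setup, with a logistic-regression discriminator and an affine generator, to illustrate how the two models push against each other.

```python
import numpy as np

# Toy GAN sketch (illustrative only; the actual research used deep
# convolutional networks on Doom level images, not 1-D data).
rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: samples from N(4, 1) stand in for features of real levels.
    return rng.normal(4.0, 1.0, size=(n, 1))

gen = {"a": 1.0, "b": 0.0}    # generator: affine map of noise, x = a*z + b
disc = {"w": 0.0, "c": 0.0}   # discriminator: logistic regression score

def generate(n):
    z = rng.normal(0.0, 1.0, size=(n, 1))
    return gen["a"] * z + gen["b"], z

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_score(x):
    return sigmoid(disc["w"] * x + disc["c"])

lr = 0.05
for _ in range(2000):
    # Discriminator step: push scores on real data toward 1, on fakes
    # toward 0 (binary cross-entropy gradients w.r.t. w and c).
    xr, (xf, _) = real_batch(32), generate(32)
    pr, pf = d_score(xr), d_score(xf)
    disc["w"] -= lr * (np.mean((pr - 1) * xr) + np.mean(pf * xf))
    disc["c"] -= lr * (np.mean(pr - 1) + np.mean(pf))

    # Generator step: push discriminator scores on fakes toward 1
    # (non-saturating loss; chain rule through x = a*z + b).
    xf, z = generate(32)
    pf = d_score(xf)
    gen["a"] -= lr * np.mean((pf - 1) * disc["w"] * z)
    gen["b"] -= lr * np.mean((pf - 1) * disc["w"])

samples, _ = generate(1000)
print("generated mean:", round(float(samples.mean()), 2))
```

Over training, the generator's output distribution drifts toward the real data; in the Doom work, "real data" means thousands of user-made levels and the generator's output is a brand-new level layout.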

Read more

Time feels real to people. But it doesn’t even exist, according to quantum physics. “There is no time variable in the fundamental equations that describe the world,” theoretical physicist Carlo Rovelli tells Quartz.

If you met him socially, Rovelli wouldn’t assault you with abstractions and math to prove this point. He’d “rather not ruin a party with physics,” he says. We don’t have to understand the mechanics of the universe to go about our daily lives. But it’s good to take a step back every once in a while.

“Time is a fascinating topic because it touches our deepest emotions. Time opens up life and takes everything away. Wondering about time is wondering about the very sense of our life. This is [why] I have spent my life studying time,” Rovelli explains.

Read more

Check out the internal Google film, “The Selfish Ledger”. This probably wasn’t meant to slip onto a public web server, and so I have embedded a backup copy below. Ping me if it disappears. I will locate a permanent URL.

This 8½ minute video is a lot deeper, and possibly more insidious, than it appears. Nick Foster may be the Anti-Christ, or perhaps the most brilliant sociologist of modern times. It depends on your vantage point, and your belief in the potential of user controls and cat-in-bag containment.

He talks of a species propelling itself toward “desirable goals” by cataloging, data mining, and analyzing the past behavior of peers and ancestors—and then using that data to improve the experience of each user’s future and perhaps even their future generations. But, is he referring to shared goals across cultures, sexes and incomes? Who controls the algorithms and the goal filters?! Is Google the judge, arbiter and God?

Consider these quotes from the video. Do they disturb you? The last one sends a chill down my spine. But, I may be overreacting to what is simply an unexplored frontier. The next generation in AI. I cannot readily determine if it ushers in an era of good or bad:

  • “Behavioral sequencing” (a phrase used throughout the video)
  • Viewing human behavior through a Lamarckian lens
  • “An individual is just a carrier for the gene. The gene seeks to improve itself and not its host.”
  • And [at 7:25]: “The mass multigenerational examination of actions and results could introduce a model of behavioral sequencing.”

There’s that odd term again: behavioral sequencing. It suggests that we are mice and that Google can help us to act in unison toward society’s ideal goals.

Today, Fortune Magazine described it this way: “Total and absolute data collection could be used to shape the decisions you make … The ledger would essentially collect everything there is to know about you, your friends, your family, and everything else. It would then try to move you in one direction or another for your or society’s apparent benefit.”

The statements could apply just as easily to the NSA as they do to Google. At least we are entering into a bargain with Google: we hand them data, and they hand us numerous benefits (benefits that many users often overlook). Yet, clearly, this is heavy-duty stuff, especially for a company that knows everything about everyone. Watch it a second time. Think carefully about the power that Google wields.

Don’t get me wrong. I may be in the minority, but I generally trust Google. I recognize that I am raw material and not a client. I accept the tradeoff I make when I use Gmail, search the web, navigate to a destination or share documents. I benefit from this bargain as Google matches my behavior with improved filtering of the marketing directed at me.

But, in the back of my mind, I hope for the day that Google implements Blind Signaling and Response, so that my data can only be used in ways that were disclosed to me—and that strengthen and defend that bargain, without subjecting my behavior, relationships and predilections to hacking, misuse, or accidental disclosure.


Philip Raymond sits on Lifeboat’s New Money Systems board. He co-chairs CRYPSA, hosts the Bitcoin Event, publishes Wild Duck and is keynote speaker at global Cryptocurrency Conferences. Book a presentation or consulting engagement.

Credit for snagging this video: Vlad Savov @ TheVerge

WASHINGTON — The U.S. military got its first big taste of artificial intelligence with Project Maven. An Air Force initiative, it began more than a year ago as an experiment using machine learning algorithms developed by Google to analyze full-motion video surveillance.

The project has received high praise within military circles for giving operators in the field instant access to the type of intelligence that typically would have taken a long time for geospatial data analysts to produce.

Project Maven has whetted the military’s appetite for artificial intelligence tools. And this is creating pressure on the National Geospatial-Intelligence Agency to jump on the AI bandwagon and start delivering Maven-like products and services.

Read more

When you take a picture of a cat and Google’s algorithms place it in a folder called “pets,” with no direction from you, you’re seeing the benefit of image-recognition AI. The same technology is now used by doctors to diagnose diseases at a scale never before possible for humans alone.

Diabetic retinopathy, a complication of diabetes, is the fastest-growing cause of preventable blindness. Each of the more than 415 million people living with the disease risks losing their eyesight unless they have regular access to doctors.

In countries like India, there are simply too many patients for doctors to treat. There are 4,000 diabetic patients for every ophthalmologist in India, whereas the US has one for every 1,500 patients.

Read more

So much talk about AI and robots taking our jobs. Well, guess what, it’s already happening and the rate of change will only increase. I estimate that about 5% of jobs have been automated: both blue-collar manufacturing jobs and, this time, low-level white-collar jobs (think back office, paralegals, etc.). There’s a thing called RPA, or Robotic Process Automation, which is hollowing out back-office jobs at an alarming rate, using rules-based algorithms and expert systems. This will rapidly change with the introduction of deep learning algorithms into these “robot automation” systems, making them intelligent, capable of making intuitive decisions and therefore able to replace more highly skilled and creative jobs. So if we’re on an exponential curve, and we’ve managed to automate around 5% of jobs in the past six years, say, with a doubling every two years, that means by 2030 almost all jobs will be automated. Remember, starting from 5%, the exponential math runs 5, 10, 20, 40, 80, ~100%, with the doubling every two years.
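The back-of-envelope projection above is easy to make explicit. The snippet below simply iterates the author's premise (an assumed 5% of jobs automated as of 2018, doubling every two years); it is the paragraph's arithmetic, not a forecast grounded in data.

```python
# Iterate the author's assumptions: 5% automated in 2018, doubling every
# two years, capped at 100%.
share, year = 5.0, 2018
trajectory = {year: share}
while share < 100 and year < 2030:
    year += 2
    share = min(share * 2, 100.0)
    trajectory[year] = share
print(trajectory)
# Saturates at 100% by 2028 under these assumptions.
```

Note how quickly the curve saturates: under these assumptions the last doubling, from 40% to 80%, happens in a single two-year step, which is the "creeping up on people" dynamic the next paragraph describes.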

We are definitely going to need a basic income to prevent people (doctors, lawyers, drivers, teachers, scientists, manufacturers, craftsmen) from going homeless once their jobs are automated away. This will need to be worked out at the government level — the sooner the better, because exponentials have a habit of creeping up on people and then surprising society with the intensity and rapidity of the disruptive change they bring. I’m confident that humanity can and will rise to the challenges ahead, and it is well to remember that economics is driven by technology, not the other way around. Education, as usual, is definitely the key to meeting these challenges head on and in a fully informed way. My only concern is when governments will actually start taking this situation seriously enough to start taking bold action. There certainly is no time like the present.

Read more