
Biomimetic insights for machine consciousness

A diagrammatic explanation of how machine consciousness might be feasible.


About 20 years ago I gave my first talk on how to achieve consciousness in machines, at a World Future Society conference, and went on to discuss how we would co-evolve with machines. I’ve lectured on machine consciousness hundreds of times but never produced any clear slides that explain my ideas properly. I thought it was about time I did.

My belief is that today’s deep neural networks, which use feed-forward processing with backpropagation training, cannot become conscious. No digital algorithmic neural network can, even though such networks can certainly produce extremely good levels of artificial intelligence. By contrast, nature also uses neurons, yet it produces conscious machines such as humans easily. I think the key difference is not just that nature uses analog adaptive neural nets rather than digital processing (an insight I believe Hans Moravec was the first to offer, and one I readily accepted), but also that nature uses large groups of these analog neurons incorporating feedback loops. Those loops act both as a sort of short-term memory and as a way of buying time to sense the sensing process as it happens, a mechanism that can explain consciousness. That feedback is critically important in the emergence of consciousness IMHO. If the neural-network AI people stop barking up the barren backprop tree and start climbing the feedback tree, we could have conscious machines in no time; but Moravec is still probably right that these need to be analog to enable true real-time processing, as opposed to a simulation of it.
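To make the feedback idea concrete, here is a minimal toy sketch of my own (not from the slides, and nothing like a real analog neuron): a unit with a self-feedback loop retains a decaying trace of past input, a crude short-term memory, while a feed-forward unit’s output depends only on the present input.

```python
def feedforward_step(x, w=1.0):
    # A feed-forward unit: output is a function of the current input alone.
    return w * x

def feedback_step(x, state, w_in=1.0, w_fb=0.5):
    # A unit with a feedback loop: output mixes the current input with the
    # unit's own previous output, so earlier inputs linger in the state.
    return w_in * x + w_fb * state

inputs = [1.0, 0.0, 0.0, 0.0]  # a single pulse, then silence
state = 0.0
trace = []
for x in inputs:
    state = feedback_step(x, state)
    trace.append(state)

print(trace)                                   # pulse decays: [1.0, 0.5, 0.25, 0.125]
print([feedforward_step(x) for x in inputs])   # no memory:   [1.0, 0.0, 0.0, 0.0]
```

The feedback unit still “knows about” the pulse three steps after it has gone; the feed-forward unit has already forgotten it. That lingering trace is the kind of mechanism I mean when I say feedback provides time to sense the sensing process.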

I may be talking nonsense, of course, but here are my thoughts, finally explained as simply and clearly as I can. These slides illustrate only the simplest forms of consciousness. Obviously our brains are highly complex and have evolved many higher-level architectures, control systems, complex senses and communication, but I think the basic foundations of biomimetic machine consciousness can be achieved as follows:

Read more

Here’s Why AI Can’t Solve Everything

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity.

Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as “AI solutionism”. This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity’s problems.

Read more

[1805.03035] Time travel in vacuum spacetimes

The possibility of time travel through the geodesics of vacuum solutions in first order gravity is explored. We present explicit examples of such geometries, which contain degenerate as well as nondegenerate tetrad fields that are sewn together continuously over different regions of the spacetime.

These classical solutions to the field equations satisfy the energy conditions.

Read more

Google and Coursera launch a new machine learning specialization

Over the last few years, Google and Coursera have regularly teamed up to launch a number of online courses for developers and IT pros. Among those was the Machine Learning Crash Course, which provides developers with an introduction to machine learning. Now, building on that, the two companies are launching a machine learning specialization on Coursera. This new specialization, which consists of five courses, has an even more practical focus.

The new specialization, called “Machine Learning with TensorFlow on Google Cloud Platform,” has students build real-world machine learning models. It takes them from setting up their environment to learning how to create and sanitize datasets to writing distributed models in TensorFlow, improving the accuracy of those models and tuning them to find the right parameters.

As Google’s Big Data and Machine Learning Tech Lead Lak Lakshmanan told me, his team heard that students and companies really liked the original machine learning course but wanted an option to dig deeper into the material. Students wanted to know not just how to build a basic model but also how to then use it in production in the cloud, for example, or how to build the data pipeline for it and figure out how to tune the parameters to get better results.

Read more

Where Humans Meet Machines: Intuition, Expertise and Learning

Professor Daniel Kahneman was awarded a Nobel Prize for his work on the psychology of judgment and decision-making, as well as behavioral economics. In this age of human/machine collaboration and shared learning, IDE Director Erik Brynjolfsson asked Kahneman about the perils, as well as the potential, of machine-based decision-making. The conversation took place at a recent conference, The Future of Work: Capital Markets, Digital Assets, and the Disruption of Labor, in New York City. Some key highlights follow.


Erik Brynjolfsson: We heard today about algorithmic bias and about human biases. You are one of the world’s experts on human biases, and you’re writing a new book on the topic. What are the bigger risks — human or the algorithmic biases?

Daniel Kahneman: It’s pretty obvious that it would be human biases, because you can trace and analyze algorithms.

Read more

An AI Created New Doom Levels That Are as Fun as the Game’s Original Ones

The technical skills of programmer John Carmack helped create the 3D world of Doom, the first-person shooter that took over the world 25 years ago. But it was level designers like John Romero and American McGee who made the game fun to play. Designers who, today, might find their jobs threatened by the ever-growing capabilities of artificial intelligence.

One of the many reasons Doom became so incredibly popular was that id Software made tools available that let anyone create their own levels for the game, resulting in thousands of free ways to add to its replay value. First-person 3D games and their level design have advanced by leaps and bounds since the original Doom’s release, but the sheer volume of user-created content made it the ideal game for training an AI to create its own levels.

Researchers at the Politecnico di Milano university in Italy created a generative adversarial network for the task, which essentially uses two artificially intelligent algorithms working against each other to optimise the overall results. One algorithm was fed thousands of Doom levels which it analysed for criteria like overall size, enemy placement, and the number of rooms. It then used what it learned to generate its own original Doom levels.
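As a toy illustration of that adversarial setup (my own sketch; the researchers’ actual network operated on level maps, not single numbers), here two hand-derived gradient steps alternate: a logistic discriminator learns to score “real” data above generated data, while a one-parameter generator learns to fool it.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

REAL = 5.0          # the "real" data: a constant scalar sample
a, b = 0.0, 0.0     # discriminator parameters: D(x) = sigmoid(a*x + b)
g = 0.0             # the generator's entire "output": one learnable scalar
lr = 0.05

for _ in range(500):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * REAL + b)
    d_fake = sigmoid(a * g + b)
    # Gradients of -log(D(real)) - log(1 - D(fake)) w.r.t. a and b:
    grad_a = -(1 - d_real) * REAL + d_fake * g
    grad_b = -(1 - d_real) + d_fake
    a -= lr * grad_a
    b -= lr * grad_b

    # Generator step: move g so the discriminator scores it as real.
    d_fake = sigmoid(a * g + b)
    grad_g = -(1 - d_fake) * a   # gradient of -log(D(fake)) w.r.t. g
    g -= lr * grad_g

print(g)  # drifts toward REAL (5.0) as the two players compete
```

Alternating these two gradient steps is the essence of GAN training: the discriminator keeps sharpening its real-versus-fake boundary while the generator chases it, pulling the generated output toward the real data.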

Read more

This physicist’s ideas of time will blow your mind

Time feels real to people. But it doesn’t even exist, according to quantum physics. “There is no time variable in the fundamental equations that describe the world,” theoretical physicist Carlo Rovelli tells Quartz.

If you met him socially, Rovelli wouldn’t assault you with abstractions and math to prove this point. He’d “rather not ruin a party with physics,” he says. We don’t have to understand the mechanics of the universe to go about our daily lives. But it’s good to take a step back every once in a while.

“Time is a fascinating topic because it touches our deepest emotions. Time opens up life and takes everything away. Wondering about time is wondering about the very sense of our life. This is [why] I have spent my life studying time,” Rovelli explains.

Read more

Selfish Ledger: Google’s mass sociology experiment

Check out the internal Google film, “The Selfish Ledger”. This probably wasn’t meant to slip onto a public web server, and so I have embedded a backup copy below. Ping me if it disappears. I will locate a permanent URL.

This 8½ minute video is a lot deeper—and possibly more insidious—than it appears. Nick Foster may be the Anti-Christ, or perhaps the most brilliant sociologist of modern times. It depends on your vantage point, and your belief in the potential of user controls and cat-in-bag containment.

He talks of a species propelling itself toward “desirable goals” by cataloging, data mining, and analyzing the past behavior of peers and ancestors—and then using that data to improve the experience of each user’s future and perhaps even their future generations. But, is he referring to shared goals across cultures, sexes and incomes? Who controls the algorithms and the goal filters?! Is Google the judge, arbiter and God?

Consider these quotes from the video. Do they disturb you? The last one sends a chill down my spine. But, I may be overreacting to what is simply an unexplored frontier. The next generation in AI. I cannot readily determine if it ushers in an era of good or bad:

  • “Behavioral sequencing” (a phrase used throughout the video)
  • Viewing human behavior through a Lamarckian lens
  • “An individual is just a carrier for the gene. The gene seeks to improve itself and not its host.”
  • And [at 7:25]: “The mass multigenerational examination of actions and results could introduce a model of behavioral sequencing.”

There’s that odd term again: behavioral sequencing. It suggests that we are mice and that Google can help us to act in unison toward society’s ideal goals.

Today, Fortune Magazine described it this way: “Total and absolute data collection could be used to shape the decisions you make … The ledger would essentially collect everything there is to know about you, your friends, your family, and everything else. It would then try to move you in one direction or another for your or society’s apparent benefit.”

The statements could apply just as easily to the NSA as they do to Google. At least we are entering into a bargain with Google: we hand them data and they hand us numerous benefits (benefits that many users often overlook). Yet, clearly, this is heavy-duty stuff—especially for the company that knows everything about everyone. Watch it a second time. Think carefully about the power that Google wields.

Don’t get me wrong. I may be in the minority, but I generally trust Google. I recognize that I am raw material and not a client. I accept the tradeoff that I make when I use Gmail, web search, navigate to a destination or share documents. I benefit from this bargain as Google matches my behavior with improved filtering of marketing directed at me.

But, in the back of my mind, I hope for the day that Google implements Blind Signaling and Response, so that my data can only be used in ways that were disclosed to me—and that strengthen and defend that bargain, without subjecting my behavior, relationships and predilections to hacking, misuse, or accidental disclosure.


Philip Raymond sits on Lifeboat’s New Money Systems board. He co-chairs CRYPSA, hosts the Bitcoin Event, publishes Wild Duck and is keynote speaker at global Cryptocurrency Conferences. Book a presentation or consulting engagement.

Credit for snagging this video: Vlad Savov @ TheVerge