
Why the Rise of AI Makes Human Intelligence More Valuable Than Ever

In the popular TV show Sherlock, visual depictions of our hero’s deductive reasoning often look like machine algorithms. And probably not by accident, given that this version of Conan Doyle’s detective processes tremendous amounts of observed data—the sort of minutiae that the average person tends to pass over or forget—more like a computer than a human.

Sherlock’s intelligence is both strength and limitation. His way of thinking is often bounded by an inability to intuitively understand social and emotional contexts. The show’s central premise is that Sherlock Holmes needs his friend John Watson to help him synthesize empirical data into human truth.

In Sherlock we see the analog for modern AI: highly performant learning machines that can achieve metacognitive results with the assistance of fully cognitive human partners. Machine intelligence does not by its nature make human intelligence obsolete. Quite the opposite, really—machines need human guidance.

Tech world debate on robots and jobs heats up

Are robots coming for your job?

Although technology has long affected the labor force, recent advances in artificial intelligence and robotics are heightening concerns about automation replacing a growing number of occupations, including highly skilled or “knowledge-based” jobs.

Just a few examples: self-driving technology may eliminate the need for taxi, Uber, and truck drivers; algorithms are playing a growing role in journalism; robots are greeting and informing shoppers in malls; and medicine is adopting robotic surgery and artificial intelligence to detect cancer and heart conditions.

The rise of the useless class

Historian Yuval Noah Harari makes a bracing prediction: just as mass industrialization created the working class, the AI revolution will create a new unworking class.

The most important question in 21st-century economics may well be: What should we do with all the superfluous people, once we have highly intelligent non-conscious algorithms that can do almost everything better than humans?

This is not an entirely new question. People have long feared that mechanization might cause mass unemployment. This never happened, because as old professions became obsolete, new professions evolved, and there was always something humans could do better than machines. Yet this is not a law of nature, and nothing guarantees it will continue to be like that in the future. The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking. The current scientific answer to this pipe dream can be summarized in three simple principles:

Nanoscale logic machines go beyond binary computing

(Phys.org)—Scientists have built tiny logic machines out of single atoms that operate completely differently than conventional logic devices do. Instead of relying on the binary switching paradigm like that used by transistors in today’s computers, the new nanoscale logic machines physically simulate the problems and take advantage of the inherent randomness that governs the behavior of physical systems at the nanoscale—randomness that is usually considered a drawback.

The team of researchers, Barbara Fresch et al., from universities in Belgium, Italy, Australia, Israel, and the US, has published a paper on the new nanoscale logic machines in a recent issue of Nano Letters.

“Our approach shows the possibility of a new class of tiny analog computers that can solve computationally difficult problems by simple statistical algorithms running in nanoscale solid-state physical devices,” coauthor Francoise Remacle at the University of Liege told Phys.org.
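The contrast drawn here, physically sampling a problem’s randomness rather than switching bits deterministically, can be loosely illustrated in software. The Python sketch below shows only the general statistical idea (estimating by random sampling what exact binary logic would have to enumerate exhaustively); it is a toy illustration of statistical computing in general, with a made-up predicate, not the algorithm or device reported in Nano Letters.

```python
import random

def exact_count(n_vars, predicate):
    """Deterministic 'binary logic' approach: enumerate every assignment."""
    return sum(predicate(tuple((i >> b) & 1 for b in range(n_vars)))
               for i in range(2 ** n_vars))

def monte_carlo_estimate(n_vars, predicate, samples=100_000):
    """Statistical approach: let randomness do the work and estimate the count."""
    hits = sum(predicate(tuple(random.randint(0, 1) for _ in range(n_vars)))
               for _ in range(samples))
    return hits / samples * 2 ** n_vars

# Toy predicate: "at least two-thirds of the bits are 1".
pred = lambda bits: sum(bits) * 3 >= 2 * len(bits)

print(exact_count(20, pred))                  # exhaustive: 2^20 evaluations
print(round(monte_carlo_estimate(20, pred)))  # approximate: 100,000 random samples
```

The exhaustive count visits all 2^20 assignments, while the sampler gets a close estimate from 100,000 random draws; the devices described above are interesting because they exploit physical randomness to perform this kind of sampling natively rather than in software.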

Artificial intelligence has a multitasking problem, and DeepMind might have a solution

Right now it’s easiest to think about an artificial intelligence algorithm as a specific tool, like a hammer. A hammer is really good at hitting things, but when you need a saw to cut something in half, it’s back to the toolbox. Need a face recognized? Train a facial recognition algorithm, but don’t ask it to recognize cows.

Alphabet’s AI research arm, DeepMind, is trying to change that with a new algorithm that can learn more than one skill. Algorithms that can learn multiple skills could make it far easier to add new languages to translators, remove bias from image recognition systems, or even have algorithms use existing knowledge to solve new, complex problems. The research, published in Proceedings of the National Academy of Sciences this week, is preliminary, since it only tests the algorithm on different Atari games, but it shows that multi-purpose algorithms are possible.

The problem DeepMind’s research tackles is called “catastrophic forgetting,” the company writes. If you train an algorithm to recognize faces and then try to train it again to recognize cows, it will forget faces to make room for all the cow knowledge. Modern artificial neural networks use millions of mathematical equations to calculate patterns in data, which could be the pixels that make up a face or the series of words that make up a sentence. These equations are interconnected, and some depend so heavily on others that even a slight tweak for a different task can make the whole network begin to fail. DeepMind’s new algorithm identifies and protects the equations most important for carrying out the original task, while letting the less important ones be overwritten.
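The mechanism described here matches what the PNAS paper calls elastic weight consolidation (EWC): parameters that mattered most to the old task are anchored by a quadratic penalty, while unimportant ones remain free to change. The NumPy sketch below is only a minimal illustration of that penalty idea, not DeepMind’s implementation; the `importance` array stands in for the per-parameter importance estimate (Fisher information) used in the paper, and all names and values here are hypothetical.

```python
import numpy as np

def ewc_penalty(theta, theta_old, importance, lam=1.0):
    """Quadratic penalty anchoring important parameters to their old-task values.

    theta      -- current parameters, being trained on the new task
    theta_old  -- parameters learned on the original task
    importance -- per-parameter importance (Fisher information in the paper)
    lam        -- how strongly the old task is protected
    """
    return 0.5 * lam * np.sum(importance * (theta - theta_old) ** 2)

def total_loss(new_task_loss, theta, theta_old, importance, lam=1.0):
    """Loss actually minimized: new-task loss plus the consolidation penalty."""
    return new_task_loss + ewc_penalty(theta, theta_old, importance, lam)

# Toy example: two parameters, the first one vital to the old task.
theta_old  = np.array([1.0, -2.0])
importance = np.array([10.0, 0.01])  # first weight protected, second nearly free
theta      = np.array([0.5, 3.0])    # candidate update driven by the new task

print(ewc_penalty(theta, theta_old, importance, lam=1.0))
```

Minimizing the combined loss pulls the heavily weighted first parameter back toward its old value while leaving the second one essentially free to serve the new task, which is the selective-protection behavior the article describes.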

Scientists Want to Build a Super-Fast, Self-Replicating Computer That “Grows as It Computes”

Scientists say it’s possible to build a new type of self-replicating computer that replaces silicon chips with processors made from DNA molecules, and it would be faster than any other form of computer ever proposed — even quantum computers.

Called a nondeterministic universal Turing machine (NUTM), the proposed device is predicted to execute all possible algorithms at once, taking advantage of the near-perfect self-replication DNA has demonstrated over billions of years.

The basic idea is that our current electronic computers are based on a finite number of silicon chips, and we’re fast approaching the limit for how many we can actually fit in our machines.
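The “all algorithms at once” claim rests on branching: at every choice point a nondeterministic machine follows every option, which a DNA computer would do physically by replicating molecules. As a rough, hypothetical illustration of that principle only, and not of the proposed NUTM itself, the Python sketch below replicates program state at each include/exclude decision of a subset-sum search, so every branch is tracked together rather than retried one at a time.

```python
def nondeterministic_subset_sum(numbers, target):
    """Explore every include/exclude choice by replicating partial states.

    Each state is a (running sum, chosen numbers) pair. At every number we
    keep a copy of each state that skips it and add a copy that takes it,
    mimicking how a branching machine would follow both options.
    """
    states = [(0, [])]
    for n in numbers:
        # "Replicate" every state: the originals skip n, the copies take it.
        states = states + [(total + n, picked + [n]) for total, picked in states]
    return [picked for total, picked in states if total == target]

# All subsets of [3, 7, 1, 8, 4] that sum to 11.
print(nondeterministic_subset_sum([3, 7, 1, 8, 4], target=11))
```

On a conventional CPU these replicated states still have to be processed one after another, which is exactly the limitation the DNA-based proposal aims to remove by carrying out the replication physically and in parallel.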

A Libertarian Transhumanist’s Take on the Future of Taxes

My new article for Psychology Today covers my federal audit and the coming day when technology may eliminate taxes altogether.

With all that in mind, and the $7,500 they say I owe them, they know I’m not going to hire an accountant at $150 an hour to wade through the thousand-plus receipts, payments, and supposed car-log entries from last year, since what I’d spend on an accountant in the San Francisco Bay Area could easily exceed $7,500. They also surely know I won’t do it myself, since it’s simply not worth my own time.

They have me in a pickle, even though it’s more than obvious that my busy self probably has far more in write-offs than I bothered to report in the first place. In fact, given how perturbed I feel at the IRS and its 82,000 full-time employees at this moment, if it were economical I’d re-file to claim more of my earnings back. But under the twisted game they’ve created in their 74,000-plus-page tax code, it’s not worth it.

Their way of operating has made me suspect that one of their main algorithms for deciding whom to audit resembles mafia morality: