According to Steve Jurvetson, venture capitalist and board member at pioneering quantum computing company D-Wave (as well as others, such as Tesla and SpaceX), Google has what may be a “watershed” quantum computing announcement scheduled for early next month. This comes as D-Wave, which notably also counts the Mountain View company as a customer, has just sold a 1,000+ qubit D-Wave 2X quantum computer to national security research institution Los Alamos…
It’s not exactly clear what this announcement will be (besides being important for the future of computing), but Jurvetson says to “stay tuned” for more information coming on December 8th. This is the first we’ve heard of a December 8th date for a Google announcement, and considering its purported potential to be a turning point in computing, it may well mean an actual event is in the cards.
Notably, Google earlier this year entered a new deal with NASA and D-Wave to continue its research in quantum computing. D-Wave’s press release at the time had this to say:
Oh Japan, how I love your beautiful insanity.
If you’re a fan of virtual musicians with computer-generated bodies and voices, and you live in North America, then do I have news for you.
Hatsune Miku, Japan’s “virtual pop star,” is coming to the US and Canada next year for a seven-city, synth-filled tour—her first tour in this neck of the woods. Miku herself may be a digital illusion, but her unique impact on the music industry is very real.
Her bio describes her to her 2.5 million Facebook fans as “a virtual singer who can sing any song that anybody composes.” She debuted in 2007 as a voicebank for Vocaloid, singing-synthesizer software, in a package developed by Crypton Future Media, a Sapporo-based music technology company. Vocaloid software generates a human-sounding singing voice, but without any actual humans.
Yes, conceivably. And if/when we achieve the levels of technology necessary for simulation, the universe will become our playground. Eagleman’s latest book is “The Brain: The Story of You” (http://goo.gl/2IgDRb).
Transcript — The big picture in modern neuroscience is that you are the sum total of all the pieces and parts of your brain. It’s a vastly complicated network of neurons, almost 100 billion neurons, each of which has 10,000 connections to its neighbors. So we’re talking a thousand trillion connections. It’s a system of such complexity that it bankrupts our language. But fundamentally, it’s only three pounds, and we’ve got it cornered and it’s right there and it’s a physical system.
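For scale, here is the arithmetic behind those figures, using the transcript’s own round numbers:

```latex
\underbrace{10^{11}}_{\text{neurons}} \times \underbrace{10^{4}}_{\text{connections per neuron}} = 10^{15} \text{ connections — a thousand trillion.}
```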
The computational hypothesis of brain function suggests that the physical wetware isn’t the stuff that matters; what matters is the algorithms running on top of the wetware. In other words: What is the brain actually doing? What is it implementing, software-wise? Hypothetically, we should be able to take the physical stuff of the brain and reproduce what it’s doing — reproduce its software on other substrates. So we could take your brain and reproduce it out of beer cans and tennis balls and it would still run just fine. And if we said, “Hey, how are you feeling in there?” this beer can/tennis ball machine would say, “Oh, I’m feeling fine. It’s a little cold, whatever.”
It’s also hypothetically possible that we could copy your brain and reproduce it in silico, which means running a simulation of your brain on a computer, as zeroes and ones. The challenge of reproducing a brain shouldn’t be underestimated. It would take something like a zettabyte of computational capacity to run a simulation of a human brain. And that is the entire computational capacity of our planet right now.
There’s a lot of debate about whether we’ll get to a simulation of the human brain in 50 years or 500 years, but those would probably be the bounds. It’s going to happen somewhere in there. It opens up the whole universe for us because, you know, these meat puppets that we come to the table with aren’t any good for interstellar travel. But if we could, you know, put you on a flash drive or whatever the equivalent of that is a century from now and launch you into outer space and your consciousness could be there, that could get us to other solar systems and other galaxies. We will really be entering an era of post-humanism or trans-humanism at that point.
“For those struggling to understand what Apple is up to, it might be best to imagine the Apple logo as a giant, rose gold-colored apple sculpture that’s being polished beyond perfection, to some sort of ideal, a level of quality that is so undeniable that no competitor dares forget it.”
Posted by Jeff Dean, Senior Google Fellow, and Rajat Monga, Technical Lead.
Deep Learning has had a huge impact on computer science, making it possible to explore new frontiers of research and to develop amazingly useful products that millions of people use every day. Our internal deep learning infrastructure DistBelief, developed in 2011, has allowed Googlers to build ever larger neural networks and scale training to thousands of cores in our datacenters. We’ve used it to demonstrate that concepts like “cat” can be learned from unlabeled YouTube images, to improve speech recognition in the Google app by 25%, and to build image search in Google Photos. DistBelief also trained the Inception model that won ImageNet’s Large Scale Visual Recognition Challenge in 2014, and drove our experiments in automated image captioning as well as DeepDream.
While DistBelief was very successful, it had some limitations. It was narrowly targeted to neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure — making it nearly impossible to share research code externally.
Major technological changes have a transformative effect on every aspect of human life. Increasingly intelligent programs are responsible for paradigm shifts at a steadily accelerating rate, a trend which acceleration theories suggest is all but guaranteed to continue.
We explore some of the most disruptive applications of artificial intelligence, examining in particular the impact of computer trading programs (algotraders) on stock markets. We then survey imminent technologies, such as autonomous military robots, and their consequences (e.g., on job markets). We conclude with a discussion of the potentially irreversible consequences of this trend, including the prospect of superintelligence.
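As a concrete, if simplistic, illustration of the algotraders mentioned above, here is a minimal sketch of my own (not drawn from the paper), assuming one of the oldest mechanical trading rules, the moving-average crossover:

```python
import numpy as np
import pandas as pd

def crossover_signals(prices, short=5, long=20):
    """+1 where the short moving average crosses above the long one (a buy),
    -1 where it crosses below (a sell), 0 otherwise."""
    s = pd.Series(prices, dtype=float)
    fast = s.rolling(short).mean()
    slow = s.rolling(long).mean()
    above = (fast > slow).astype(int)           # 1 while fast is on top
    return above.diff().fillna(0).astype(int)   # transitions are the signals

# A toy random-walk price path; a real algotrader would consume a live feed.
prices = 100 + np.cumsum(np.random.default_rng(1).normal(0.0, 1.0, 120))
signals = crossover_signals(prices)
print(signals[signals != 0])                    # the buy/sell events
```

Real trading programs are vastly more sophisticated, but the essential point the abstract raises stands even at this scale: the rule acts on the market with no human in the loop.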
The brain is a great information processor, but one that doesn’t care about where information comes from.
Sight, scent, taste, sound, touch — all of our precious senses, once communicated to the brain, are transformed into simple electrical pulses. Although we consciously perceive the world through light rays and sound waves, the computing that supports those experiences is all one tone — electrical.
Simply put, all of our senses are the same to our brain.
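To make the idea concrete, here is a toy illustration (mine, not from the article): two physically different signals, light intensity and sound amplitude, reduced to the same kind of representation, a train of electrical pulses. The threshold encoder is a deliberately crude stand-in for real sensory transduction:

```python
import numpy as np

def to_spike_train(signal, threshold=0.5):
    """Toy transducer: any signal becomes a train of 0/1 'electrical' pulses."""
    signal = np.asarray(signal, dtype=float)
    span = signal.max() - signal.min()
    normalized = (signal - signal.min()) / span if span else np.zeros_like(signal)
    return (normalized > threshold).astype(int)

# Two physically different stimuli...
light = np.sin(np.linspace(0, 4 * np.pi, 20)) ** 2        # light intensity
sound = np.abs(np.random.default_rng(0).normal(size=20))  # sound amplitude

# ...arrive in the same "one tone" electrical format.
print(to_spike_train(light))
print(to_spike_train(sound))
```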
Facebook is now tackling a problem that has eluded computer scientists for decades: how to build software that can beat humans at Go, the 2,500-year-old strategy board game, according to a report today from Wired. Because of Go’s structure — you place black or white stones at the intersections of lines on a 19-by-19 grid — the game has far more possible configurations than chess, despite its simple ruleset. The sheer number of possible arrangements makes it difficult to design systems that can look far enough ahead to adequately assess a good play the way humans can.
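A rough back-of-envelope calculation (mine, not from the Wired report) shows why: even the loose upper bound of three states per intersection dwarfs commonly cited estimates for chess:

```python
# Each of the 361 intersections can be empty, black, or white: a loose
# upper bound on board configurations (legality rules trim it somewhat).
go_upper_bound = 3 ** (19 * 19)

# A commonly cited rough estimate of chess's state space.
chess_estimate = 10 ** 47

print(f"Go upper bound: ~10^{len(str(go_upper_bound)) - 1}")  # ~10^172
print(f"Chess estimate: ~10^47")
```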
“We’re pretty sure the best [human] players end up looking at visual patterns, looking at the visuals of the board to help them understand what are good and bad configurations in an intuitive way,” Facebook chief technology officer Mike Schroepfer said. “So, we’ve taken some of the basics of game-playing AI and attached a visual system to it, so that we’re using the patterns on the board—a visual recognition system—to tune the possible moves the system can make.”
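Schroepfer’s description maps onto a now-common pattern: a network reads the board as an image, assigns each legal move a prior, and those priors narrow what the game-tree search explores. The sketch below is a hypothetical illustration of that pattern only, not Facebook’s actual system; `move_priors` and `prune_moves` are invented names, and the untrained linear scorer stands in for a real convolutional network:

```python
import numpy as np

def move_priors(board, weights):
    """Score every intersection from the board 'image' (a toy linear
    stand-in for the convolutional pattern-recognition network)."""
    features = board.reshape(-1)                   # 19x19 board -> 361 vector
    scores = weights @ features                    # one score per move
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                         # softmax over 361 moves

def prune_moves(board, weights, top_k=10):
    """Keep only the moves the 'visual system' rates highest, so the
    game-tree search has far fewer branches to explore."""
    priors = move_priors(board, weights)
    legal = np.flatnonzero(board.reshape(-1) == 0) # empty points only
    ranked = legal[np.argsort(priors[legal])[::-1]]
    return ranked[:top_k]

rng = np.random.default_rng(0)
board = rng.choice([0, 1, -1], size=(19, 19), p=[0.7, 0.15, 0.15])
weights = rng.normal(size=(361, 361))              # untrained, illustrative
print(prune_moves(board, weights))                 # candidate moves to search
```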
The project is part of Facebook’s broader efforts in so-called deep learning. That subfield of artificial intelligence is founded on the idea that replicating the way the human brain works can unlock statistical and probabilistic capabilities far beyond the capacity of modern-day computers. Facebook wants to advance its deep learning techniques for wide-ranging uses within its social network. For instance, Facebook is building a version of its website for the visually impaired that will use natural language processing to take audio input from users — “what object is the person in the photo holding?” — analyze it, and respond with relevant information. Facebook’s virtual assistant, M, will also come to rely on this type of technology to analyze and learn from users’ requests and respond in ways that previously only humans could.