Here’s a concept: your next open house showing is conducted by a robot.
UAE marketers get a first glimpse of the humanoid robot Pepper.
I like this feature on QC (quantum computing).
If you have trouble wrapping your mind around quantum physics, don’t worry — it’s even hard for supercomputers. The solution, according to researchers from Google, Harvard, Lawrence Berkeley National Laboratory and others? Why, use a quantum computer, of course. The team accurately predicted chemical reaction rates using a supercooled quantum circuit, a result that could lead to improved solar cells, batteries, flexible electronics and much more.
Chemical reactions are inherently quantum themselves — the team actually used a quote from Richard Feynman saying “nature isn’t classical, dammit.” The problem is that “molecular systems form highly entangled quantum superposition states, which require many classical computing resources in order to represent to sufficiently high precision,” according to the Google Research blog. Computing the lowest-energy state of propane, a relatively simple molecule, takes around ten days, for instance, and that figure is needed in order to get the reaction rate.
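To make that “many classical computing resources” point concrete, here’s a small Python sketch (my own illustration, not from the article) that counts the complex amplitudes, and thus the memory, needed to store an n-qubit wave function exactly:

```python
# Illustrative only: memory needed to store an n-qubit state vector exactly.
# An n-qubit superposition has 2**n complex amplitudes; at 16 bytes per
# complex double, memory doubles with every qubit (spin orbital) added.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30  # 16 bytes per complex128, in GiB
    print(f"{n:2d} qubits -> {amplitudes:,} amplitudes ~ {gib:,.1f} GiB")
```

By 50 qubits the exact state vector needs on the order of 16 million GiB, which is why exact classical simulation of even modest molecules hits a wall so quickly.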
That’s where the “Xmon” supercooled qubit quantum computing circuit (shown above) comes in. The device, known as a “variational quantum eigensolver” (VQE), is the quantum equivalent of a classical neural network. The difference is that you train a classical neural network (like Google’s DeepMind AI) to model classical data, and you train the VQE to model quantum data. “The quantum advantage of VQE is that quantum bits can efficiently represent the molecular wave function, whereas exponentially many classical bits would be required,” according to the blog post.
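The article doesn’t include the team’s actual code, but the variational idea is easy to sketch classically. Below is a toy, numpy-only simulation of the VQE loop: prepare a one-parameter trial state, evaluate the energy ⟨ψ|H|ψ⟩, and tune the parameter to minimize it. The single-qubit Hamiltonian coefficients and the Ry ansatz are my own illustrative choices, not the chemistry the researchers ran:

```python
import numpy as np

# Toy, classically simulated sketch of the VQE idea: prepare a parameterized
# trial state, measure the energy <psi|H|psi>, and tune the parameter to
# minimize it. On real hardware the state lives on qubits; here one qubit
# is simulated with a 2-vector. Hamiltonian coefficients are made up.
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = 0.5 * Z + 0.3 * X  # illustrative single-qubit "molecular" Hamiltonian

def trial_state(theta):
    # Ry(theta)|0> ansatz: a one-parameter family of trial wave functions.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = trial_state(theta)
    return psi @ H @ psi  # expectation value <psi|H|psi>

# Crude optimizer: scan the parameter and keep the lowest energy seen.
thetas = np.linspace(0, 2 * np.pi, 1000)
best = min(thetas, key=energy)
print(f"VQE estimate : {energy(best):.6f}")
print(f"Exact ground : {np.linalg.eigvalsh(H)[0]:.6f}")  # should match closely
```

On real hardware the energy evaluation happens on the quantum chip, which is where the exponential saving comes from; only the parameter update runs classically.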
Although BMI (brain-machine interface) technology is nothing new, I never get tired of highlighting it.
Now a team of researchers has come up with a way for one person to control multiple robots. The system works with a single controller who watches the drones while a computer reads his or her brain activity.
Open the hood of just about any electronic gadget and you probably will find printed circuit boards (PCBs)—most often in a leaf-green color—studded with processing, memory, data-relaying, graphics, and other types of chips and components, all interconnected with a labyrinth of finely embossed wiring. By challenging the technology community to integrate the collective functions hosted by an entire PCB onto a device approaching the size of a single chip, DARPA’s newest program is making a bid to usher in a fresh dimension of technology miniaturization.
“We are trying to push the massive amount of integration you typically get on a printed circuit board down into an even more compact format,” said Dr. Daniel Green, manager of the new program, whose acronym, “CHIPS,” is itself a typographic feat of miniaturization; the program’s full name is the Common Heterogeneous Integration and Intellectual Property (IP) Reuse Strategies Program. “It’s not just a fun acronym,” Green said. “The program is all about devising a physical library of component chips, or chiplets, that we can assemble in a modular fashion.”
A primary driver of CHIPS is to develop a novel, industry-friendly architectural strategy for designing and building new generations of microsystems in which the time and energy it takes to move signals—that is, data—between chips is reduced by factors of tens or even hundreds. “This is increasingly important for the data-intensive processing that we have to do as the data sets we are dealing with get bigger and bigger,” Green said. Although the program does not specify applications, the new architectural strategy at the program’s heart could open new routes to computational efficiencies required for such feats as identifying objects and actions in real-time video feeds, real-time language translation, and coordinating motion on-the-fly among swarms of fast-moving unmanned aerial vehicles (UAVs).
People are concerned about how AI and robotics are taking jobs, destroying livelihoods, reducing our earning capacity, and subsequently destroying the economy.
In anticipation, countries like Canada, India and Finland are running experiments to pilot the idea of “universal basic income” — the unconditional provision of a regular sum of money from the government to support livelihood independent of employment.
But what people aren’t talking about, and what’s getting my attention, is a forthcoming rapid demonetization of the cost of living: powered by developments in exponential technologies, the cost of housing, transportation, food, health care, entertainment, clothing, education and so on will fall, eventually approaching, believe it or not, zero.
Mercedes-Benz’s CityPilot autonomous bus technology just got a real-world, long-range test drive on the streets and highways of the Netherlands. One of the company’s Future Bus vehicles successfully followed a 20km Bus Rapid Transit route between Amsterdam’s Schiphol airport and the nearby town of Haarlem, navigating through tight turns, intersections and pedestrian areas all without the need for human input.
The CityPilot platform is based on a version of Daimler’s Highway Pilot autonomous trucking technology adapted to handle the specific needs of a city bus. With GPS, radar and a dozen cameras built into the vehicle itself, the bus can recognize traffic signals, pedestrians and other obstacles. The bus has a top speed of 70km/h (or about 43 mph) and all that data taken together allows the bus to position itself within inches of bus stops or raised accessibility platforms.
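The article doesn’t say how CityPilot actually combines those sensors, but inch-level docking of this kind is typically achieved by fusing a coarse position fix with a precise local measurement. Here’s a hypothetical, heavily simplified sketch of that idea, inverse-variance weighting of two independent distance estimates; all values are invented for illustration:

```python
# Hypothetical sketch of sensor fusion for precise bus-stop docking.
# Two noisy estimates of distance-to-platform (meters) are combined by
# inverse-variance weighting, the optimal blend for independent Gaussian
# measurements. All numbers are invented for illustration.
def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1 / var_a, 1 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1 / (w_a + w_b)
    return fused, fused_var

gps_dist, gps_var = 12.4, 4.0      # GPS fix: meter-level uncertainty
cam_dist, cam_var = 11.92, 0.0025  # camera marker tracking: ~5 cm sigma

dist, var = fuse(gps_dist, gps_var, cam_dist, cam_var)
print(f"fused distance: {dist:.3f} m (sigma ~ {var ** 0.5:.3f} m)")
```

Because the weights scale with confidence, the precise camera estimate dominates near the stop while GPS keeps the bus localized on the open road.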
Although regulations still require that a human operator sit behind the wheel in case of an emergency, the vehicle’s intelligent systems make for a much smoother ride for everyone. Unlike other autonomous vehicles, the bus is actually connected to the city network so it can communicate directly with traffic lights and other city infrastructure. The camera systems can even scan the road for potholes, so buses can avoid rough patches on their next run or share that data back to the city.
If there’s one thing the world’s most valuable companies agree on, it’s that their future success hinges on artificial intelligence.
Google is continuing to invest heavily in deep learning at a time when its head of machine learning, John Giannandrea, is calling this the artificial intelligence spring (as opposed to the AI winter of earlier times). The company’s Founders’ Letter this year mentions machine learning as many as five times, leaving no doubt that Google believes its advantages in this area will give it an edge in the coming years. In short, CEO Sundar Pichai wants to put artificial intelligence everywhere, and Google is marshaling its army of programmers to the task of remaking itself as a machine learning company from top to bottom.
About five years ago, a friend of mine at Microsoft (Mitch S.) had a vision of building a new security model around drone swarms and a form of BMI technology. I’m glad to see that vision coming true.
Scientists have discovered how to control multiple robotic drones using the human brain, an advance that can help develop swarms of search and rescue drones that are controlled just by thought.
A controller wears a skull cap outfitted with 128 electrodes wired to a computer. The device records electrical brain activity. If the controller moves a hand or thinks of something, certain areas light up. “I can see that activity from outside. Our goal is to decode that activity to control variables for the robots,” said Panagiotis Artemiadis of Arizona State University in the US. If the user is thinking about spreading the drones out, we know what part of the brain controls that thought, Artemiadis said.
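The article doesn’t describe the decoder itself, but EEG-based BMIs of this kind commonly reduce each electrode’s signal to a band-power feature and map the features through trained weights to a continuous control variable. Here’s a hypothetical sketch along those lines; the simulated signals, the random decoder weights, and the “spread” command are all stand-ins for illustration:

```python
import numpy as np

# Hypothetical sketch of EEG decoding for swarm control. Real systems train
# the decoder weights per user; here everything is simulated for illustration.
rng = np.random.default_rng(0)
fs, n_channels, n_samples = 256, 128, 512   # 2 s window from 128 electrodes

eeg = rng.standard_normal((n_channels, n_samples))  # stand-in for recorded EEG

def band_power(signal, lo, hi):
    # Average spectral power of one channel in a frequency band.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

# One feature per electrode: power in the 8-12 Hz mu band, which changes
# when a person imagines movement.
features = np.array([band_power(ch, 8, 12) for ch in eeg])

# A trained decoder would learn these weights from calibration trials;
# random weights stand in here. The output drives how far the drones spread.
weights = rng.standard_normal(n_channels) / n_channels
spread_command = float(features @ weights)
print(f"decoded swarm-spread command: {spread_command:+.3f}")
```

In a real system the weights would be fit during a calibration session, mapping each user’s imagined-movement activity to the swarm behaviors Artemiadis describes.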