
NVIDIA lined up quite a few partners at CES this year, including Audi and Mercedes, to use its powerful upcoming Xavier chip in autonomous vehicles. But days ago, Intel bought Mobileye for $15 billion to develop self-driving software and hardware for use across auto brands. To compete, automotive supplier Bosch announced a partnership with the graphics chip maker today to collaborate on an AI-powered self-driving computer intended for mass-market cars.

Mobileye corners about 70 percent of the market for supplying integrated cameras, chips, and software for advanced driver assistance systems (ADAS). Since Bosch competes directly with the company, the NVIDIA partnership is a deeper commitment to continue building its tech in-house. The graphics chip maker introduced its upcoming Xavier processor, meant to power the self-driving systems of tomorrow, back at CES, but partnering with the automotive component giant can help get the chip into automakers’ cars at scale. The companies aim to release their self-driving computer system in 2020, according to Reuters.

Read more

The workplace is going to look drastically different ten years from now. The coming of the Second Machine Age is quickly bringing massive changes with it. Manual jobs, such as lorry driving or house building, are being replaced by robotic automation, while accountants, lawyers, doctors and financial advisers are being supplemented and replaced by high-level artificial intelligence (AI) systems.

So what do we need to learn today about the jobs of tomorrow? Two things are clear. The robots and computers of the future will be based on a degree of complexity that will be impossible to teach to the general population in a few short years of compulsory education. And some of the most important skills people will need to work with robots will not be the things they learn in computing class.

There is little doubt that the workforce of tomorrow will need a different set of skills to navigate a new world of work. Current approaches to preparing young people for the digital economy are based on teaching programming and computational thinking. However, it looks as though human workers will not simply be replaced by automation but will instead work alongside robots. If this is the case, it will be essential that human/robot teams draw on each other’s strengths.

Read more

Billionaire entrepreneur Mark Cuban’s prediction for the future of the workforce includes more robots and fewer human workers.

“We’re about to go into a period with artificial intelligence, machine learning, deep learning, those things where we literally are going to see a change in the nature of employment,” Cuban said in an interview with CNN’s Jake Tapper.

In that same interview, he criticized President Trump’s leadership skills before calling Trump “technologically illiterate.”

Read more

A few ideas on self-awareness and self-aware AIs.


I’ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it is a little weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same to a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need its diaper changed every few hours might well be a factor at play here.)

Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we could talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids who are self-aware and have emotions, exactly like people, with all the advantages of being a machine, such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder whether it is actually feasible or sensible.

Don’t worry: I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that. That’s the opposite of my stance on the matter. I’m not making a moral argument either: assuming you can build an android that has the entire spectrum of human emotions, doing so is, morally speaking, no different from having a child. You don’t (and can’t) ask the child beforehand if it wants to be born, or if it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, least of all your yet-to-be-conceived potential baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.

In the next few years SpaceX and Virgin Galactic will be sending tourists into orbit, and during last year’s SpaceApps Challenge we brainstormed some possible applications for space robots.

Last night on the International Space Station, astronaut Thomas Pesquet showed the SPHERES robots testing software that will be used to clean up space junk. Smaller versions of these robots could be developed with multiple ports for a GoPro camera linked to a smartwatch app, enabling space selfies or 360-degree virtual reality recordings of the tourists’ trip. Having wireframed a Samsung Gear watch app for use on the International Space Station, and given the advances in the technology, it’s easy to see how Siri, Cortana, or Alexa could be incorporated into a SPHERES-type astromech robot to advise on comms, timetable scheduling, and the other apps required for day-to-day use on the station. Fun applications we came up with for the SpaceApps challenge were a version of space Quidditch and Jedi training for a SPHERES robot fitted with mini propulsion tanks.

The annual SpaceApps Challenge is a great way of stretching your tech skills and learning new ones. If you would like to host a SpaceApps event, the deadline is today:

Read more

This is nowhere near the power of the biggest systems, but it still allows us to participate in research and development powered by supercomputing.

The idea that a computer could deliver an increase in life expectancy arises for a number of reasons, Prof Desplat says. Major gains are expected from the emergence of personalised medicine: care specifically tailored to match your genetic make-up. This will be driven in the not-too-distant future by “deep artificial intelligence learning” run on a supercomputer. These systems will also deliver faster, more accurate early diagnosis, he says.

These computers are used in a variety of ways, from weather forecasting and climate modelling to energy usage modelling, statistical processing and seismic analysis when prospecting for oil and gas.

Read more

Right now, it’s easiest to think of an artificial intelligence algorithm as a specific tool, like a hammer. A hammer is really good at hitting things, but when you need a saw to cut something in half, it’s back to the toolbox. Need a face recognized? Train a facial recognition algorithm, but don’t ask it to recognize cows.

Alphabet’s AI research arm, DeepMind, is trying to change that with a new algorithm that can learn more than one skill. Algorithms that can learn multiple skills could make it far easier to add new languages to translators, remove bias from image recognition systems, or even solve new, complex problems using existing knowledge. The research, published in Proceedings of the National Academy of Sciences this week, is preliminary, as it only tests the algorithm on playing different Atari games, but it shows that multi-purpose algorithms are actually possible.

The problem DeepMind’s research tackles is called “catastrophic forgetting,” the company writes. If you train an algorithm to recognize faces and then try to retrain it to recognize cows, it will forget faces to make room for all the cow knowledge. Modern artificial neural networks use millions of mathematical equations to calculate patterns in data, which could be the pixels that make up a face or the series of words that make up a sentence. These equations are connected in various ways, and a network depends so heavily on some of them that it will begin to fail when they are even slightly tweaked for a different task. DeepMind’s new algorithm identifies and protects the equations most important for carrying out the original task, while letting the less important ones be overwritten.
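To make that mechanism concrete, here is a minimal sketch of the idea, which the PNAS paper calls elastic weight consolidation: after finishing task A, each weight is scored by how much it mattered, and training on task B adds a penalty against moving the high-scoring weights. This is an illustrative PyTorch sketch under stated assumptions (a toy network, random stand-in data, and an arbitrary penalty strength lam), not DeepMind’s actual implementation.

```python
# Minimal sketch of elastic weight consolidation (EWC) in PyTorch.
# The network size, data, and hyperparameters below are illustrative
# assumptions, not values from the paper.
import torch
import torch.nn.functional as F

# A tiny stand-in network; the real work used deep nets playing Atari.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2)
)

def importance_scores(model, inputs, targets):
    """Approximate per-weight importance for task A as the squared
    gradient of the task-A loss (a diagonal Fisher estimate)."""
    model.zero_grad()
    F.cross_entropy(model(inputs), targets).backward()
    return [p.grad.detach() ** 2 for p in model.parameters()]

# Pretend task A is done: score each weight and remember its value.
xa, ya = torch.randn(64, 4), torch.randint(0, 2, (64,))
fisher = importance_scores(model, xa, ya)
anchors = [p.detach().clone() for p in model.parameters()]

def ewc_loss(task_b_loss, lam=100.0):
    """Task-B loss plus a quadratic penalty that resists changing
    weights that mattered for task A; unimportant weights move freely."""
    penalty = sum(
        (f * (p - a) ** 2).sum()
        for f, p, a in zip(fisher, model.parameters(), anchors)
    )
    return task_b_loss + (lam / 2.0) * penalty

# One illustrative training step on task B.
xb, yb = torch.randn(64, 4), torch.randint(0, 2, (64,))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
optimizer.zero_grad()
ewc_loss(F.cross_entropy(model(xb), yb)).backward()
optimizer.step()
```

The key design choice mirrors the description above: the penalty is weighted per parameter, so weights the first task depended on are held near their old values while the rest remain free to learn the new task.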

Read more