Archive for the ‘robotics/AI’ category: Page 2116

Nov 28, 2016

Researchers may have uncovered an algorithm that explains intelligence

Posted in categories: information science, mathematics, neuroscience, robotics/AI

What if a simple algorithm were all it took to program tomorrow’s artificial intelligence to think like humans?

According to a paper published in the journal Frontiers in Systems Neuroscience, it may be exactly that simple. Or that difficult, depending on whether you’re a glass-half-full or a glass-half-empty kind of person.

Researchers behind the theory presented experimental evidence for the Theory of Connectivity (massive oversimplification alert: the theory that all of the brain’s processes are interconnected), namely “that a simple mathematical logic underlies brain computation.” Simply put, an algorithm could map how the brain processes information. The painfully long research paper describes groups of similar neurons forming multiple attachments meant to handle basic ideas or pieces of information. These groupings form what the researchers call “functional connectivity motifs” (FCMs), which are responsible for every possible combination of ideas.
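The arithmetic behind that claim is easy to sketch. The published theory describes a power-of-two logic: for i distinct inputs there are N = 2^i − 1 neuronal cliques, one for every non-empty combination of inputs. The toy sketch below simply enumerates those combinations; it illustrates the counting argument, not the paper’s actual model:

```python
from itertools import combinations

def clique_combinations(inputs):
    """Enumerate every non-empty combination of inputs.

    Under the power-of-two logic, each combination would be
    handled by its own group (clique) of similar neurons.
    """
    cliques = []
    for size in range(1, len(inputs) + 1):
        cliques.extend(combinations(inputs, size))
    return cliques

# Three distinct inputs yield 2**3 - 1 = 7 cliques.
print(len(clique_combinations(["A", "B", "C"])))  # 7
```

Adding a fourth input doubles the count (plus one): 15 cliques, which is why the number of motifs grows so quickly with the number of inputs.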

Nov 28, 2016

MIT’s deep-learning software produces videos of the future

Posted in categories: information science, robotics/AI, transportation

When we humans see a photo of a dog bounding across the lawn, it’s pretty easy to imagine how the following moments played out. Well, scientists at MIT have just trained machines to do the same thing, with artificial intelligence software that can take a single image and use it to create a short video of the seconds that followed. The technology is still bare-bones, but it could one day make for smarter self-driving cars that are better prepared for the unexpected, among other applications.

The software uses a deep-learning algorithm that was trained on two million unlabeled videos amounting to a year’s worth of screen time. It actually consists of two separate neural networks that compete with one another. The first has been taught to separate the foreground and the background and to identify the object in the image, which allows the model to then determine what is moving and what isn’t.
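The phrase “two separate neural networks that compete with one another” describes an adversarial setup: a generator proposes video frames while a discriminator judges whether they look real. Assuming the standard adversarial (GAN-style) objectives rather than MIT’s exact formulation, the two competing losses can be sketched like this:

```python
import math

def adversarial_losses(d_real, d_fake):
    """Binary cross-entropy losses for the two competing networks.

    d_real: discriminator's score (0..1) on a real video clip
    d_fake: its score on a clip produced by the generator
    """
    # The discriminator wants d_real -> 1 and d_fake -> 0 ...
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    # ... while the generator wants d_fake -> 1.
    g_loss = -math.log(d_fake)
    return d_loss, g_loss

# A discriminator that is fooled half the time:
d_loss, g_loss = adversarial_losses(0.5, 0.5)
print(round(d_loss, 3), round(g_loss, 3))  # 1.386 0.693
```

Training pushes the two losses against each other: as the generator’s clips get more convincing, the discriminator’s loss rises, which in turn forces the discriminator to find subtler cues.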

Nov 28, 2016

MIT’s new method of radio transmission could one day make wireless VR a reality

Posted in categories: internet, mobile phones, robotics/AI, supercomputing, virtual reality

If you want to use one of today’s major VR headsets, whether the Oculus Rift, the HTC Vive, or the PS VR, you have to accept the fact that there will be an illusion-shattering cable that tethers you to the small supercomputer that’s powering your virtual world.

But researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) may have a solution in MoVR, a wireless virtual reality system. Instead of using Wi-Fi or Bluetooth to transmit data, the research team’s MoVR system uses high-frequency millimeter wave radio to stream data from a computer to a headset wirelessly at dramatically faster speeds than traditional technology.
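A quick back-of-the-envelope calculation shows why ordinary Wi-Fi isn’t enough here. Assuming a Rift/Vive-class headset (a combined 2160 x 1200 panel at 90 Hz, 24 bits per pixel; these figures are illustrative, not from the article), the uncompressed video feed alone runs to several gigabits per second:

```python
def uncompressed_bitrate(width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) video bitrate in bits per second."""
    return width * height * fps * bits_per_pixel

# Rift/Vive-class combined panel at 90 Hz:
gbps = uncompressed_bitrate(2160, 1200, 90) / 1e9
print(round(gbps, 2))  # 5.6
```

At roughly 5.6 Gbps uncompressed, even a generous 1 Gbps Wi-Fi link falls far short, while millimeter-wave radios can carry multiple gigabits per second, which is the gap MoVR is aimed at.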

There have already been a variety of approaches to this problem. Smartphone-based headsets such as Google’s Daydream View and Samsung’s Gear VR go untethered by simply offloading the computational work to a phone inside the headset. Then there are VR backpacks, which allow for a more mobile VR experience by packing the computer into something that’s more easily carried. But both of these solutions still have significant limitations.

Nov 28, 2016

Genevieve Bell: ‘Humanity’s greatest fear is about being irrelevant’

Posted in categories: information science, robotics/AI

The Australian anthropologist explains why being scared of AI and big data has more to do with our fear of each other than with killer robots.

Nov 27, 2016

Google’s AI Can Now Translate Between Languages It Wasn’t Taught to Translate Between

Posted in category: robotics/AI

In Brief

  • The AI can translate between a pair of languages with reasonable accuracy if it has already translated each of them into a common third language.
  • This removes a significant amount of human input, and it opens the door to AI that learns and solves problems better than ever.
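The bridging idea can be caricatured as dictionary composition: if the system can get from language A into English and from English into language B, it can chain the two. The word lists below are hypothetical, and Google’s system actually learns a shared internal representation rather than literally pivoting through English, so treat this purely as an intuition pump:

```python
# Hypothetical word-level "translations" into and out of English.
pt_to_en = {"cachorro": "dog", "casa": "house"}
en_to_ja = {"dog": "inu", "house": "ie"}

def pivot(word, into_bridge, out_of_bridge):
    """Translate via a bridge language the model already knows."""
    return out_of_bridge[into_bridge[word]]

# Portuguese -> English -> Japanese, with no direct pt->ja data.
print(pivot("cachorro", pt_to_en, en_to_ja))  # inu
```

The remarkable part of the research is that the model skips the explicit middle step: it maps both languages into the same internal space and translates directly between the pair it was never trained on.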

Nov 27, 2016

What happens when bots start writing code instead of humans

Posted in categories: internet, robotics/AI

Shift 2: Open-source code, Node, and frameworks

JavaScript was once widely considered a toy language, but Node, a way to run JavaScript on a server, has quickly taken over the web and fostered an incredible open-source community. What’s so incredible about Node is that the same developers who were once writing only client-side code (front-end web development) can now write backend code without switching languages.

In addition, there is an incredible community that rallies around and thrives on open-source contributions. The infrastructure and open-source packages are very powerful, allowing developers not just to solve their own problems, but also to build in a way that solves problems for the entire community. Building a software product with Node today is like playing with Lego blocks; you spend most of your time simply connecting them.

Nov 27, 2016

Intel announces major AI push with upcoming Knights Mill Xeon Phi, custom silicon

Posted in categories: innovation, robotics/AI

Intel is making a huge push into AI and deep learning, and intends to build custom variants of its Xeon Phi hardware to compete in these markets. Several months ago, the Santa Clara corporation bought Nervana, an AI startup, and this new announcement is seen as building on that momentum. AI and deep learning have become huge focuses of major companies in the past few years — Nvidia, Google, Microsoft, and a number of smaller firms are all jockeying for position, chasing breakthroughs, and building their own custom silicon solutions.

The upcoming Knights Mill is still pretty hazy, but Intel has stated that the chip will be up to 4x faster than existing Knights Landing hardware. Right now, the company is working on three separate forays into the AI / deep learning market. First up, there’s Lake Crest. This product is based on Nervana technology that existed prior to the Intel purchase. Nervana was working on an HBM-equipped chip with up to 32GB of memory, and that’s the product Intel is talking about rolling out to the wider market in the first half of 2017. Lake Crest will be followed by Knights Crest, a chip that implements Nervana’s technology side by side with Intel Xeon processors.

“The technology innovations from Nervana will be optimized specifically for neural networks to deliver the highest performance for deep learning, as well as unprecedented compute density with high-bandwidth interconnect for seamless model parallelism,” Intel CEO Brian Krzanich wrote in a recent blog post. “We expect Nervana’s technologies to produce a breakthrough 100-fold increase in performance in the next three years to train complex neural networks, enabling data scientists to solve their biggest AI challenges faster.”

Nov 27, 2016

CERN introduces Large Hadron Collider’s robotic inspectors

Posted in categories: particle physics, robotics/AI, transportation

Since the Large Hadron Collider (LHC) needs to be in tip-top shape to discover new particles, it has two inspectors making sure everything’s in working order. The two of them are called TIM, short not for Timothy, but for Train Inspection Monorail. These mini autonomous monorails that keep an eye on the world’s largest particle collider follow a pre-defined route and get around using tracks suspended from the ceiling. According to CERN’s post introducing the machines, the tracks are remnants from the time the tunnel housed the Large Electron-Positron collider (LEP) instead of the LHC. The LEP’s monorail was bigger, but not quite as high-tech: it was mainly used to transport materials and workers.

As for what the machines can do, the answer is “quite a lot.” They can monitor the tunnel’s structure, oxygen percentage, temperature and communication bandwidth in real time. Both TIMs can also take visual and infrared images, as well as pull small wagons behind them if needed. You can watch them in action below; as you can see, they’re not much to look at with their boxy silver appearance. But without them, it would be tough to monitor a massive circular tunnel with a 17-mile circumference.
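The monitoring job described here boils down to comparing live readings against safe ranges. A minimal sketch, with entirely made-up thresholds (the article doesn’t give CERN’s actual alarm limits):

```python
def check_reading(oxygen_pct, temp_c, bandwidth_mbps):
    """Flag tunnel readings that fall outside safe ranges.

    The thresholds here are invented for illustration only.
    """
    alarms = []
    if oxygen_pct < 19.5:            # typical oxygen-deficiency cutoff
        alarms.append("low oxygen")
    if not 10 <= temp_c <= 30:       # hypothetical comfort band
        alarms.append("temperature out of range")
    if bandwidth_mbps < 1:           # hypothetical link-health floor
        alarms.append("communication degraded")
    return alarms

print(check_reading(20.9, 21, 54))   # [] -> all clear
print(check_reading(18.0, 35, 0.2))  # three alarms
```

In practice a TIM would stream such readings continuously along its route, so each alarm can be tied to a position in the tunnel.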

Nov 27, 2016

Artificial Humans Could Be Even More Realistic With These New Nylon Muscles

Posted in categories: biotech/medical, robotics/AI

Scientists have developed a new type of artificial muscle fibre based on nylon, which could one day render our future robot companions more realistic than ever.

Unlike previous synthetic muscles, this technology is cheap and simple to produce, which makes it a better option if we want our droids to be able to flex, move, and repair themselves in much the same way as flesh-and-blood people.

Nov 27, 2016

Artificial intelligence and robotics are revolutionising business

Posted in categories: business, robotics/AI

And leading the way is the online grocery store Ocado.
