
A brain-machine interface (BMI) implant leveraging AI.


You probably clicked on this article because the idea of using brain implants to allow artificial intelligence (AI) to read your brain sounds futuristic and fascinating. It is fascinating, but it’s not as futuristic as you might think. Before we start talking about brain implants and how to augment the human brain using AI, we need to put some context around human intelligence and why we might want to tinker with it.

We have floated the idea before that gene-editing techniques could promote genetic intelligence by editing genes at the germline. That’s one approach. As controversial as it might be, solid scientific research shows that genetics does play a role in intelligence. For those of us who are already alive and well, though, this sort of intelligence enhancement won’t work. This is where we might look toward augmented intelligence. This kind of brain augmentation will at first be preventative, aiming to assist people with age-associated brain disorders, for example. For augmented intelligence to be feasible, though, we need a read/write interface to the human brain. One company, Kernel, may be looking to address this with a technology that takes a page out of science fiction.


Read more

What if a simple algorithm were all it took to program tomorrow’s artificial intelligence to think like humans?

According to a paper published in the journal Frontiers in Systems Neuroscience, it may be that easy — or difficult. Are you a glass-half-full or half-empty kind of person?

Researchers behind the theory presented experimental evidence for the Theory of Connectivity — the theory that all of the brain’s processes are interconnected (massive oversimplification alert) — “that a simple mathematical logic underlies brain computation.” Simply put, an algorithm could map how the brain processes information. The painfully long research paper describes groups of similar neurons forming multiple attachments meant to handle basic ideas or information. These groupings form what the researchers call “functional connectivity motifs” (FCMs), which are responsible for every possible combination of ideas.
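To make the combinatorics concrete, here is a minimal sketch (names and inputs are illustrative, not from the paper) of how a fixed set of inputs yields every possible grouping — with i distinct inputs there are 2**i − 1 non-empty combinations, one motif per combination:

```python
from itertools import combinations

def functional_connectivity_motifs(inputs):
    """Enumerate every non-empty combination of the given inputs.

    Illustrative sketch: i distinct inputs yield 2**i - 1 possible
    groupings, each of which would be handled by its own clique of
    similar neurons under the theory.
    """
    motifs = []
    for k in range(1, len(inputs) + 1):
        motifs.extend(combinations(inputs, k))
    return motifs

inputs = ["sight", "sound", "smell"]
motifs = functional_connectivity_motifs(inputs)
print(len(motifs))  # 2**3 - 1 = 7 motifs
```

Note how the count grows exponentially: four inputs already give 15 motifs, ten give 1,023.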

Read more

When we see a photo of a dog bounding across the lawn, it’s pretty easy to imagine how the following moments played out. Well, scientists at MIT have just trained machines to do the same thing, with artificial intelligence software that can take a single image and use it to create a short video of the seconds that followed. The technology is still bare-bones, but it could one day make for smarter self-driving cars that are better prepared for the unexpected, among other applications.

The software uses a deep-learning algorithm that was trained on two million unlabeled videos amounting to a year’s worth of screen time. It actually consists of two separate neural networks that compete with one another: the first is taught to separate the foreground from the background and to identify the object in the image, which allows the model to determine what is moving and what isn’t, while the second judges whether the resulting video looks realistic, pushing the first to improve.

According to the scientists, this approach improves on other computer-vision technologies under development that can also create video of the future. Those take the information available in existing videos and stretch it out with computer-generated imagery, building each frame one at a time. The new software is claimed to be more accurate, producing up to 32 frames per second and building out entire scenes in one go.
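The foreground/background split described above can be pictured as a compositing step. This is a hedged sketch of the idea, not the MIT model: the network would output a moving foreground, a soft mask marking where motion happens, and a single static background frame, and the video is their blend (all shapes and names here are illustrative):

```python
import numpy as np

T, H, W = 8, 4, 4                    # frames, height, width (toy sizes)
rng = np.random.default_rng(0)

foreground = rng.random((T, H, W))   # per-frame moving content
mask = rng.random((T, H, W))         # 0 = background wins, 1 = foreground wins
background = rng.random((H, W))      # one static frame

# Broadcast the static background across all T frames, then composite:
# wherever the mask is near 0 the scene stays still, near 1 it moves.
video = mask * foreground + (1 - mask) * background
print(video.shape)  # (8, 4, 4)
```

With a mask of all zeros, every frame is just the static background — which is exactly why this decomposition lets the model learn what moves and what doesn’t.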

Read more

If you want to use one of today’s major VR headsets, whether the Oculus Rift, the HTC Vive, or the PS VR, you have to accept the fact that there will be an illusion-shattering cable that tethers you to the small supercomputer that’s powering your virtual world.

But researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) may have a solution in MoVR, a wireless virtual reality system. Instead of using Wi-Fi or Bluetooth to transmit data, the research team’s MoVR system uses high-frequency millimeter-wave radio to stream data from a computer to a headset wirelessly, at dramatically faster speeds than traditional technology.
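A back-of-envelope calculation shows why ordinary Wi-Fi or Bluetooth struggles here. Assuming a Rift/Vive-class headset (2160×1200 combined resolution at 90 Hz, uncompressed 24-bit color — illustrative figures, not from the CSAIL paper), the raw pixel stream alone is several gigabits per second:

```python
# Why a cable (or a multi-gigabit mmWave link) is needed: rough,
# illustrative numbers for a Rift/Vive-class headset.
width, height = 2160, 1200   # combined panel resolution
fps = 90                     # refresh rate in Hz
bits_per_pixel = 24          # uncompressed RGB

bits_per_second = width * height * fps * bits_per_pixel
gbps = bits_per_second / 1e9
print(round(gbps, 2))  # ~5.6 Gbit/s uncompressed
```

That figure sits well beyond typical real-world Wi-Fi throughput but inside the multi-gigabit range that millimeter-wave radios can deliver over short distances.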

There have already been a variety of approaches to this problem. Smartphone-based headsets such as Google’s Daydream View and Samsung’s Gear VR allow for untethered VR by simply offloading the computational work to a phone inside the headset. Then there are VR backpacks, which allow for a more mobile experience by building a computer that’s more easily carried. But both of these solutions still have significant limitations.

Read more

Shift 2: Open-source code, Node, and frameworks

JavaScript, once widely considered a toy language, has quickly taken over the web through Node, which has fostered an incredible open-source community. For those who are unfamiliar, Node is a way for JavaScript to run on a server. What’s so incredible about Node is that the same developers who were only writing client-side code (front-end web development) can now write backend code without switching languages.

In addition, there is an incredible community that rallies around and thrives off of open-source contributions. The infrastructure and open-source packages are very powerful, allowing developers to not just solve their own problems, but also to build in a way that solves problems for the entire community. Building a software product with Node today is like playing with Lego blocks; you spend most of your time simply connecting them.

Read more

Intel is making a huge push into AI and deep learning, and intends to build custom variants of its Xeon Phi hardware to compete in these markets. Several months ago, the Santa Clara company acquired Nervana, an AI startup, and this new announcement is seen as building on that momentum. AI and deep learning have become huge focuses for major companies in the past few years — Nvidia, Google, Microsoft, and a number of smaller firms are all jockeying for position, chasing breakthroughs, and building their own custom silicon solutions.

The upcoming Knights Mill is still pretty hazy, but Intel has stated that the chip will be up to 4x faster than existing Knights Landing hardware. Right now, the company is working on three separate forays into the AI / deep learning market. First up is Lake Crest, a product based on Nervana technology that existed prior to the Intel purchase. Nervana was working on an HBM-equipped chip with up to 32GB of memory, and that’s the product Intel is talking about rolling out to the wider market in the first half of 2017. Lake Crest will be followed by Knights Crest, a chip that takes Nervana’s technology and implements it side by side with Intel Xeon processors.

“The technology innovations from Nervana will be optimized specifically for neural networks to deliver the highest performance for deep learning, as well as unprecedented compute density with high-bandwidth interconnect for seamless model parallelism,” Intel CEO Brian Krzanich wrote in a recent blog post. “We expect Nervana’s technologies to produce a breakthrough 100-fold increase in performance in the next three years to train complex neural networks, enabling data scientists to solve their biggest AI challenges faster.”

Read more