
Wearable brain-machine interface turns intentions into actions

A new wearable brain-machine interface (BMI) system could improve the quality of life for people with motor dysfunction or paralysis, even those with locked-in syndrome, a condition in which a person is fully conscious but unable to move or communicate.

A multi-institutional, international team of researchers led by the lab of Woon-Hong Yeo at the Georgia Institute of Technology combined wireless soft scalp electronics and virtual reality in a BMI system that allows the user to imagine an action and wirelessly control a wheelchair or robotic arm.

The team, which included researchers from the University of Kent (United Kingdom) and Yonsei University (Republic of Korea), describes the new motor imagery-based BMI system this month in the journal Advanced Science.

DeepMind’s AlphaFold2 Predicts Protein Structures with Atomic-Level Accuracy

The prediction of protein structures from amino acid sequence information alone, known as the “protein folding problem,” has been an important open research question for more than 50 years. In the fall of 2020, DeepMind’s neural network model AlphaFold took a huge leap forward in solving this problem, outperforming some 100 other teams in the Critical Assessment of Structure Prediction (CASP) challenge, regarded as the gold-standard accuracy assessment for protein structure prediction. The success of the novel approach is considered a milestone in protein structure prediction.

This week, the DeepMind paper Highly Accurate Protein Structure Prediction with AlphaFold was published in the prestigious scientific journal Nature. The paper introduces AlphaFold2, a completely redesigned and open-sourced model that can predict protein structures with atomic-level accuracy.

Although machine learning researchers have long sought to develop computational methods for predicting 3D protein structures from protein sequences, there had been limited progress along this path, chiefly due to the computational intractability of molecular simulation, the context-dependence of protein stability, and the difficulty of producing sufficiently accurate models for protein physics.

Effectively using GPT-J and GPT-Neo, the GPT-3 open-source alternatives, with few-shot learning

GPT-J and GPT-Neo, the open-source alternatives to GPT-3, are among the best NLP models as of this writing. But using them effectively can take practice. Few-shot learning is an NLP technique that works very well with these models.

GPT-J and GPT-Neo

GPT-Neo and GPT-J are both open-source NLP models, created by EleutherAI (a collective of researchers working to open source AI).
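Few-shot learning means prepending a handful of worked examples to the prompt so the model picks up the pattern and continues it, with no fine-tuning required. Here is a minimal sketch of how such a prompt might be assembled (the sentiment task, example tweets, and the `###` separator are invented for illustration, not taken from either model's documentation):

```python
def build_few_shot_prompt(examples, query, task="Sentiment"):
    """Build a few-shot prompt: each example demonstrates the
    input/output pattern; the final line is left open for the
    model to complete."""
    lines = []
    for text, label in examples:
        lines.append(f"Tweet: {text}\n{task}: {label}\n###")
    lines.append(f"Tweet: {query}\n{task}:")
    return "\n".join(lines)

examples = [
    ("I love this new phone!", "Positive"),
    ("Worst service I have ever had.", "Negative"),
    ("The package arrived on time.", "Positive"),
]
prompt = build_few_shot_prompt(examples, "The battery died after an hour.")
```

The resulting string would then be sent to GPT-J or GPT-Neo (for instance via the Hugging Face `transformers` text-generation pipeline), and the model's next few tokens read off as the predicted label.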

Finally: Here’s America’s New 6th-Generation Fighter Jet

The future of fighter jets is arriving, seemingly with more international competition and disruptive technology than predicted. As the US forges ahead to become the first nation to field a sixth-generation fighter jet, other major air forces fear falling behind in the race. The US, Europe, Japan, and China have made enormous investments in pursuit of next-level capabilities such as stealth, robust avionics, and advanced navigation systems, each aiming to field the most technologically advanced fighter jet. But one trend is common to all 6th-gen prototypes: artificial intelligence is about to usher in a new era of air combat.

Nvidia releases TensorRT 8 for faster AI inference

Nvidia today announced the release of TensorRT 8, the latest version of its software development kit (SDK) for AI and machine learning inference. Built for deploying AI models that can power search engines, ad recommendations, chatbots, and more, TensorRT 8 cuts inference time in half for language queries compared with the previous release, Nvidia claims.

Models are growing increasingly complex, and demand is on the rise for real-time deep learning applications. According to a recent O’Reilly survey, 86.7% of organizations are now considering, evaluating, or putting into production AI products. And Deloitte reports that 53% of enterprises adopting AI spent more than $20 million in 2019 and 2020 on technology and talent.

TensorRT essentially tunes a trained model to strike a balance between the smallest model size and the highest accuracy for the system it will run on. Nvidia claims that TensorRT-based apps perform up to 40 times faster than CPU-only platforms during inference, and that TensorRT 8-specific optimizations allow BERT-Large — one of the most popular Transformer-based models — to run in 1.2 milliseconds.
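One common way an inference optimizer shrinks a model while largely preserving accuracy is reduced-precision quantization. The sketch below is plain Python, not the TensorRT API, and illustrates the general idea of symmetric int8 quantization: floating-point weights are mapped onto 8-bit integers with a single scale factor, cutting storage from 4 bytes to 1 byte per weight.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats in [-max, max]
    onto integers in [-127, 127] using one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; each value is within one
    quantization step (scale) of the original."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Real toolkits go further, calibrating scales per layer against sample data so that accuracy loss stays negligible, but the size/precision trade-off they manage is the one shown here.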

Untether AI nabs $125M for AI acceleration chips

Untether AI, a startup developing custom-built chips for AI inferencing workloads, today announced it has raised $125 million from Tracker Capital Management and Intel Capital. The round, which was oversubscribed and included participation from Canada Pension Plan Investment Board and Radical Ventures, will be used to support customer expansion.

Increased use of AI — along with the technology’s hardware requirements — poses a challenge for traditional datacenter compute architectures. Untether is among the companies proposing at-memory or near-memory computation as a solution. Essentially, this type of hardware builds memory and logic into an integrated circuit package. In a “2.5D” near-memory compute architecture, processor dies are stacked atop an interposer that links the components and the board, incorporating high-speed memory to bolster chip bandwidth.

Founded in 2018 by CTO Martin Snelgrove, Darrick Wiebe, and Raymond Chik, Untether says it continues to make progress toward mass-producing its RunA1200 chip, which it says combines energy efficiency with computational robustness. Snelgrove and Wiebe claim that data in their architecture moves up to 1,000 times faster than is typical, which would be a boon for machine learning, where datasets are frequently dozens or hundreds of gigabytes in size.