
The design of protein sequences that can precisely fold into pre-specified 3D structures is a challenging task. A recently proposed deep-learning algorithm improves such designs when compared with traditional, physics-based protein design approaches.

ABACUS-R is trained on the task of predicting the amino acid (AA) at a given residue, using information about that residue’s backbone structure and the backbones and AAs of neighbouring residues in space. To do this, ABACUS-R uses the Transformer neural-network architecture (ref. 6), which offers flexibility in representing and integrating information between different residues. Although these aspects are similar to those of a previous network (ref. 2), ABACUS-R adds auxiliary training tasks, such as predicting secondary structure, solvent exposure and side-chain torsion angles. These outputs are not needed during design, but they help with training and increase sequence recovery by about 6%. To design a protein sequence, ABACUS-R uses an iterative ‘denoising’ process (Fig.
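The iterative denoising loop can be sketched as follows. This is a toy stand-in, not ABACUS-R itself: the hypothetical `predict_residue` function (here a trivial rule that picks the alphabetically smallest neighbouring AA) replaces the trained Transformer, and the `neighbors` map stands in for spatial adjacency in the backbone structure.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_residue(i, seq, neighbors):
    # Toy stand-in for the trained network: returns the alphabetically
    # smallest AA among residue i's spatial neighbours.
    return min(seq[j] for j in neighbors[i])

def design(length, neighbors, n_iters=20, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]  # random start
    for _ in range(n_iters):
        # Re-predict every position from its neighbours, sweep after sweep,
        # "denoising" the random sequence toward self-consistency.
        for i in range(length):
            seq[i] = predict_residue(i, seq, neighbors)
    return "".join(seq)
```

With a real model in place of the toy rule, each sweep makes every residue more consistent with its structural context, and the sequence settles to a fixed point.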

Once datasets are cleaned, data visualization remodels them into intelligible graphics that put actionable insights on full display.



Chances are you’ve heard the phrase “a picture is worth a thousand words.” What you may not know is that, depending on the context, this can be a somewhat misleading statement.

Hear us out. The human brain is hardwired to process images as much as 60,000 times faster than text, and visuals account for roughly 90% of the information we take in every day. These numbers make a convincing case that a picture deserves a little more credit than just a thousand words.

Normally, robotic arms are controlled by a GUI running on a host PC, or with some kind of analog system that maps human inputs to various degrees of rotation. However, Maurizio Miscio was able to build a custom robotic arm that is completely self-contained — thanks to a companion mobile app that resides on an old smartphone housed inside a control box.

Miscio started his project by making 3D models of each piece, most of which were 3D-printed. These included the gripper, various joints that each provide a single axis of rotation, and a large circular base that acts as a stable platform on which the arm can spin. He then set to work attaching a servo motor to each of the five rotational axes, along with a single SG90 micro servo for the gripper. These motors were connected to an Arduino Uno that also had an HC-05 Bluetooth® serial module for external communication.

To operate the arm, Miscio developed a mobile app with the help of MIT App Inventor, which presents the user with a series of buttons that rotate a particular servo motor to the desired angle. The app even lets a series of motions be recorded and “played back” to the Uno over Bluetooth for repeated, accurate movements.
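The record-and-playback idea can be sketched in a few lines. This is a hypothetical stand-in, not Miscio’s actual code: the Bluetooth link to the Uno is replaced by an in-memory servo state.

```python
class ArmController:
    # Hypothetical stand-in for the app-to-Uno link: commands that would be
    # sent over Bluetooth are applied to an in-memory servo state instead.
    def __init__(self, n_servos=6):
        self.angles = [90] * n_servos   # all servos start centred
        self.recording = []

    def move(self, servo, angle):
        angle = max(0, min(180, angle))  # clamp to the servo's physical range
        self.angles[servo] = angle
        self.recording.append((servo, angle))  # remember the command for playback

    def play_back(self):
        # Re-issue every recorded command in order, just as the app
        # replays a stored motion sequence to the Uno.
        for servo, angle in list(self.recording):
            self.angles[servo] = angle
```

Storing the command stream rather than raw sensor data is what makes the replayed motion exactly repeatable.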

Scientists have developed a new machine-learning platform that makes the algorithms that control particle beams and lasers smarter than ever before. Their work could help lead to the development of new and improved particle accelerators that will help scientists unlock the secrets of the subatomic world.

Daniele Filippetto and colleagues at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) developed the setup to automatically compensate for real-time changes to accelerator beams and other components, such as magnets. Their machine-learning approach is also better than contemporary beam-control systems at both understanding why things fail and using physics to formulate a response. A paper describing the research was published late last year in Scientific Reports.

“We are trying to teach physics to a chip, while at the same time providing it with the wisdom and experience of a senior scientist operating the machine,” said Filippetto, a staff scientist in the Accelerator Technology & Applied Physics Division (ATAP) at Berkeley Lab and deputy director of the Berkeley Accelerator Controls and Instrumentation (BACI) program.
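The compensation idea can be illustrated with a generic proportional-feedback sketch. This is not Berkeley Lab’s actual controller: a simple fixed gain stands in for the learned model, and the drift term stands in for real-time changes in the machine.

```python
def stabilize(target, drift_per_step=0.05, gain=0.6, steps=40):
    # Generic feedback loop: each step the beam drifts, a sensor reads the
    # beam position, and a corrective kick (e.g. a magnet trim) is applied.
    position, correction = 0.0, 0.0
    for _ in range(steps):
        position += drift_per_step + correction  # drift plus applied correction
        correction = gain * (target - position)  # controller reacts to the error
    return position
```

With the gain set to zero the beam walks away linearly; with feedback it settles near the target, leaving a small steady-state offset that an integral term or a learned model would be needed to remove.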

The biggest benchmarking dataset to date for a machine-learning technique designed with data privacy in mind has been released as open source by researchers at the University of Michigan.

Called federated learning, the approach trains learning models on end-user devices, like smartphones and laptops, rather than requiring the transfer of private data to central servers.

“By training in-situ on data where it is generated, we can train on larger real-world data,” explained Fan Lai, U-M doctoral student in computer science and engineering, who presents the FedScale training environment at the International Conference on Machine Learning this week.
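A minimal sketch of the federated-averaging idea (a toy one-parameter model, not FedScale’s actual API): each client computes an update on its own data, and only model weights, never the raw data, travel back to the server.

```python
def local_update(w, data, lr=0.1):
    # One gradient-descent step on the client's private (x, y) pairs for the
    # model y = w * x; the raw data never leaves this function.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, clients):
    # Server broadcasts the global weight, clients train locally,
    # and the server averages the returned weights (federated averaging).
    local_ws = [local_update(global_w, data) for data in clients]
    return sum(local_ws) / len(local_ws)

# Two clients, each holding private data that roughly follows y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1), (3.0, 6.3)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
```

The server learns a slope close to 2 without ever seeing either client’s data points, which is the privacy property the press release describes.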