The U.S. Air Force aims to have an operational combat drone by 2023. The service plans to build out a family of unmanned aircraft, known as Skyborg, capable of carrying weapons and actively participating in combat. The Air Force’s goal is to build up a large fleet of armed, relatively disposable jets that don’t need conventional runways to take off and land.

The Air Force, according to Aviation Week & Space Technology, expects to have the first operational Skyborg aircraft ready by 2023. Skyborg will be available with both subsonic and supersonic engines, indicating both attack and fighter jet versions. The basic design (or designs) will likely be stealthy, carrying guided bombs, air defense suppression missiles, and air-to-air missiles inside internal weapons bays. Interestingly, according to AvWeek, the Air Force is considering Skyborg as a replacement not only for the MQ-9 Reaper attack drone but also for early versions of the F-16 manned fighter.

Neurons, specialized cells that transmit nerve impulses, have long been known to be a vital element in the functioning of the human brain. Over the past century, however, neuroscience research has given rise to the false belief that neurons are the only cells that can process and learn information. This misconception, sometimes called the ‘neurocomputing dogma’, is far from true.

An astrocyte is a different type of brain cell that has recently been found to do a lot more than merely fill up the spaces between neurons, as researchers believed for over a century. Studies are finding that these cells also play key roles in brain functions, including learning and central pattern generation (CPG), which is the basis for critical rhythmic behaviors such as breathing and walking.

Although astrocytes are now known to underlie numerous brain functions, most existing brain-inspired computing approaches target only the structure and function of neurons. Aware of this gap in the existing literature, researchers at Rutgers University are developing brain-inspired algorithms that also account for and replicate the functions of astrocytes. In a paper pre-published on arXiv and set to be presented at the ICONS 2020 conference in July, they introduce a neuromorphic central pattern generator (CPG) modulated by artificial astrocytes that successfully entrained several rhythmic walking behaviors in their in-house robots.
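To make the idea concrete, here is a minimal, self-contained sketch of an astrocyte-modulated CPG, not the Rutgers implementation: two mutually inhibiting leaky-integrator neurons form a half-center oscillator, and a hypothetical astrocyte variable slowly integrates their activity and scales the inhibition strength. All parameter values and the astrocyte coupling rule are illustrative assumptions.

import numpy as np

# Illustrative half-center CPG: two mutually inhibiting leaky-integrator
# neurons (a Matsuoka-style oscillator) with a hypothetical "astrocyte"
# variable that slowly tracks overall activity and modulates the inhibition.
def simulate_cpg(steps=5000, dt=1e-3):
    tau, tau_adapt = 0.05, 0.5    # membrane and adaptation time constants (s)
    tau_astro = 2.0               # assumed astrocyte calcium time constant (s)
    w0, beta, drive = 2.0, 2.5, 1.0
    x = np.array([0.1, 0.0])      # membrane states
    adapt = np.zeros(2)           # adaptation states
    ca = 0.0                      # astrocyte calcium level (dimensionless)
    rates = np.zeros((steps, 2))
    for t in range(steps):
        y = np.maximum(x, 0.0)                   # rectified firing rates
        ca += dt / tau_astro * (y.sum() - ca)    # astrocyte integrates activity
        w = w0 * (1.0 + 0.5 * ca)                # gliotransmission scales inhibition
        # each neuron is driven, inhibited by the other, and self-adapting
        x += dt / tau * (-x + drive - w * y[::-1] - beta * adapt)
        adapt += dt / tau_adapt * (y - adapt)
        rates[t] = y
    return rates

rhythm = simulate_cpg()
print("peak firing rates over the last two seconds:", rhythm[-2000:].max(axis=0))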

The robot can work almost 24/7, carrying out experiments by itself. The automated scientist – the first of its kind – can make its own decisions about which chemistry experiments to perform next, and has already discovered a new catalyst.

With humanoid dimensions, and working in a standard laboratory, it uses instruments much like a human does. Unlike a real person, however, this 400 kg robot has infinite patience, and works for 21.5 hours each day, pausing only to recharge its battery.

This new technology – reported in the journal Nature and featured on the front cover – is designed to tackle problems of a scale and complexity that are currently beyond our grasp. New drug formulations could be autonomously discovered, for example, by searching vast and unexplored chemical spaces.

Dimensionality reduction is an unsupervised learning technique.

Nevertheless, it can be used as a data transform pre-processing step for supervised learning algorithms on classification and regression predictive modeling datasets.

There are many dimensionality reduction algorithms to choose from and no single best algorithm for all cases. Instead, it is a good idea to explore a range of dimensionality reduction algorithms and different configurations for each algorithm.
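As a concrete illustration, the sketch below (assuming scikit-learn is available; the synthetic dataset, logistic regression model, and component counts are arbitrary choices rather than a recommendation) wraps PCA as a pre-processing transform inside a pipeline and compares several configurations by cross-validating the downstream classifier.

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic classification data standing in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=30,
                           n_informative=10, random_state=7)

# Try several numbers of retained components; no single choice wins everywhere,
# so the reduced representation is judged through the downstream model.
for n_components in (5, 10, 20):
    model = Pipeline([
        ("reduce", PCA(n_components=n_components)),   # unsupervised transform
        ("clf", LogisticRegression(max_iter=1000)),   # supervised learner
    ])
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"PCA({n_components:2d}) -> accuracy {scores.mean():.3f}")

Because the reduction step sits inside the pipeline, it is fit only on the training folds of each cross-validation split, which avoids leaking information from the held-out data.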

Last week the President's Council of Advisors on Science and Technology (PCAST) met (via webinar) to review policy recommendations around three sub-committee reports: 1) Industries of the Future (IotF), chaired by Dario Gil (director of research, IBM); 2) Meeting STEM Education and Workforce Needs, chaired by Catherine Bessant (CTO, Bank of America); and 3) New Models of Engagement for Federal/National Laboratories in the Multi-Sector R&D Enterprise, chaired by Dr. A.N. Sreeram (SVP, CTO, Dow Corp.).

Yesterday, the full report (Recommendations For Strengthening American Leadership In Industries Of The Future) was issued, and it is fascinating and wide-ranging. To give you a sense of its scope, here are three highlights taken from the executive summary of the full report:

Quantum information scientists have introduced a new method for machine learning classification on quantum computers. The non-linear quantum kernels in a quantum binary classifier provide new insights for improving the accuracy of quantum machine learning, which is deemed capable of outperforming current AI technology.

The research team, led by Professor June-Koo Kevin Rhee from the School of Electrical Engineering, proposed a quantum classifier based on quantum state fidelity, using a different initial state and replacing the Hadamard classification with a swap test. Unlike the conventional approach, this method is expected to significantly enhance classification when the training dataset is small, by exploiting the quantum advantage in finding non-linear features in a large feature space.

Quantum machine learning holds promise as one of the most important applications of quantum computing. In machine learning, one fundamental problem for a wide range of applications is classification: the task of recognizing patterns in labeled training data in order to assign a label to new, previously unseen data. The kernel method has been an invaluable classification tool for identifying non-linear relationships in complex data.
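As a rough, classical illustration of the fidelity idea (not the team's quantum circuit), the sketch below amplitude-encodes feature vectors as normalized states, uses the squared overlap |⟨x|y⟩|² as the kernel, which is the quantity a swap test estimates on quantum hardware, and assigns a test sample to the class whose training samples have the higher mean fidelity. The function names and the toy data are illustrative assumptions.

import numpy as np

# Classical simulation of a fidelity-based ("swap test") binary classifier.
# Features are amplitude-encoded into normalized state vectors, and the kernel
# between two samples is the state fidelity |<x|y>|^2, which a swap test
# would estimate on quantum hardware.

def encode(x):
    """Amplitude-encode a real feature vector as a normalized state vector."""
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def fidelity(a, b):
    """Squared overlap |<a|b>|^2, the quantity a swap test measures."""
    return np.abs(np.dot(encode(a), encode(b))) ** 2

def classify(x, train_X, train_y):
    """Assign the label whose training samples have the highest mean fidelity to x."""
    scores = {}
    for label in np.unique(train_y):
        members = train_X[train_y == label]
        scores[label] = np.mean([fidelity(x, m) for m in members])
    return max(scores, key=scores.get)

# Tiny toy dataset: two classes pointing in different directions in feature space.
train_X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
train_y = np.array([0, 0, 1, 1])
print(classify([0.8, 0.3], train_X, train_y))  # expected: 0
print(classify([0.2, 0.7], train_X, train_y))  # expected: 1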