
Dimensionality reduction is an unsupervised learning technique.

Nevertheless, it can be used as a data-transform pre-processing step for supervised learning algorithms on classification and regression predictive modeling datasets.

There are many dimensionality reduction algorithms to choose from and no single best algorithm for all cases. Instead, it is a good idea to explore a range of dimensionality reduction algorithms and different configurations for each algorithm.
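
As a rough illustration of that kind of exploration, the sketch below (assuming scikit-learn, a synthetic dataset, and an arbitrary choice of PCA, truncated SVD, and Isomap) compares several reduction transforms as a pre-processing step inside a supervised pipeline:

```python
# Minimal sketch: comparing several dimensionality reduction transforms
# as a pre-processing step for a supervised classifier.
# The dataset, algorithms, and target dimension are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic dataset with 20 features, only 10 of them informative.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=10, random_state=7)

# Candidate dimensionality reduction algorithms, each projecting to 10 dimensions.
reducers = {
    "pca": PCA(n_components=10),
    "svd": TruncatedSVD(n_components=10),
    "isomap": Isomap(n_components=10),
}

for name, reducer in reducers.items():
    # The reduction step is fit only on the training folds inside cross-validation.
    model = Pipeline([("reduce", reducer),
                      ("clf", LogisticRegression(max_iter=1000))])
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```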

Last week the President's Council of Advisors on Science and Technology (PCAST) met (via webinar) to review policy recommendations around three subcommittee reports: 1) Industries of the Future (IotF), chaired by Dario Gil (director of research, IBM); 2) Meeting STEM Education and Workforce Needs, chaired by Catherine Bessant (CTO, Bank of America); and 3) New Models of Engagement for Federal/National Laboratories in the Multi-Sector R&D Enterprise, chaired by Dr. A.N. Sreeram (SVP and CTO, Dow).

Yesterday, the full report (Recommendations For Strengthening American Leadership In Industries Of The Future) was issued and it is fascinating and wide-ranging. To give you a sense of the scope, here are three highlights taken from the executive summary of the full report:

Quantum information scientists have introduced a new method for machine learning classification on quantum computers. Non-linear quantum kernels in a quantum binary classifier offer new insights for improving the accuracy of quantum machine learning, which is expected to outperform current classical AI techniques on some tasks.

The research team, led by Professor June-Koo Kevin Rhee from the School of Electrical Engineering, proposed a quantum classifier based on quantum state fidelity, using a different initial state and replacing the Hadamard classification with a swap test. Unlike the conventional approach, this method is expected to significantly improve classification when the training dataset is small, by exploiting the quantum advantage in finding non-linear features in a large feature space.
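
The swap test at the heart of that fidelity-based approach can be illustrated classically. The toy NumPy sketch below is not the authors' circuit (the states and qubit count are made up); it only shows how the ancilla's probability of reading 0 encodes the fidelity |<a|b>|^2 between two states:

```python
# Toy simulation of the swap test, which estimates the fidelity |<a|b>|^2
# between two quantum states -- the quantity a fidelity-based classifier
# compares between a test state and labeled training states.
import numpy as np

def random_state(n_qubits, rng):
    """Return a random normalized state vector on n_qubits."""
    dim = 2 ** n_qubits
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def swap_test_p0(a, b):
    """Probability of measuring the ancilla in |0> after a swap test.
    P(0) = (1 + |<a|b>|^2) / 2, so fidelity = 2*P(0) - 1."""
    fidelity = abs(np.vdot(a, b)) ** 2
    return (1.0 + fidelity) / 2.0

rng = np.random.default_rng(0)
a, b = random_state(2, rng), random_state(2, rng)
p0 = swap_test_p0(a, b)
print("fidelity recovered from swap-test statistics:", 2 * p0 - 1)
```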

Quantum machine learning holds promise as one of the key applications of quantum computing. In machine learning, classification is a fundamental problem across a wide range of applications: the task of recognizing patterns in labeled training data in order to assign a label to new, previously unseen data. The kernel method has been an invaluable classification tool for identifying non-linear relationships in complex data.
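
For context, here is a brief classical illustration of the kernel method the passage refers to, using scikit-learn's SVM on a synthetic, non-linearly separable dataset (the data and parameters are illustrative only):

```python
# Sketch of the classical kernel method: an SVM with a non-linear (RBF)
# kernel separating data that a linear decision function cannot.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = SVC(kernel="linear").fit(X_train, y_train)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X_train, y_train)

# The RBF kernel captures the curved class boundary of the two moons.
print("linear kernel accuracy:", linear.score(X_test, y_test))
print("RBF kernel accuracy:", rbf.score(X_test, y_test))
```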

Astrophysicists have used AI to discover 250 new stars in the Milky Way, which they believe were born outside the galaxy.

Caltech researcher Lina Necib named the collection Nyx, after the Greek goddess of the night. She suspects the stars are remnants of a dwarf galaxy that merged with the Milky Way many moons ago.

To develop the AI, Necib and her team first tracked stars across a simulated galaxy created by the Feedback in Realistic Environments (FIRE) project. They labeled the stars as either born in the host galaxy, or formed through galaxy mergers. These labels were used to train a deep learning model to spot where a star was born.
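
A hedged sketch of that general workflow, using synthetic placeholder features and labels rather than the FIRE simulation outputs, might look like this:

```python
# Illustrative sketch only: train a classifier on simulated stars labeled as
# born in the host galaxy (0) or accreted through a merger (1), using
# kinematic features. The data, feature names, and labels below are
# placeholders, not the FIRE project's outputs or the team's actual model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n = 5000
# Hypothetical features: 3D position (kpc) and 3D velocity (km/s) per star.
positions = rng.normal(scale=10.0, size=(n, 3))
velocities = rng.normal(scale=100.0, size=(n, 3))
X = np.hstack([positions, velocities])
# Placeholder labels: fast-moving stars stand in for the "accreted" class.
y = (np.linalg.norm(velocities, axis=1)
     + rng.normal(scale=20.0, size=n) > 190).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy on the synthetic labels:", model.score(X_test, y_test))
```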

SHANGHAI/BEIJING — U.S. electric vehicle maker Tesla Inc is “very close” to achieving level 5 autonomous driving technology, Chief Executive Elon Musk said on Thursday, referring to the capability to navigate roads without any driver input.

Musk added that he was confident Tesla would attain basic functionality of the technology this year, in remarks made via a video message at the opening of Shanghai’s annual World Artificial Intelligence Conference (WAIC).

The California-based automaker currently builds cars with its Autopilot driver-assistance system.