
“Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly,” said Robert Kent.


How can machine learning be made more efficient in the future? This is the question a recent study published in Nature Communications hopes to address, as a team of researchers from The Ohio State University investigated creating digital twins (copies) that can be used to improve machine learning-based controllers, such as those currently used in self-driving cars. These controllers require large amounts of computing power and are often challenging to use. The study could help researchers better understand how future machine learning algorithms can achieve better control and efficiency, improving the products built on them.

“The problem with most machine learning-based controllers is that they use a lot of energy or power, and they take a long time to evaluate,” said Robert Kent, who is a graduate student in the Department of Physics at The Ohio State University and lead author of the study. “Developing traditional controllers for them has also been difficult because chaotic systems are extremely sensitive to small changes.”

For the study, the researchers created a fingertip-sized digital twin that can function without an internet connection, with the goal of improving the productivity and capabilities of a machine learning-based controller. In the end, the researchers found that the controller's power needs decreased thanks to a machine learning method known as reservoir computing, which involves reading in data and mapping it to a target output. According to the researchers, this new method can be used to simplify complex systems, including self-driving cars, while decreasing the amount of power and energy required to run them.
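To see why reservoir computing is so cheap to train, consider a minimal sketch of an echo state network, the classic reservoir-computing architecture. This is an illustration of the general technique, not the Ohio State team's actual system; the sizes, scaling constants, and toy sine-prediction task below are all assumptions chosen for demonstration. The key point is that the input and reservoir weights are fixed at random and never trained; "training" is a single linear least-squares solve for the readout layer, rather than many rounds of gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration
n_inputs, n_reservoir = 1, 100

# Fixed random input and reservoir weights -- these are never trained
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave
t = np.linspace(0, 8 * np.pi, 400)
u_seq, y_seq = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u_seq)

# "Training" is one linear least-squares solve for the readout weights
W_out, *_ = np.linalg.lstsq(X, y_seq, rcond=None)

pred = X @ W_out
mse = np.mean((pred - y_seq) ** 2)
print(mse)  # small mean-squared error on the toy task
```

Because only the readout is fit, and by a closed-form solve at that, the compute and energy cost of training is a tiny fraction of what a deep network trained by backpropagation would need, which is the efficiency advantage the researchers describe.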

Microsoft is said to be building an OpenAI competitor despite its multi-billion-dollar partnership with the firm — and according to at least one insider, it’s using GPT-4 data to do so.

First reported by The Information, the new large language model (LLM) is apparently called MAI-1, and an inside source told the outlet that Microsoft is using GPT-4 and public information from the web to train it.

MAI-1 may also be trained on datasets from Inflection, the startup previously run by Google DeepMind cofounder Mustafa Suleyman before he joined Microsoft as the CEO of its AI department earlier this year. When it hired Suleyman, Microsoft also brought over most of Inflection’s staff and folded them into Microsoft AI.


The demonstration is a key milestone in the Air Force Research Laboratory’s Defense Experimentation Using Commercial Space Internet, or DEUCSI — a program launched in 2018 to explore augmenting military communications by leveraging the growing commercial satellite internet industry.

A group of Japanese telecommunications firms has developed a high-speed 6G wireless device that can carry data at up to 20 times the speed of 5G.

The device can transmit data at 100 gigabits per second (Gbps), at distances up to 330 feet (100 meters).
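The reported figures can be put in perspective with some back-of-the-envelope arithmetic. The 5G peak rate and the 50 GB payload size below are illustrative assumptions (a 5 Gbps peak is consistent with the article's "20 times" comparison), and protocol overhead is ignored:

```python
link_gbps = 100   # reported sub-terahertz link rate
five_g_gbps = 5   # assumed 5G peak rate, consistent with the 20x claim

def seconds_to_send(size_gigabytes, rate_gbps):
    """Time to move a payload, ignoring protocol overhead."""
    return size_gigabytes * 8 / rate_gbps  # gigabytes -> gigabits

payload_gb = 50  # e.g. a large game download
print(seconds_to_send(payload_gb, link_gbps))    # 4.0 seconds
print(seconds_to_send(payload_gb, five_g_gbps))  # 80.0 seconds
```

In other words, a download that would take over a minute at the assumed 5G peak rate would finish in a few seconds at 100 Gbps.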

Four firms, namely DOCOMO, NTT Corporation, NEC Corporation, and Fujitsu, formed a consortium for the project. Since 2021, these companies have collaborated on research and development concerning sub-terahertz devices, foreseeing the dawn of the 6G era.