
New 6G Networks Are in the Works. Can They Destroy Dead Zones for Good?

This summer the federal government took steps to boost connectivity by expanding existing broadband infrastructure. In late June the Biden administration announced a $42.45 billion commitment to the Broadband Equity, Access, and Deployment (BEAD) program, a federal initiative to provide all U.S. residents with reliable high-speed Internet access. The project emphasizes broadband connectivity, but some researchers suggest a more powerful cellular connection could eventually sidestep the need for wired Internet.

6G is so early in its development that it is not yet clear how fast the network will be. Each new generation of wireless technology is defined by the United Nations’ International Telecommunication Union (ITU) as having a specific range of upload and download speeds. These standards have not yet been set for 6G—the ITU will likely do so late next year—but industry experts expect it to be anywhere from 10 to 1,000 times faster than current 5G networks. It will achieve this by using higher-frequency radio waves than its predecessors, providing a faster connection with fewer network delays.
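
For a rough sense of what that range implies, the quick calculation below projects hypothetical 6G peak rates from the ITU's published 5G (IMT-2020) peak download target of 20 Gbps; the multipliers are the industry expectations cited above, not any official 6G requirement.

```python
# Illustrative only: hypothetical 6G peak rates projected from an assumed
# 5G baseline. The 20 Gbps figure is the IMT-2020 (5G) peak download target;
# official 6G requirements have not yet been set by the ITU.
ASSUMED_5G_PEAK_GBPS = 20

for multiplier in (10, 100, 1_000):
    projected_gbps = ASSUMED_5G_PEAK_GBPS * multiplier
    print(f"{multiplier:>5}x faster -> roughly {projected_gbps:,} Gbps "
          f"({projected_gbps / 1_000:.1f} Tbps)")
```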

No matter how fast the new network turns out to be, it could enable futuristic technology, according to Lingjia Liu, a leading 6G researcher and a professor of electrical and computer engineering at Virginia Tech. “Wi-Fi provides good service, but 6G is being designed to provide even better service than your home router, especially in the latency department, to address the growing remote workforce,” Liu says. This would likely result in a wave of new applications that are unfathomable at current network speeds. For example, your phone could serve as a router, self-driving cars may be able to communicate with one another almost instantaneously, and mobile devices might become completely hands-free. “The speed of 6G will enable applications that we may not even imagine today. The goal for the industry is to have the global coverage and support ready for those applications when they come,” Liu says.

Scaling GAIA-1: 9-billion parameter generative world model for autonomous driving

GAIA-1 is a cutting-edge generative world model built for autonomous driving. A world model learns representations of the environment and its future dynamics, providing a structured understanding of the surroundings that can be leveraged for making informed decisions when driving. Predicting future events is a fundamental and critical aspect of autonomous systems. Accurate future prediction enables autonomous vehicles to anticipate and plan their actions, enhancing safety and efficiency on the road. Incorporating world models into driving models yields the potential to enable them to understand human decisions better and ultimately generalise to more real-world situations.

GAIA-1 is a model that leverages video, text and action inputs to generate realistic driving videos and offers fine-grained control over ego-vehicle behaviour and scene features. Due to its multi-modal nature, GAIA-1 can generate videos from many prompt modalities and combinations.


GAIA-1 can generate videos by performing a future rollout starting from a video prompt. These future rollouts can be further conditioned on actions to influence particular behaviours of the ego-vehicle (e.g. steer left), or on text to drive a change in some aspects of the scene (e.g. change the colour of the traffic light). For speed and curvature, we condition GAIA-1 by passing the sequence of future speed and/or curvature values. GAIA-1 can also generate realistic videos from text prompts alone, or by simply drawing samples from its prior distribution (fully unconditional generation).
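
The pseudocode below is a loose, hypothetical sketch of that prompting interface, written only to make the conditioning options concrete; the class and field names (Prompt, WorldModel, rollout) are invented for illustration and are not Wayve's actual API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical stand-in for a GAIA-1-style generative world model.
# All names are illustrative; this is not Wayve's code.
@dataclass
class Prompt:
    video: Optional[list] = None       # past camera frames to start the rollout from
    text: Optional[str] = None         # e.g. "change the traffic light colour"
    speed: list = field(default_factory=list)      # future ego speeds to condition on
    curvature: list = field(default_factory=list)  # future path curvatures (e.g. steer left)

class WorldModel:
    def rollout(self, prompt: Prompt, num_frames: int = 32) -> list:
        """Autoregressively generate future video frames conditioned on the prompt."""
        raise NotImplementedError  # placeholder for the learned model

model = WorldModel()

# A future rollout from a video prompt, steered left via curvature conditioning.
steer_left = Prompt(video=["frame_0", "frame_1"], curvature=[0.05] * 32)

# A text-only prompt, and fully unconditional generation (sampling the prior).
text_only = Prompt(text="a rainy motorway at dusk")
unconditional = Prompt()
```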

AI is getting better at hurricane forecasting

Hurricane Lee wasn’t bothering anyone in early September, churning far out at sea somewhere between Africa and North America. A wall of high pressure stood in its westward path, poised to deflect the storm away from Florida and send it in a grand arc to the northeast. Heading where, exactly? It was 10 days out from the earliest possible landfall—eons in weather forecasting—but meteorologists at the European Centre for Medium-Range Weather Forecasts, or ECMWF, were watching closely. The tiniest uncertainties could make the difference between a rainy day in Scotland and serious trouble for the US Northeast.

Typically, weather forecasters would rely on models of atmospheric physics to make that call. This time, they had another tool: a new generation of AI-based weather models developed by chipmaker Nvidia, Chinese tech giant Huawei, and Google’s AI unit DeepMind.

National Security Agency is starting an artificial intelligence security center

The National Security Agency is starting an artificial intelligence security center, a crucial mission as AI capabilities are increasingly acquired, developed and integrated into U.S. defense and intelligence systems, the agency’s outgoing director announced Thursday.

Army Gen. Paul Nakasone said the center would be incorporated into the NSA’s Cybersecurity Collaboration Center, where it works with private industry and international partners to harden the U.S. defense-industrial base against threats from adversaries led by China and Russia.

“We maintain an advantage in AI in the United States today. That AI advantage should not be taken for granted,” Nakasone said at the National Press Club, emphasizing the threat from Beijing in particular.

Researchers Develop AI Model to Improve Tumor Removal Accuracy During Breast Cancer Surgery

Kristalyn Gallagher, DO, Kevin Chen, MD, and Shawn Gomez, EngScD, in the UNC School of Medicine have developed an AI model that can predict whether or not cancerous tissue has been fully removed from the body during breast cancer surgery.

Artificial intelligence (AI) and machine learning tools have received a lot of attention recently, with the majority of discussions focusing on their proper use. However, this technology has a wide range of practical applications, from predicting natural disasters to addressing racial inequalities and, now, assisting in cancer surgery.

A new clinical and research partnership between the UNC Department of Surgery, the Joint UNC-NCSU Department of Biomedical Engineering, and the UNC Lineberger Comprehensive Cancer Center has created an AI model that can predict whether or not cancerous tissue has been fully removed from the body during breast cancer surgery. Their findings were published in Annals of Surgical Oncology.
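
The article does not describe the model's inputs or architecture, so the snippet below should be read only as a generic illustration of how such a task can be framed: a binary classifier over images of the excised specimen that outputs the probability that the margins are clear. It is not the UNC team's published method.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic illustration of a margin-status classifier: "margins clear" vs.
# "residual tumor likely". This is NOT the published UNC model.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single logit output
backbone.eval()

def predict_margin_clear(specimen_image: torch.Tensor, threshold: float = 0.5) -> bool:
    """specimen_image: preprocessed (3, H, W) tensor of the excised tissue."""
    with torch.no_grad():
        logit = backbone(specimen_image.unsqueeze(0))
        prob_clear = torch.sigmoid(logit).item()
    return prob_clear >= threshold
```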

AI Designs Unique Walking Robot in Seconds

Summary: Pioneering artificial intelligence (AI) has synthesized the design of a functional walking robot in a matter of seconds, a rapid-fire process that stands in stark contrast to nature’s billions of years of evolution.

This AI, which runs on a modest personal computer, crafts entirely novel structures from scratch, distinguishing it from other AI models that rely on colossal datasets and high-powered computing. The robot, which emerged from the simple prompt “design a walker,” evolved from an immobile block into a bizarre, hole-riddled, three-legged entity capable of slow, steady locomotion.
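
The summary does not spell out the underlying algorithm, so the toy loop below is only a sketch of the general "start from a block, keep changes that perform better" idea it describes; the voxel grid, mutation rule, and stand-in fitness score are invented for illustration and do not reflect the researchers' actual method.

```python
import random

# Toy illustration of iterative design improvement, in the spirit of the
# "immobile block -> walking robot" progression described above. The grid,
# mutation rule, and fitness score are invented; this is not the paper's method.
GRID = 8  # the body is an 8x8 grid of voxels: 1 = material, 0 = hole

def fitness(body):
    """Stand-in score. A real system would simulate the body and measure how
    far it walks; here we simply reward partially hollowed, asymmetric shapes
    so the loop has something to optimize."""
    filled = sum(sum(row) for row in body)
    if filled <= GRID:                      # too little material to stand
        return 0.0
    holes = GRID * GRID - filled
    columns = [sum(col) for col in zip(*body)]
    asymmetry = sum(abs(sum(row) - col) for row, col in zip(body, columns))
    return 0.1 * holes + 0.05 * asymmetry

def mutate(body):
    child = [row[:] for row in body]
    r, c = random.randrange(GRID), random.randrange(GRID)
    child[r][c] ^= 1                        # flip one voxel: add or remove material
    return child

best = [[1] * GRID for _ in range(GRID)]    # start from a solid, immobile block
for _ in range(2000):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):
        best = candidate

for row in best:
    print("".join("#" if v else "." for v in row))
```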

Representing more than mere mechanical achievement, this AI-designed organism may mark a paradigm shift, offering a novel, unconstrained perspective on design, innovation, and potential applications in fields ranging from search-and-rescue to medical nanotechnology.

Compact Gene-Editing Enzyme Could Enable More Effective Clinical Therapies

The investigators carried out animal trials with the engineered AsCas12f system, partnering it with other genes and administering it to live mice. The encouraging results indicated that engineered AsCas12f has the potential to be used for human gene therapies, such as treating hemophilia.

The team discovered numerous potentially effective combinations for engineering an improved AsCas12f gene-editing system, and acknowledged that the selected mutations may not have been the optimal choice among all the available combinations. As a next step, computational modeling or machine learning could be used to sift through the combinations and predict which might offer even better improvements.
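
The authors raise this only as a possible next step, so the sketch below is hypothetical: it fits a simple regression model to invented editing-efficiency measurements for a handful of tested mutation combinations and uses it to rank untested ones. The mutation names and values are made up for illustration.

```python
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical illustration of using ML to sift through mutation combinations.
# Mutation names and efficiency values are invented, not from the paper.
candidate_mutations = ["mutA", "mutB", "mutC", "mutD", "mutE"]

def encode(combo):
    """One-hot encode which candidate mutations a combination contains."""
    return [int(m in combo) for m in candidate_mutations]

# Pretend these combinations were measured experimentally (editing efficiency, %).
tested = {("mutA",): 12.0, ("mutA", "mutB"): 31.0,
          ("mutC", "mutD"): 8.5, ("mutB", "mutE"): 19.0}

X = np.array([encode(c) for c in tested])
y = np.array(list(tested.values()))
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Rank every untested combination by predicted efficiency.
untested = [c for r in range(1, len(candidate_mutations) + 1)
            for c in combinations(candidate_mutations, r) if c not in tested]
ranked = sorted(untested, key=lambda c: model.predict([encode(c)])[0], reverse=True)
print("top predicted combinations:", ranked[:3])
```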

And as the authors noted, by applying the same approach to other Cas enzymes, it may be possible to generate efficient genome-editing enzymes capable of targeting a wide range of genes. “The compact size of AsCas12f offers an attractive feature for AAV-deliverable gRNA and partner genes, such as base editors and epigenome modifiers. Therefore, our newly engineered AsCas12f systems could be a promising genome-editing platform … Moreover, with suitable adaptations to the evaluation system, this approach can be applied to enzymes beyond the scope of genome editing.”

MilliMobile is a tiny, self-driving robot powered only by light and radio waves

Small mobile robots carrying sensors could perform tasks like detecting gas leaks or tracking warehouse inventory. But moving robots demands a lot of energy, and batteries, the typical power source, limit lifetime and raise environmental concerns. Researchers have explored various alternatives: affixing sensors to insects, keeping charging mats nearby, or powering the robots with lasers. Each has drawbacks: insects roam, chargers limit range, and lasers can burn people’s eyes.

Researchers at the University of Washington have now created MilliMobile, a tiny, self-driving robot powered only by surrounding light or radio waves. Equipped with a solar panel-like energy harvester and four wheels, MilliMobile is about the size of a penny, weighs as much as a raisin and can move about the length of a bus (30 feet, or 10 meters) in an hour even on a cloudy day. The robot can drive on surfaces such as concrete or packed soil and carry three times its own weight in equipment like a camera or sensors. It uses onboard light sensors to move automatically toward light sources so it can run indefinitely on harvested power.
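
The article does not describe the steering logic, so the loop below is only a hedged sketch of a generic light-seeking (phototaxis) controller for a small differential-drive robot; the sensor and motor functions are invented placeholders, not MilliMobile's firmware.

```python
import time

# Hypothetical light-seeking (phototaxis) loop for a tiny differential-drive
# robot. read_light_sensor() and set_wheel_speeds() are invented placeholders
# for hardware I/O; this is not MilliMobile's actual firmware.
def read_light_sensor(side):
    """Placeholder: return light intensity from the left or right photodiode."""
    return 0.0  # replace with a real ADC read

def set_wheel_speeds(left, right):
    """Placeholder: command normalized wheel speeds in [0, 1]."""
    print(f"left={left:.2f} right={right:.2f}")

GAIN = 0.5  # how strongly to turn toward the brighter side

for _ in range(100):                      # a real robot would loop indefinitely
    diff = GAIN * (read_light_sensor("right") - read_light_sensor("left"))
    left_speed = min(1.0, max(0.0, 1.0 + min(diff, 0.0)))   # slow left wheel if light is to the left
    right_speed = min(1.0, max(0.0, 1.0 - max(diff, 0.0)))  # slow right wheel if light is to the right
    set_wheel_speeds(left_speed, right_speed)
    time.sleep(0.1)  # move intermittently, as harvested energy allows
```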

The team will present its research Oct. 2 at the ACM MobiCom 2023 conference in Madrid, Spain.

Google Maps can now tell exactly where solar panels should be installed

Google Maps can now calculate rooftops’ solar potential, track air quality, and forecast pollen counts.

The platform recently launched a range of services, including the Solar API, which analyzes aerial imagery and weather patterns to estimate rooftops’ solar potential. The tool aims to help accelerate solar panel deployment by improving accuracy and reducing the number of site visits needed.
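
As a rough illustration of how a developer might query a rooftop's solar potential, the request below targets the Solar API's buildingInsights endpoint; the endpoint path, parameter names, and response fields reflect Google's public documentation as best understood at the time of writing and should be verified (along with a valid API key) before use.

```python
import requests

# Hedged example of querying the Solar API for a rooftop's solar potential.
# Verify the endpoint and parameters against Google's current documentation.
API_KEY = "YOUR_API_KEY"           # placeholder
LAT, LNG = 37.4220, -122.0841      # example coordinates

resp = requests.get(
    "https://solar.googleapis.com/v1/buildingInsights:findClosest",
    params={
        "location.latitude": LAT,
        "location.longitude": LNG,
        "key": API_KEY,
    },
    timeout=30,
)
resp.raise_for_status()
insights = resp.json()
# The response is expected to include a "solarPotential" section describing
# the roof's usable area and estimated panel capacity.
print(list(insights.get("solarPotential", {}).keys()))
```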

As seasonal allergies get worse every year, the Pollen API provides updated information on the most common allergens in 65 countries, using a mix of machine learning and wind-pattern data. Similarly, the Air Quality API provides detailed information on local air quality by drawing on data from multiple sources, such as government monitoring stations, satellites, and live traffic, and it can also show areas affected by wildfires.