
The National Security Agency is starting an artificial intelligence security center — a crucial mission as AI capabilities are increasingly acquired, developed and integrated into U.S. defense and intelligence systems, the agency’s outgoing director announced Thursday.

Army Gen. Paul Nakasone said the center would be incorporated into the NSA’s Cybersecurity Collaboration Center, where the agency works with private industry and international partners to harden the U.S. defense-industrial base against threats from adversaries led by China and Russia.

“We maintain an advantage in AI in the United States today. That AI advantage should not be taken for granted,” Nakasone said at the National Press Club, emphasizing the threat from Beijing in particular.

Kristalyn Gallagher, DO, Kevin Chen, MD, and Shawn Gomez, EngScD, at the UNC School of Medicine have developed an AI model that can predict whether cancerous tissue has been fully removed from the body during breast cancer surgery.

Artificial intelligence (AI) and machine learning tools have received a lot of attention recently, with most of the discussion focusing on their proper use. However, the technology has a wide range of practical applications, from predicting natural disasters to addressing racial inequalities and, now, assisting in cancer surgery.

A new clinical and research partnership between the UNC Department of Surgery, the Joint UNC-NCSU Department of Biomedical Engineering, and the UNC Lineberger Comprehensive Cancer Center has created an AI model that can predict whether or not cancerous tissue has been fully removed from the body during breast cancer surgery. Their findings were published in Annals of Surgical Oncology.
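
At its core, the model’s task is binary classification: given measurements from the excised specimen, predict whether any cancerous tissue remains. The sketch below illustrates only that framing, on entirely synthetic placeholder data; the published model’s actual inputs, features, and architecture are not described here and may differ substantially.

```python
# Generic sketch of margin-status prediction framed as binary
# classification. Features and labels are hypothetical placeholders;
# the published model's real inputs and architecture may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder specimen features (e.g., imaging-derived measurements)
# and labels: 1 = residual tumor at the margin, 0 = margins clear.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```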

Summary: A pioneering artificial intelligence (AI) system has synthesized the design of a functional walking robot in a matter of seconds, a rapid-fire evolution that stands in stark contrast to nature’s billion-year journey.

This AI, which runs on a modest personal computer, designs entirely new structures from scratch, distinguishing it from other AI models that rely on colossal datasets and high-powered computing. From a straightforward “design a walker” prompt, the robot evolved from an immobile block into a bizarre, hole-riddled, three-legged entity capable of slow, steady locomotion.

More than a mere mechanical achievement, this AI-designed organism may mark a paradigm shift, offering a novel, unconstrained perspective on design and innovation, with potential applications in fields ranging from search-and-rescue to medical nanotechnology.
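
One way to picture the “design a walker” workflow is as a propose-simulate-keep loop over body plans. The toy hill-climbing sketch below illustrates that loop in the abstract, with a stand-in fitness function; it is not the researchers’ actual algorithm or simulator.

```python
# Toy design loop: mutate a body plan, score it with a stand-in
# "simulator," keep improvements. Purely illustrative; the real
# system's representation and physics simulator are not shown here.
import numpy as np

rng = np.random.default_rng(1)

def simulate_walk_distance(body: np.ndarray) -> float:
    """Stand-in fitness that rewards porous, asymmetric body plans.

    A real system would run a physics simulation of locomotion.
    """
    porosity = 1.0 - body.mean()
    half = body.size // 2
    asymmetry = abs(body[:half].mean() - body[half:].mean())
    return porosity * asymmetry

body = np.ones(64)  # start from a solid, immobile block
best = simulate_walk_distance(body)
for _ in range(2000):
    candidate = body.copy()
    flip = rng.integers(body.size, size=3)  # carve or fill a few voxels
    candidate[flip] = 1.0 - candidate[flip]
    score = simulate_walk_distance(candidate)
    if score > best:  # hill climbing: keep only improvements
        body, best = candidate, score
print(f"best walk score: {best:.3f}, material used: {body.mean():.2f}")
```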

The investigators carried out animal trials with the engineered AsCas12f system, partnering it with other genes and administering it to live mice. The encouraging results indicated that engineered AsCas12f has the potential to be used for human gene therapies, such as treating hemophilia.

The team discovered numerous potentially effective combinations for engineering an improved AsCas12f gene-editing system, while acknowledging that the selected mutations may not be optimal among all the possible combinations. As a next step, computational modeling or machine learning could be used to sift through the combinations and predict which might offer even better improvements.
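
To make that suggestion concrete, here is a minimal sketch of the triage idea: fit a model on combinations whose editing efficiency has been measured, then rank the untested ones. The encoding, data, and model choice below are hypothetical illustrations, not the study’s pipeline.

```python
# Minimal sketch of ML-guided triage of mutation combinations.
# All names and data are hypothetical, not from the AsCas12f study.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

N_SITES = 10  # candidate mutation sites (hypothetical)

# Encode each tested combination as a 0/1 vector over mutation sites,
# paired with its measured editing efficiency (placeholder values).
X_tested = rng.integers(0, 2, size=(200, N_SITES))
y_tested = rng.random(200)  # stand-in for measured efficiencies

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tested, y_tested)

# Score every possible combination and surface the top candidates.
all_combos = np.array(list(itertools.product([0, 1], repeat=N_SITES)))
scores = model.predict(all_combos)
top = all_combos[np.argsort(scores)[::-1][:5]]
print("Top predicted combinations:\n", top)
```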

And as the authors noted, by applying the same approach to other Cas enzymes, it may be possible to generate efficient genome-editing enzymes capable of targeting a wide range of genes. “The compact size of AsCas12f offers an attractive feature for AAV-deliverable gRNA and partner genes, such as base editors and epigenome modifiers. Therefore, our newly engineered AsCas12f systems could be a promising genome-editing platform … Moreover, with suitable adaptations to the evaluation system, this approach can be applied to enzymes beyond the scope of genome editing.”

Small mobile robots carrying sensors could perform tasks like detecting gas leaks or tracking warehouse inventory. But moving robots demands a lot of energy, and batteries, the typical power source, limit lifetime and raise environmental concerns. Researchers have explored various alternatives: affixing sensors to insects, keeping charging mats nearby, or powering the robots with lasers. Each has drawbacks: insects roam, chargers limit range, and lasers can burn people’s eyes.

Researchers at the University of Washington have now created MilliMobile, a tiny, self-driving robot powered only by surrounding light or radio waves. Equipped with a solar panel-like energy harvester and four wheels, MilliMobile is about the size of a penny, weighs as much as a raisin and can move about the length of a bus (30 feet, or 10 meters) in an hour even on a cloudy day. The robot can drive on surfaces such as concrete or packed soil and carry three times its own weight in equipment, like a camera or sensors. It uses onboard light sensors to steer automatically toward light sources so it can run indefinitely on harvested power.
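
The light-seeking behavior can be sketched as a simple phototaxis loop: compare the brightness seen on each side of the heading and turn toward the brighter side. The simulation below is a generic illustration of the idea, not MilliMobile’s firmware or API.

```python
# Toy phototaxis simulation: a robot turns toward the brighter of two
# simulated light sensors, then takes a small step forward.
import math

def phototaxis_step(x, y, heading, light_x, light_y, step=0.01):
    """One steering update toward a point light source at (light_x, light_y)."""
    def brightness(px, py):
        # Inverse-square falloff from the light source.
        d2 = (px - light_x) ** 2 + (py - light_y) ** 2
        return 1.0 / (d2 + 1e-9)

    offset = 0.3  # sensor mounting angle in radians (hypothetical)
    lx = x + math.cos(heading + offset) * 0.01  # left sensor position
    ly = y + math.sin(heading + offset) * 0.01
    rx = x + math.cos(heading - offset) * 0.01  # right sensor position
    ry = y + math.sin(heading - offset) * 0.01
    if brightness(lx, ly) > brightness(rx, ry):
        heading += 0.1  # turn toward the brighter (left) side
    else:
        heading -= 0.1
    return x + math.cos(heading) * step, y + math.sin(heading) * step, heading

# Drive a simulated robot from the origin toward a light at (1, 1).
x, y, h = 0.0, 0.0, 0.0
for _ in range(500):
    x, y, h = phototaxis_step(x, y, h, 1.0, 1.0)
print(f"final position: ({x:.2f}, {y:.2f})")  # should approach (1, 1)
```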

The team will present its research Oct. 2 at the ACM MobiCom 2023 conference in Madrid, Spain.

Google Maps can now calculate rooftops’ solar potential, track air quality, and forecast pollen counts.

The platform recently launched a range of services, including Solar API, which combines weather patterns with data pulled from aerial imagery to help gauge rooftops’ solar potential. The tool aims to accelerate solar panel deployment by improving accuracy and reducing the number of site visits needed.
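
For readers who want to try it, a request to the Solar API’s buildingInsights endpoint looks roughly like the sketch below. The endpoint and field names reflect Google’s documentation at the time of writing and should be verified against the current reference.

```python
# Minimal sketch of querying the Solar API for a rooftop's solar
# potential. Field names follow the docs as of this writing; verify
# against the current documentation before relying on them.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
resp = requests.get(
    "https://solar.googleapis.com/v1/buildingInsights:findClosest",
    params={
        "location.latitude": 37.4450,    # example coordinates
        "location.longitude": -122.1390,
        "key": API_KEY,
    },
    timeout=30,
)
resp.raise_for_status()
potential = resp.json()["solarPotential"]
print("max panel count:", potential["maxArrayPanelsCount"])
print("max sunshine hours/year:", potential["maxSunshineHoursPerYear"])
```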

With seasonal allergies getting worse every year, Pollen API shows updated information on the most common allergens in 65 countries, using a mix of machine learning and wind-pattern data. Similarly, Air Quality API provides detailed information on local air quality by drawing on multiple sources, such as government monitoring stations, satellites, and live traffic, and can also show areas affected by wildfires.
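
A companion sketch for the Air Quality API’s current-conditions lookup follows the same pattern; again, the endpoint and response fields are taken from the docs as of this writing and should be treated as assumptions to verify.

```python
# Sketch of an Air Quality API current-conditions lookup at a point.
# Endpoint and response fields are assumptions based on the docs at
# the time of writing; check the current reference before use.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
resp = requests.post(
    "https://airquality.googleapis.com/v1/currentConditions:lookup",
    params={"key": API_KEY},
    json={"location": {"latitude": 40.7128, "longitude": -74.0060}},
    timeout=30,
)
resp.raise_for_status()
# Each entry in "indexes" describes an air-quality index (e.g., local AQI).
for index in resp.json().get("indexes", []):
    print(index.get("displayName"), index.get("aqi"), index.get("category"))
```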

TOKYO, Oct 4 (Reuters) — SoftBank (9984.T) CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realised within 10 years.

Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He pointed to the rapid progress of generative AI, which he said has already exceeded human intelligence in certain areas.

“It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self-learning, self-training, and self-inferencing, just like human beings.”

For more information on liver cancer treatment at Yale Medicine, visit: https://www.yalemedicine.org/stories/artificial-intelligence-liver-cancer.

With liver cancer on the rise (deaths rose 25% between 2006 and 2015, according to the CDC), doctors and researchers at the Yale Cancer Center are highly focused on finding new and better treatment options. A unique collaboration between Yale Medicine physicians and researchers and biomedical engineers from Yale’s School of Engineering uses artificial intelligence (AI) to pinpoint the specific treatment approach for each patient.

First, doctors need to understand as much as possible about a particular patient’s cancer. To this end, medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) are valuable tools for early detection, accurate diagnosis, and effective treatment of liver cancer. For every patient, physicians need to interpret and analyze these images, along with a multitude of other clinical data points, to make the treatment decisions likeliest to lead to a positive outcome. “There’s a lot of data that needs to be considered in terms of making a recommendation on how to manage a patient,” says Jeffrey Pollak, MD, Robert I. White, Jr. Professor of Radiology and Biomedical Imaging. “It can become quite complex.”

To help, researchers are developing AI tools that can tackle that vast amount of data. In this video, Julius Chapiro, MD, PhD, explains how collaboration with biomedical engineers like Lawrence Staib, PhD, facilitated the development of specialized AI algorithms that can sift through patient information, recognize important patterns, and streamline the clinical decision-making process. The ultimate goal of this research is to bridge the gap between complex clinical data and patient care. “It’s an advanced tool, just like all the others in the physician’s toolkit,” says Dr. Staib. “But this one is based on algorithms instead of a stethoscope.”

The more physicists use artificial intelligence and machine learning, the more important it becomes for them to understand why the technology works and when it fails.

The advent of ChatGPT, Bard, and other large language models (LLMs) has naturally excited everybody, including the entire physics community. There are many evolving questions for physicists about LLMs in particular and artificial intelligence (AI) in general. What do these stupendous developments in large-data technology mean for physics? How can they be incorporated into physics? What will be the role of machine learning (ML) itself in the process of physics discovery?

Before I explore the implications of those questions, I should point out there is no doubt that AI and ML will become integral parts of physics research and education. Even so, similar to the role of AI in human society, we do not know how this new and rapidly evolving technology will affect physics in the long run, just as our predecessors did not know how transistors or computers would affect physics when the technologies were being developed in the early 1950s. What we do know is that the impact of AI/ML on physics will be profound and ever evolving as the technology develops.