Compact Gene-Editing Enzyme Could Enable More Effective Clinical Therapies

The investigators carried out animal trials with the engineered AsCas12f system, partnering it with other genes and administering it to live mice. The encouraging results indicated that engineered AsCas12f has the potential to be used for human gene therapies, such as treating hemophilia.

The team discovered numerous potentially effective combinations for engineering an improved AsCas12f gene-editing system, and acknowledged that the mutations it selected may not be the optimal set among all the available combinations. As a next step, computational modeling or machine learning could be used to sift through the combinations and predict which might offer even better improvements.
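To make that idea concrete, here is a minimal sketch of how a machine-learning screen over mutation combinations might look, assuming a small set of variants with measured editing efficiencies. Every mutation name and efficiency value below is invented for illustration; this is not the team's pipeline.

```python
# Minimal sketch: predicting editing efficiency for untested mutation
# combinations from a small measured set. All names and data here are
# hypothetical; real work would use the team's actual assay results.
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestRegressor

MUTATIONS = ["D196K", "A292T", "K330R", "E356D", "S414N"]  # hypothetical sites

def encode(combo):
    """Binary feature vector: 1 if the mutation is present in the variant."""
    return [1 if m in combo else 0 for m in MUTATIONS]

# Hypothetical measured variants: (mutation set, relative editing efficiency).
measured = [
    (set(), 1.0),
    ({"D196K"}, 1.8),
    ({"A292T"}, 1.5),
    ({"D196K", "A292T"}, 2.9),
    ({"K330R", "E356D"}, 1.2),
    ({"D196K", "K330R"}, 2.1),
]

X = np.array([encode(c) for c, _ in measured])
y = np.array([eff for _, eff in measured])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Score every untested combination and rank the most promising ones.
tested = {frozenset(c) for c, _ in measured}
untested = [
    set(c)
    for r in range(1, len(MUTATIONS) + 1)
    for c in combinations(MUTATIONS, r)
    if frozenset(c) not in tested
]
scores = model.predict(np.array([encode(c) for c in untested]))
for combo, score in sorted(zip(untested, scores), key=lambda t: -t[1])[:5]:
    print(sorted(combo), round(float(score), 2))
```

In practice, a model like this is only as good as the assay data behind it, but even a rough ranking can shrink the space of combinations worth testing in the lab.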

And as the authors noted, by applying the same approach to other Cas enzymes, it may be possible to generate efficient genome-editing enzymes capable of targeting a wide range of genes. “The compact size of AsCas12f offers an attractive feature for AAV-deliverable gRNA and partner genes, such as base editors and epigenome modifiers. Therefore, our newly engineered AsCas12f systems could be a promising genome-editing platform … Moreover, with suitable adaptations to the evaluation system, this approach can be applied to enzymes beyond the scope of genome editing.”

MilliMobile is a tiny, self-driving robot powered only by light and radio waves

Small mobile robots carrying sensors could perform tasks like catching gas leaks or tracking warehouse inventory. But moving robots demands a lot of energy, and batteries, the typical power source, limit lifetime and raise environmental concerns. Researchers have explored various alternatives: affixing sensors to insects, keeping charging mats nearby, or powering the robots with lasers. Each has drawbacks: Insects roam, chargers limit range, and lasers can burn people’s eyes.

Researchers at the University of Washington have now created MilliMobile, a tiny, self-driving robot powered only by surrounding light or radio waves. Equipped with a solar panel-like energy harvester and four wheels, MilliMobile is about the size of a penny, weighs as much as a raisin, and can move about the length of a bus (30 feet, or 10 meters) in an hour, even on a cloudy day. The robot can drive on surfaces such as concrete or packed soil and carry three times its own weight in equipment, such as a camera or sensors. It uses light sensors to move automatically toward light sources so it can run indefinitely on harvested power.
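The article doesn't publish the robot's firmware, but the light-seeking behavior it describes can be sketched as a simple control loop: compare light readings on each side, steer toward the brighter one, and move only in short bursts as harvested energy becomes available. All hardware interfaces below are hypothetical stand-ins.

```python
# Minimal sketch of light-seeking ("phototaxis") steering of the kind the
# robot's light sensors enable. Hardware interfaces are hypothetical
# stand-ins; the real robot's firmware is not described in this article.
import random
import time

def read_light(side):
    """Hypothetical photodiode reading (0.0 dark .. 1.0 bright)."""
    return random.random()

def capacitor_charged():
    """Hypothetical check that the energy harvester has stored enough
    charge for one short burst of motion."""
    return random.random() > 0.5

def drive(left_wheels, right_wheels):
    """Hypothetical motor command for one short burst."""
    print(f"burst: L={left_wheels:.1f} R={right_wheels:.1f}")

def step_toward_light():
    left, right = read_light("left"), read_light("right")
    if abs(left - right) < 0.05:
        drive(left_wheels=1.0, right_wheels=1.0)   # roughly equal: go straight
    elif left > right:
        drive(left_wheels=0.3, right_wheels=1.0)   # brighter on left: turn left
    else:
        drive(left_wheels=1.0, right_wheels=0.3)   # brighter on right: turn right

# Move in short bursts whenever enough energy has been harvested,
# so the robot can keep crawling indefinitely on ambient power.
for _ in range(10):
    if capacitor_charged():
        step_toward_light()
    time.sleep(0.1)
```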

The team will present its research Oct. 2 at the ACM MobiCom 2023 conference in Madrid, Spain.

Google Maps can now tell exactly where solar panels should be installed

Google Maps can now calculate rooftops’ solar potential, track air quality, and forecast pollen counts.

The platform recently launched a range of services, such as Solar API, which combines weather patterns with data pulled from aerial imagery to help understand rooftops’ solar potential. The tool aims to accelerate solar panel deployment by improving accuracy and reducing the number of site visits needed.
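For developers, a rooftop query against the Solar API looks roughly like the sketch below. The endpoint and response fields follow Google's public documentation at launch, but treat them as assumptions to verify; YOUR_API_KEY is a placeholder.

```python
# Hedged sketch of querying the Solar API for a rooftop's solar potential.
# Endpoint and field names are taken from Google's public docs and should
# be verified; YOUR_API_KEY is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"
lat, lng = 37.4450, -122.1390  # example coordinates

resp = requests.get(
    "https://solar.googleapis.com/v1/buildingInsights:findClosest",
    params={
        "location.latitude": lat,
        "location.longitude": lng,
        "key": API_KEY,
    },
    timeout=10,
)
resp.raise_for_status()
insights = resp.json()

potential = insights.get("solarPotential", {})
print("Max panel count:", potential.get("maxArrayPanelsCount"))
print("Max sunshine hours/year:", potential.get("maxSunshineHoursPerYear"))
```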

As seasonal allergies worsen every year, Pollen API shows updated information on the most common allergens in 65 countries, using a mix of machine learning and wind-pattern data. Similarly, Air Quality API provides detailed information on local air quality by drawing on data from multiple sources, such as government monitoring stations, satellites, and live traffic, and can also show areas affected by wildfires.
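A companion sketch for the Pollen and Air Quality APIs follows, with the same caveat: the endpoints and response fields below are taken from the public documentation at launch and should be verified.

```python
# Hedged sketch for the Pollen and Air Quality APIs; endpoints and
# response fields should be checked against current documentation.
import requests

API_KEY = "YOUR_API_KEY"
lat, lng = 52.52, 13.405  # example coordinates (Berlin)

# Pollen forecast for the next day.
pollen = requests.get(
    "https://pollen.googleapis.com/v1/forecast:lookup",
    params={"key": API_KEY, "location.latitude": lat,
            "location.longitude": lng, "days": 1},
    timeout=10,
).json()
for day in pollen.get("dailyInfo", []):
    for p in day.get("pollenTypeInfo", []):
        print(p.get("displayName"), p.get("indexInfo", {}).get("category"))

# Current air-quality conditions.
air = requests.post(
    "https://airquality.googleapis.com/v1/currentConditions:lookup",
    params={"key": API_KEY},
    json={"location": {"latitude": lat, "longitude": lng}},
    timeout=10,
).json()
for idx in air.get("indexes", []):
    print(idx.get("displayName"), idx.get("aqi"), idx.get("category"))
```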

SoftBank CEO Son says artificial general intelligence will come within 10 years

TOKYO, Oct 4 (Reuters) — SoftBank (9984.T) CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realised within 10 years.

Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

“It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

Using Artificial Intelligence to Personalize Liver Cancer Treatment

For more information on liver cancer treatment or #YaleMedicine, visit: https://www.yalemedicine.org/stories/artificial-intelligence-liver-cancer.

With liver cancer on the rise (deaths rose 25% between 2006 and 2015, according to the CDC), doctors and researchers at the Yale Cancer Center are highly focused on finding new and better treatment options. A unique collaboration between Yale Medicine physicians and researchers and biomedical engineers from Yale’s School of Engineering uses artificial intelligence (AI) to pinpoint the specific treatment approach for each patient.

First, doctors need to understand as much as possible about a particular patient’s cancer. To this end, medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) are valuable tools for early detection, accurate diagnosis, and effective treatment of liver cancer. For every patient, physicians need to interpret and analyze these images, along with a multitude of other clinical data points, to make the treatment decisions likeliest to lead to a positive outcome. “There’s a lot of data that needs to be considered in terms of making a recommendation on how to manage a patient,” says Jeffrey Pollak, MD, Robert I. White, Jr. Professor of Radiology and Biomedical Imaging. “It can become quite complex.”

To help, researchers are developing AI tools that let doctors tackle that vast amount of data. In this video, Julius Chapiro, MD, PhD, explains how collaboration with biomedical engineers like Lawrence Staib, PhD, facilitated the development of specialized AI algorithms that can sift through patient information, recognize important patterns, and streamline the clinical decision-making process. The ultimate goal of this research is to bridge the gap between complex clinical data and patient care. “It’s an advanced tool, just like all the others in the physician’s toolkit,” says Dr. Staib. “But this one is based on algorithms instead of a stethoscope.”
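As a loose illustration only (not the Yale team's actual algorithms, which are not detailed here), a decision-support model of this general kind might combine imaging-derived features with clinical variables to score likely treatment response. All features and numbers below are invented.

```python
# Illustrative sketch: a decision-support model combining hypothetical
# imaging-derived features with clinical variables. Not the Yale team's
# actual method; all data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-patient features: [tumor volume (mL), enhancement score,
# lesion count, bilirubin (mg/dL), albumin (g/dL)].
X = np.array([
    [12.0, 0.8, 1, 0.9, 4.1],
    [55.0, 0.4, 3, 2.1, 3.0],
    [8.0, 0.9, 1, 0.7, 4.4],
    [70.0, 0.3, 5, 2.8, 2.7],
    [30.0, 0.6, 2, 1.2, 3.6],
    [90.0, 0.2, 4, 3.0, 2.5],
])
# 1 = responded to a given therapy on follow-up imaging, 0 = did not.
y = np.array([1, 0, 1, 0, 1, 0])

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Score a new (hypothetical) patient to support, not replace, the
# physician's treatment decision.
new_patient = np.array([[20.0, 0.7, 2, 1.0, 3.9]])
print("Predicted response probability:",
      round(float(model.predict_proba(new_patient)[0, 1]), 3))
```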

How AI and ML Will Affect Physics

The more physicists use artificial intelligence and machine learning, the more important it becomes for them to understand why the technology works and when it fails.

The advent of ChatGPT, Bard, and other large language models (LLMs) has naturally excited everybody, including the entire physics community. There are many evolving questions for physicists about LLMs in particular and artificial intelligence (AI) in general. What do these stupendous developments in large-data technology mean for physics? How can they be incorporated into physics? What will be the role of machine learning (ML) itself in the process of physics discovery?

Before I explore the implications of those questions, I should point out there is no doubt that AI and ML will become integral parts of physics research and education. Even so, similar to the role of AI in human society, we do not know how this new and rapidly evolving technology will affect physics in the long run, just as our predecessors did not know how transistors or computers would affect physics when the technologies were being developed in the early 1950s. What we do know is that the impact of AI/ML on physics will be profound and ever evolving as the technology develops.

AI co-pilot enhances human precision for safer aviation

Imagine you’re in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they’re always looking out for different things. If they’re both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot: a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking; for the machine, it relies on something called “saliency maps,” which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, helping to decipher the behavior of intricate algorithms. Through these attention markers, Air-Guardian identifies early signs of potential risks, rather than intervening only after a safety breach, as traditional autopilot systems do.
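Gradient-based saliency is one common way to compute such maps: take the gradient of the network's output with respect to the input image, and the pixels with the largest gradients are where the model's "attention" lies. The sketch below uses a stand-in network and is not the Air-Guardian team's actual method.

```python
# Minimal gradient-based saliency-map sketch; a common technique, not the
# Air-Guardian team's specific implementation.
import torch
import torch.nn as nn

# Stand-in perception network mapping a camera image to a steering output.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(8 * 8 * 8, 1),
)

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # dummy camera frame
steering = model(image)
steering.sum().backward()  # gradient of the output w.r.t. each input pixel

# Saliency: pixels whose perturbation most changes the output, i.e. where
# the network's "attention" is directed.
saliency = image.grad.abs().max(dim=1).values  # (1, 64, 64) heat map
print(saliency.shape, float(saliency.max()))
```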

Is explosive growth ahead for AI?

As we plunge head-on into the game-changing dynamic of general artificial intelligence, observers are weighing in on just how huge an impact it will have on global societies. Will it drive explosive economic growth as some economists project, or are such claims unrealistically optimistic?

Few question the potential for change that AI presents. But in a world of litigation and ethical boundaries, will AI be able to thrive?

Two researchers from Epoch, a research group evaluating the progression of artificial intelligence and its potential impacts, decided to explore arguments for and against the likelihood that innovation ushered in by AI will lead to explosive growth comparable to the Industrial Revolution of the 18th and 19th centuries.