Using Artificial Intelligence to Personalize Liver Cancer Treatment

For more information on liver cancer treatment or #YaleMedicine, visit: https://www.yalemedicine.org/stories/artificial-intelligence-liver-cancer.

With liver cancer on the rise (deaths rose 25% between 2006 and 2015, according to the CDC), doctors and researchers at the Yale Cancer Center are highly focused on finding new and better treatment options. A unique collaboration between Yale Medicine physicians and researchers and biomedical engineers from Yale’s School of Engineering uses artificial intelligence (AI) to pinpoint the specific treatment approach for each patient.

First, doctors need to understand as much as possible about a particular patient’s cancer. To this end, medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) are valuable tools for early detection, accurate diagnosis, and effective treatment of liver cancer. For every patient, physicians need to interpret and analyze these images, along with a multitude of other clinical data points, to make the treatment decisions likeliest to lead to a positive outcome. “There’s a lot of data that needs to be considered in terms of making a recommendation on how to manage a patient,” says Jeffrey Pollak, MD, Robert I. White, Jr. Professor of Radiology and Biomedical Imaging. “It can become quite complex.”

To help, researchers are developing AI tools that let doctors tackle that vast amount of data. In this video, Julius Chapiro, MD, PhD, explains how collaboration with biomedical engineers like Lawrence Staib, PhD, facilitated the development of specialized AI algorithms that can sift through patient information, recognize important patterns, and streamline the clinical decision-making process. The ultimate goal of this research is to bridge the gap between complex clinical data and patient care. “It’s an advanced tool, just like all the others in the physician’s toolkit,” says Dr. Staib. “But this one is based on algorithms instead of a stethoscope.”

How AI and ML Will Affect Physics

The more physicists use artificial intelligence and machine learning, the more important it becomes for them to understand why the technology works and when it fails.

The advent of ChatGPT, Bard, and other large language models (LLMs) has naturally excited everybody, including the entire physics community. There are many evolving questions for physicists about LLMs in particular and artificial intelligence (AI) in general. What do these stupendous developments in large-data technology mean for physics? How can they be incorporated into physics? What will be the role of machine learning (ML) itself in the process of physics discovery?

Before I explore the implications of those questions, I should point out there is no doubt that AI and ML will become integral parts of physics research and education. Even so, similar to the role of AI in human society, we do not know how this new and rapidly evolving technology will affect physics in the long run, just as our predecessors did not know how transistors or computers would affect physics when the technologies were being developed in the early 1950s. What we do know is that the impact of AI/ML on physics will be profound and ever evolving as the technology develops.

AI co-pilot enhances human precision for safer aviation

Imagine you’re in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they’re always looking out for different things. If they’re both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.
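The handoff rule described above can be sketched as a simple arbitration function. This is a toy illustration of the concept, not CSAIL's actual implementation: the set-overlap measure and the 0.5 threshold are assumptions made for the example.

```python
def choose_controller(human_focus, machine_focus, human_cmd, machine_cmd,
                      threshold=0.5):
    """Toy control arbitration in the spirit of Air-Guardian.

    human_focus / machine_focus: sets of regions (e.g. grid cells) each
    party is currently attending to. If the human's attention covers
    enough of what the machine considers important, the human steers;
    otherwise the machine takes over.
    """
    # Fraction of the machine's regions of concern the human is also watching.
    overlap = len(human_focus & machine_focus) / max(len(machine_focus), 1)
    if overlap >= threshold:
        return ("human", human_cmd)
    return ("machine", machine_cmd)


# Human and machine watch overlapping regions -> human keeps control.
who, cmd = choose_controller({(0, 0), (0, 1)}, {(0, 1), (1, 1)},
                             "bank left", "climb")
# Human is looking elsewhere entirely -> machine takes over.
who2, cmd2 = choose_controller({(5, 5)}, {(0, 1), (1, 1)},
                               "bank left", "climb")
```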

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot: a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking, and for the machine, it relies on something called “saliency maps,” which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, aiding in grasping and deciphering the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches like traditional autopilot systems.
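One common way to build a saliency map is to measure how sensitive a model's output is to each input pixel. The sketch below approximates that sensitivity with finite differences on a toy scoring function; it is a minimal illustration of the general technique, not the specific method used by the Air-Guardian researchers.

```python
def saliency_map(score_fn, image, eps=1e-4):
    """Approximate |d(score)/d(pixel)| for every pixel by finite differences.

    score_fn: callable taking a 2D list of floats and returning a scalar
              (stands in for a trained model's output).
    image:    2D list of pixel values.
    Pixels with large values are where the model's output is most
    sensitive -- i.e., where its "attention" is directed.
    """
    sal = [[0.0] * len(row) for row in image]
    for i, row in enumerate(image):
        for j, _ in enumerate(row):
            bumped = [r[:] for r in image]      # copy the image
            bumped[i][j] += eps                 # nudge one pixel
            sal[i][j] = abs(score_fn(bumped) - score_fn(image)) / eps
    return sal


# Toy "model" whose output depends strongly on pixel (0,0), weakly on (1,1).
score = lambda img: 3.0 * img[0][0] + 1.0 * img[1][1]
sal = saliency_map(score, [[1.0, 2.0], [3.0, 4.0]])
```

Real systems compute this with backpropagated gradients rather than per-pixel perturbation, which would be far too slow for full-size images, but the interpretation of the resulting map is the same.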

Is explosive growth ahead for AI?

As we plunge head-on into the game-changing dynamic of general artificial intelligence, observers are weighing in on just how huge an impact it will have on global societies. Will it drive explosive economic growth as some economists project, or are such claims unrealistically optimistic?

Few question the potential for change that AI presents. But in a world of litigation and ethical boundaries, will AI be able to thrive?

Two researchers from Epoch, a research group evaluating the progression of artificial intelligence and its potential impacts, decided to explore arguments for and against the likelihood that innovation ushered in by AI will lead to explosive growth comparable to the Industrial Revolution of the 18th and 19th centuries.

Microsoft’s AutoGen framework allows multiple AI agents to talk to each other and complete your tasks

Microsoft has joined the race for large language model (LLM) application frameworks with its open source Python library, AutoGen.

As described by Microsoft, AutoGen is “a framework for simplifying the orchestration, optimization, and automation of LLM workflows.” The fundamental concept behind AutoGen is the creation of “agents,” which are programming modules powered by LLMs such as GPT-4. These agents interact with each other through natural language messages to accomplish various tasks.
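The pattern behind this (agents exchanging natural-language messages until a task is done) can be shown with a dependency-free sketch. Note this is a conceptual illustration only, not AutoGen's API; AutoGen's own entry points are classes such as `AssistantAgent` and `UserProxyAgent`, with conversations started via `initiate_chat`, and its agents call an actual LLM rather than the plain functions used here.

```python
class Agent:
    """Minimal stand-in for an LLM-backed agent: it receives a message,
    applies its 'policy' (a plain function here, in place of a model
    call), and replies with a new message."""

    def __init__(self, name, policy):
        self.name = name
        self.policy = policy

    def reply(self, message):
        return self.policy(message)


def run_chat(initiator, responder, opening, max_turns=6, stop="TERMINATE"):
    """Alternate messages between two agents until one signals completion
    with the stop keyword, or the turn budget runs out."""
    transcript = [(initiator.name, opening)]
    speaker, other, msg = responder, initiator, opening
    while len(transcript) < max_turns:
        msg = speaker.reply(msg)
        transcript.append((speaker.name, msg))
        if stop in msg:
            break
        speaker, other = other, speaker
    return transcript


# A hypothetical two-agent exchange: the assistant drafts, the user
# proxy approves and terminates the conversation.
assistant = Agent("assistant", lambda m: "Draft plan for: " + m)
user_proxy = Agent("user_proxy", lambda m: "Approved. TERMINATE")
chat = run_chat(user_proxy, assistant, "Plot NVDA stock prices")
```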

Humane shows off its futuristic ‘AI Pin’ wearable

In case you missed the hype, Humane is a startup founded by ex-Apple executives that’s working on a device called the “Ai Pin” that uses projectors, cameras and AI tech to act as a sort of wearable AI assistant. Now, the company has shown off the Ai Pin at a Paris fashion show (Humane x Coperni) as a way to show off the device’s new form factor. “Supermodel Naomi Campbell is the first person outside of the company to wear the device in public, ahead of its full unveiling on November 9,” Humane wrote.

The company describes the device as a “screenless, standalone device and software platform built from the ground up for AI.” It’s powered by an “advanced” Qualcomm Snapdragon platform and equipped with a mini-projector that takes the place of a smartphone screen, along with a camera and speaker. It can perform functions like AI-powered optical recognition, but is also supposedly “privacy-first” thanks to qualities like no wake word and thus no “always on” listening.

IonQ Announces 2 New Quantum Systems; Suggests Quantum Advantage is Nearing

It’s been a busy week for IonQ, the quantum computing start-up focused on developing trapped-ion-based systems. At the Quantum World Congress today, the company announced two new systems (Forte Enterprise and Tempo) intended to be rack-mountable and deployable in a traditional data center. Yesterday, speaking at the HPC and AI on Wall Street conference run by Tabor Communications (HPCwire’s parent organization), the company made a strong pitch for reaching quantum advantage in 2–3 years using the new systems.

If you’ve been following quantum computing, you probably know that deploying quantum computers in the datacenter is a rare occurrence. Access to the vast majority of NISQ-era computers has been through web portals. The latest announcement from IonQ, along with a somewhat similar announcement from neutral-atom specialist QuEra in August, and increased IBM efforts (Cleveland Clinic and PINQ2) to selectively place on-premises quantum systems, suggests change is coming to the market.

IonQ’s two rack-mounted solutions are designed for businesses and governments wanting to integrate quantum capabilities within their existing infrastructure. “Businesses will be able to harness the power of quantum directly from their own data centers, making the technology significantly more accessible and easy to apply to key workflows and business processes,” said the company, which is calling the new systems enterprise-grade. (See the official announcement.)