
Author(s): Jesus Rodriguez. Originally published on Towards AI. Created using Ideogram. I recently started an AI-focused educational newsletter that already has over 170,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

For human researchers, it takes many years of work to discover new super-resolution microscopy techniques. The number of possible optical configurations of a microscope—for example, where to place mirrors or lenses—is enormous.

Researchers at the Max Planck Institute for the Science of Light (MPL) have developed an artificial intelligence (AI) framework which autonomously discovers new experimental designs in microscopy. The framework, called XLuminA, performs optimizations 10,000 times faster than well-established methods.

The researchers’ work is published in Nature Communications.

Quantum calculations of molecular systems often require extraordinary amounts of computing power; these calculations are typically performed on the world’s largest supercomputers to better understand real-world products such as batteries and semiconductors.

Now, UC Berkeley and Lawrence Berkeley National Laboratory (Berkeley Lab) researchers have developed a new machine learning method that significantly speeds up these quantum calculations by improving model scalability. This approach reduces the computing memory required for simulations by more than fivefold compared to existing models and delivers results over ten times faster.

Their research has been accepted at Neural Information Processing Systems (NeurIPS) 2024, a conference and publication venue in artificial intelligence and machine learning. They will present their work at the conference on December 13, and a version of their paper is available on the arXiv preprint server.

Large-scale protein and gene profiling has massively expanded the landscape of cancer-associated proteins and gene mutations, but it has been difficult to discern whether these play an active role in the disease or are innocent bystanders. In a study published in Nature Cancer, researchers at Baylor College of Medicine revealed a powerful and unbiased machine learning-based approach called FunMap for assessing the role of cancer-associated mutations and understudied proteins, with broad implications for advancing cancer biology and informing therapeutic strategies.

“Gaining functional information on the genes and proteins associated with cancer is an important step toward better understanding the disease and identifying potential therapeutic targets,” said corresponding author Dr. Bing Zhang, professor of molecular and human genetics and part of the Lester and Sue Smith Breast Center at Baylor.

“Our approach to gain functional insights into these genes and proteins involved using machine learning to develop a network mapping their functional relationships,” said Zhang, member of Baylor’s Dan L Duncan Comprehensive Cancer Center and a McNair Scholar. “It’s like, I may not know anything about you, but if I know your LinkedIn connections, I can infer what you do.”
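The "LinkedIn connections" intuition Zhang describes is a form of guilt-by-association inference on a network: an unannotated gene's function can be guessed from the annotations of its network neighbors. The sketch below is illustrative only and is not FunMap's actual algorithm; the gene names and annotations are hypothetical placeholders.

```python
# Minimal guilt-by-association sketch: predict an unlabeled node's function
# from the majority annotation among its labeled network neighbors.
from collections import Counter

# Hypothetical functional-association network: node -> list of neighbors
network = {
    "GENE_A": ["GENE_B", "GENE_C", "GENE_X"],
    "GENE_B": ["GENE_A", "GENE_C"],
    "GENE_C": ["GENE_A", "GENE_B"],
    "GENE_X": ["GENE_A"],
}

# Known functional annotations for some nodes (others are unannotated)
labels = {"GENE_B": "DNA repair", "GENE_C": "DNA repair", "GENE_X": "metabolism"}

def infer_function(node):
    """Return the most common annotation among a node's labeled neighbors."""
    neighbor_labels = [labels[n] for n in network[node] if n in labels]
    if not neighbor_labels:
        return None  # no labeled neighbors, nothing to infer from
    return Counter(neighbor_labels).most_common(1)[0][0]

print(infer_function("GENE_A"))  # -> "DNA repair" (2 of 3 neighbors agree)
```

Real network-based approaches like FunMap operate at far larger scale and learn the edges themselves from profiling data, but the core idea — inferring what an understudied gene does from what its associates do — is the same.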

Artificial intelligence is no longer just a buzzword; it’s a transformative force reshaping industries, from healthcare to finance to retail. However, behind every successful AI system lies an often-overlooked truth: AI is only as good as the data that powers it.

Organizations eager to adopt AI frequently focus on algorithms and technologies while neglecting the critical foundation—data. Even the most advanced AI initiatives are doomed to fail without a robust data strategy. I’ll explore why a solid data strategy is the cornerstone of successful AI implementation and provide actionable steps to craft one.

Imagine building a skyscraper without solid ground beneath it. Data plays a similar foundational role for AI. It feeds machine learning models, drives predictions and shapes insights. However, as faulty materials weaken a structure, poor-quality data can derail an AI project.

From the early days of mechanical automatons to more recent conversational bots, scientists and engineers have dreamed of a future where AI systems can work and act intelligently and independently. Recent advances in agentic AI bring that autonomous future a step closer to reality. With their supercharged reasoning and execution capabilities, agentic AI systems promise to transform many aspects of human-machine collaboration. The agentic AI prize could be great, with the promise of greater productivity, innovation and insights for the human workforce. But so, too, are the risks: the potential for bias, mistakes, and inappropriate use. Early action by business and government leaders now will help set the right course for agentic AI development, so that its benefits can be achieved safely and fairly.

What Is Agentic AI, and How Will It Change Work? By Mark Purdy. Harvard Business Review, Digital Article on Generative AI, December 2024.

The next era of human-machine collaboration will present new opportunities and challenges.