Testing multiple treatments for heterogeneous (varying) effectiveness with respect to many underlying risk factors requires many pairwise tests; we would like to instead automatically discover and visualize patient archetypes and predictors of treatment effectiveness using multitask machine learning. In this paper, we present a method to estimate these heterogeneous treatment effects with an interpretable hierarchical framework that uses additive models to visualize expected treatment benefits as a function of patient factors (identifying personalized treatment benefits) and concurrent treatments (identifying combinatorial treatment benefits). This method achieves state-of-the-art predictive power for COVID-19 in-hospital mortality and interpretable identification of heterogeneous treatment benefits. We first validate this method on the large public MIMIC-IV dataset of ICU patients to test recovery of heterogeneous treatment effects. Next we apply this method to a proprietary dataset of over 3,000 patients hospitalized for COVID-19, and find evidence of heterogeneous treatment effectiveness predicted largely by indicators of inflammation and thrombosis risk: patients with few indicators of thrombosis risk benefit most from treatments against inflammation, while patients with few indicators of inflammation risk benefit most from treatments against thrombosis. This approach provides an automated methodology to discover heterogeneous and individualized effectiveness of treatments.
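The core idea, that a treatment's estimated benefit can vary with a patient covariate, can be illustrated with a minimal sketch. This is not the paper's hierarchical additive model; it is a toy stratified comparison on simulated data, with all names and numbers made up for illustration.

```python
# Toy sketch (not the paper's method): estimating heterogeneous treatment
# effects by stratifying simulated patients on a single risk factor and
# comparing mean outcomes between treated and untreated within each stratum.
import random

random.seed(0)

def simulate_patient():
    """One hypothetical patient: a risk factor, a treatment flag, an outcome."""
    risk = random.random()            # e.g. a normalized inflammation marker
    treated = random.random() < 0.5   # randomized treatment assignment
    # True effect is heterogeneous: treatment helps more when risk is high.
    benefit = 0.4 * risk if treated else 0.0
    outcome = 0.5 - 0.3 * risk + benefit + random.gauss(0, 0.05)
    return risk, treated, outcome

patients = [simulate_patient() for _ in range(20000)]

def cate(stratum):
    """Difference in mean outcome, treated minus control, within a stratum."""
    treated = [o for r, t, o in stratum if t]
    control = [o for r, t, o in stratum if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

low  = [p for p in patients if p[0] < 0.5]
high = [p for p in patients if p[0] >= 0.5]
effect_low, effect_high = cate(low), cate(high)
print(f"estimated benefit, low risk:  {effect_low:.2f}")   # near 0.1
print(f"estimated benefit, high risk: {effect_high:.2f}")  # near 0.3
```

An additive-model approach, as in the paper, would replace the coarse two-bin stratification with a smooth per-feature benefit curve, but the quantity being estimated is the same.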
Automation is not just for high-throughput screening anymore. New devices and greater flexibility are transforming what's possible throughout drug discovery and development. This article was written by Thomas Albanetti, AstraZeneca; Ryan Bernhardt, Biosero; Andrew Smith, AstraZeneca; and Kevin Stewart, AstraZeneca for a 28-page DDW eBook, sponsored by Bio-Rad.
Automation has been a part of the drug discovery industry for decades. The earliest iterations of these systems were used in large pharmaceutical companies for high-throughput screening (HTS) experiments. HTS enabled libraries of small-molecule compounds to be tested under a single experimental condition or a series of conditions to identify their potential as treatments for a target disease. HTS has evolved to enable the screening of libraries of millions of compounds, but the high cost of equipment long meant that automation occurred primarily in large pharmaceutical companies. Today, though, new types of robots paired with sophisticated software tools have helped to democratise access to automation, making it possible for pharma and biotechnology companies of almost any size to deploy these solutions in their labs.
Originally, automated solutions were implemented only for projects that involved many repetitive tasks, which is typical of high-throughput experiments and assays. The equipment used in early automation efforts was expensive, specialised, and physically integrated, effectively making it unavailable for any non-automated use. Now, both the approaches and the equipment are far more adaptive and flexible. The latest automation software is also much simpler to program, making it easier to swap in different instruments and robots as needed. For example, a lab can run a particular HTS assay for a few weeks and then quickly pivot to a new assay. Labs can also create and run bespoke standard operating procedures, assays, and experiments for the drug targets they are interested in pursuing.
In a conversation about artificial intelligence (AI), Irina and Tim discussed the idea of transhumanism and the potential for AI to improve human flourishing. Irina suggested that instead of looking at AI as something to be controlled and regulated, people should view it as a tool to augment human capabilities. She argued that attempting to create an AI that is smarter than humans is not the best approach, and that a hybrid of human and AI intelligence is much more beneficial. As an example, she mentioned how technology can be used as an extension of the human mind, to track mental states and improve self-understanding. Ultimately, Irina concluded that transhumanism is about having a symbiotic relationship with technology, which can have a positive effect on both parties.
Tim then discussed the contrasting types of intelligence and how this could lead to something interesting emerging from the combination. He brought up the Trolley Problem and how difficult moral quandaries could be programmed into an AI. Irina then referenced The Garden of Forking Paths, a story which explores the idea of how different paths in life can be taken and how decisions from the past can have an effect on the present.
To better understand AI and intelligence, Irina suggested looking at them from multiple perspectives and appreciating the importance of complex systems science in programming and understanding dynamical systems. She discussed the work of Michael Levin, who is looking into reprogramming biological computers with chemical interventions, and Tim mentioned Alexander Mordvintsev, who is looking into the self-healing and repair of these systems. Ultimately, Irina argued that the key to understanding AI and intelligence is to recognize the complexity of the systems and to create hybrid models of human and AI intelligence.
Explorations into the nature of reality have been undertaken across the ages, and in the contemporary world disparate tools, from gedanken experiments [1-4] and experimental consistency checks [5,6] to machine learning and artificial intelligence, are being used to illuminate the fundamental layers of reality [7]. A theory of everything, a grand unified theory of physics and nature, has remained elusive for the world of Physics. While the unification of the various forces and interactions in nature, starting from the unification of electricity and magnetism in James Clerk Maxwell's seminal A Treatise on Electricity and Magnetism [8], through the electroweak unification of Weinberg, Salam, and Glashow [9-11], to the work of Murray Gell-Mann and Richard Feynman establishing the Standard Model including its QCD sector [12,13], has proceeded in a slow but surefooted manner, we now have a few candidate theories of everything, primary among which is String Theory [14]. Unfortunately, we are still some way off from establishing various areas of the theory empirically. Chief among these is supersymmetry [15], an important part of String Theory. No evidence for supersymmetry was found in the first run of the Large Hadron Collider [16]. When the Large Hadron Collider discovered the Higgs Boson in 2011-12 [17-19], the results were problematic for the Minimal Supersymmetric Standard Model (MSSM), since the measured Higgs mass of 125 GeV is relatively large for the model and could only be attained with large radiative loop corrections from top squarks that many theoreticians considered "unnatural" [20]. In the absence of experiments that can test certain frontiers of Physics, particularly at the smallest scales where energy constraints bite, the importance of simulations and computational research cannot be underplayed.
Gone are the days when Isaac Newton could purportedly sit below an apple tree and infer the concept of classical gravity from an apple that had fallen on his head. In today's age, increasing levels of computational input and power factor into new avenues of research in Physics. For instance, M-Theory, introduced by Edward Witten in 1995 [21], is a promising approach to a unified model of Physics that includes quantum gravity; it extends the formalism of String Theory. Computational tools from machine learning have lately been used for solving M-Theory geometries [22]. TensorFlow, a computing platform normally used for machine learning, helped in finding 194 equilibrium solutions for one particular type of M-Theory spacetime geometry [23-25].
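The basic trick behind that TensorFlow result, using an optimizer to drive the gradient of a scalar potential to zero, can be shown on a toy problem. The potential below is invented for illustration and has nothing to do with the actual supergravity potential; only the find-equilibria-by-gradient-descent idea carries over.

```python
# Toy illustration (not the M-theory computation): locating an equilibrium
# of a scalar potential by gradient descent, the same basic mechanism an
# ML optimizer uses to find critical points of a physical potential.
def potential(x, y):
    # A made-up double-well potential with equilibria at (+/-1, 0).
    return (x**2 - 1.0)**2 + y**2

def grad(f, x, y, h=1e-6):
    """Central-difference numerical gradient of f at (x, y)."""
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

x, y = 0.6, 0.4            # arbitrary starting point
lr = 0.05                  # learning rate
for _ in range(2000):
    gx, gy = grad(potential, x, y)
    x, y = x - lr * gx, y - lr * gy

print(f"equilibrium found near: ({x:.4f}, {y:.4f})")  # converges toward (1, 0)
```

In the published work the potential is far higher-dimensional and the optimizer is TensorFlow's, but the loop structure, evaluate the potential, take its gradient, step downhill until the gradient vanishes, is the same.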
Artificial intelligence has been one of the primary areas of interest in computational pursuits around Physics research. In 2020, Matsubara Takashi (Osaka University) and Yaguchi Takaharu (Kobe University), along with their research group, succeeded in developing technology that uses artificial intelligence to simulate phenomena for which we do not have a detailed formula or mechanism [26]. The underlying step here is the creation of a model from observational data, constrained so that the model remains consistent with and faithful to the laws of Physics. In this pursuit, the researchers utilized digital calculus as well as geometric approaches, such as those of Riemannian geometry and symplectic geometry.
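The payoff of building such geometric structure into a model can be seen in a small example of our own (not the researchers' code): on a harmonic oscillator, a symplectic (semi-implicit) Euler integrator keeps the energy bounded, while a naive explicit Euler integrator lets it blow up.

```python
# Hedged sketch of the "structure-preserving" idea from symplectic geometry,
# on a unit harmonic oscillator. This toy comparison is ours, not the
# researchers' method; it only illustrates why respecting geometric
# structure keeps a simulation faithful to physical law.
def energy(q, p):
    return 0.5 * (q * q + p * p)   # Hamiltonian of the unit oscillator

dt, steps = 0.1, 1000

# Explicit Euler: update q and p from the old state.
q, p = 1.0, 0.0
for _ in range(steps):
    q, p = q + dt * p, p - dt * q
explicit_energy = energy(q, p)

# Symplectic (semi-implicit) Euler: update p first, then q with the new p.
q, p = 1.0, 0.0
for _ in range(steps):
    p = p - dt * q
    q = q + dt * p
symplectic_energy = energy(q, p)

print(f"initial energy:          {0.5:.3f}")
print(f"explicit Euler energy:   {explicit_energy:.3f}")   # drifts upward badly
print(f"symplectic Euler energy: {symplectic_energy:.3f}") # stays near 0.5
```

The symplectic update preserves the phase-space structure of Hamilton's equations, so its energy error stays bounded for arbitrarily long runs; the explicit scheme multiplies the energy by a factor greater than one at every step.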
In 2022, what they found was a new type of physics generated by their artificial intelligence.
The determination of state variables to describe physical systems is a challenging task. A data-driven approach is proposed to automatically identify state variables for unknown systems from high-dimensional observational data.
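One crude, linear version of this idea, not the paper's neural-network method, is to count how many directions of the observation space actually carry variance. In the assumed toy below, a single hidden state drives a 4-dimensional observation, and counting the significant covariance eigenvalues recovers "one state variable".

```python
# Illustrative toy (not the paper's approach): estimate how many state
# variables underlie high-dimensional observations by counting significant
# eigenvalues of the data covariance via power iteration with deflation.
import math, random

random.seed(1)

def observe(s):
    """Each observation is a fixed linear function of one latent state s."""
    return [s, 2.0 * s, -0.5 * s, 3.0 * s]

data = [observe(random.uniform(-1, 1)) for _ in range(500)]
dim = len(data[0])

# Covariance matrix of the mean-centered observations.
means = [sum(col) / len(data) for col in zip(*data)]
centered = [[x - m for x, m in zip(row, means)] for row in data]
cov = [[sum(r[i] * r[j] for r in centered) / len(data)
        for j in range(dim)] for i in range(dim)]

def top_eigen(mat, iters=200):
    """Leading eigenvalue and eigenvector by power iteration."""
    n = len(mat)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:          # matrix is (numerically) zero
            return 0.0, v
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(mat[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

total = sum(cov[i][i] for i in range(dim))  # total variance
n_state_vars = 0
for _ in range(dim):
    lam, v = top_eigen(cov)
    if lam < 1e-6 * total:       # remaining variance is negligible
        break
    n_state_vars += 1
    # Deflate: subtract the found component from the covariance.
    cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(dim)] for i in range(dim)]

print("estimated number of state variables:", n_state_vars)  # 1
```

The published method handles nonlinear, pixel-level observations with neural networks and intrinsic-dimension estimates, but the question it answers is the same: how many independent coordinates does the data actually need?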
Artificial intelligence approaches inspired by human cognitive function usually have a single learned ability. The authors propose a multimodal foundation model that demonstrates cross-domain learning and adaptation for a broad range of downstream cognitive tasks.
While "protein" often evokes pictures of chicken breasts, these molecules are more similar to an intricate Lego puzzle. Building a protein starts with a string of amino acids (think a myriad of Christmas lights on a string), which then fold into 3D structures (like rumpling them up for storage).
DeepMind and Baker both made waves when they each developed algorithms to predict the structure of any protein from its amino acid sequence. It was no simple endeavor; the predictions were mapped at the atomic level.
Designing new proteins raises the complexity to another level. This year Baker's lab took a stab at it, with one effort using good old screening techniques and another relying on deep learning hallucinations. Both algorithms are extremely powerful for demystifying natural proteins and generating new ones, but they were hard to scale up.