


Automation is not just for high-throughput screening anymore. New devices and greater flexibility are transforming what’s possible throughout drug discovery and development. This article was written by Thomas Albanetti, AstraZeneca; Ryan Bernhardt, Biosero; Andrew Smith, AstraZeneca and Kevin Stewart, AstraZeneca for a 28-page DDW eBook, sponsored by Bio-Rad. Download the full eBook here.

Automation has been a part of the drug discovery industry for decades. The earliest iterations of these systems were used in large pharmaceutical companies for high-throughput screening (HTS) experiments. HTS enabled libraries of small-molecule compounds to be tested under a single condition or a series of experimental conditions to identify their potential as treatments for a target disease. HTS has evolved to enable the screening of libraries of millions of compounds, but the high cost of equipment has largely confined automation to large pharmaceutical companies. Today, though, new types of robots paired with sophisticated software tools have helped to democratise access to automation, making it possible for pharma and biotechnology companies of almost any size to deploy these solutions in their labs.

Originally, automated solutions were only implemented for projects that involved a lot of repetitive tasks, which is typical of high-throughput experiments and assays. The equipment used in early automation efforts was expensive, specialised, and physically integrated, effectively making it unavailable for any non-automated use. Now, both the approaches and the equipment are far more adaptive and flexible. The latest automation software is also much simpler to program, making it easier to swap different instruments and robots in and out as needed. For example, labs can run a particular HTS assay for a few weeks and then quickly pivot to run a new assay. Labs can also create and run bespoke standard operating procedures, assays, and experiments for the drug targets they are interested in pursuing.


Irina Rish is a world-renowned professor of computer science and operations research at the Université de Montréal and a core member of the prestigious Mila organisation. She is a Canada CIFAR AI Chair and the Canada Excellence Research Chair in Autonomous AI. Irina holds an MSc and PhD in AI from the University of California, Irvine, as well as an MSc in Applied Mathematics from the Moscow Gubkin Institute. Her research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. In particular, she is exploring continual lifelong learning, optimization algorithms for deep neural networks, sparse modelling and probabilistic inference, dialog generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. Prof. Rish holds 64 patents and has published over 80 research papers, several book chapters, three edited books, and a monograph on Sparse Modelling. She has served as a Senior Area Chair for NeurIPS and ICML. Irina's research is focused on taking us closer to the holy grail of Artificial General Intelligence, and she continues to push the boundaries of machine learning, striving to make advancements in neuroscience-inspired AI.

In a conversation about artificial intelligence (AI), Irina and Tim discussed the idea of transhumanism and the potential for AI to improve human flourishing. Irina suggested that instead of looking at AI as something to be controlled and regulated, people should view it as a tool to augment human capabilities. She argued that attempting to create an AI that is smarter than humans is not the best approach, and that a hybrid of human and AI intelligence is much more beneficial. As an example, she mentioned how technology can be used as an extension of the human mind, to track mental states and improve self-understanding. Ultimately, Irina concluded that transhumanism is about having a symbiotic relationship with technology, which can have a positive effect on both parties.

Tim then discussed the contrasting types of intelligence and how this could lead to something interesting emerging from the combination. He brought up the Trolley Problem and how difficult moral quandaries could be programmed into an AI. Irina then referenced The Garden of Forking Paths, a story which explores the idea of how different paths in life can be taken and how decisions from the past can have an effect on the present.

To better understand AI and intelligence, Irina suggested looking at it from multiple perspectives and understanding the importance of complex systems science in programming and understanding dynamical systems. She discussed the work of Michael Levin, who is looking into reprogramming biological computers with chemical interventions, and Tim mentioned Alexander Mordvintsev, who is looking into the self-healing and repair of these systems. Ultimately, Irina argued that the key to understanding AI and intelligence is to recognize the complexity of the systems and to create hybrid models of human and AI intelligence.




Explorations into the nature of reality have been undertaken across the ages, and in the contemporary world disparate tools, from gedanken experiments [1–4] and experimental consistency checks [5,6] to machine learning and artificial intelligence, are being used to illuminate the fundamental layers of reality [7]. A theory of everything, a grand unified theory of physics and nature, has remained elusive. The unification of the various forces and interactions in nature, from the unification of electricity and magnetism in James Clerk Maxwell's seminal work A Treatise on Electricity and Magnetism [8], through the electroweak unification of Weinberg, Salam, and Glashow [9–11], to the research establishing the Standard Model, including the QCD sector, by Murray Gell-Mann and Richard Feynman [12,13], has developed in a slow but surefooted manner, and we now have a few candidate theories of everything, primary among which is String Theory [14].

Unfortunately, we are still some way off from establishing various areas of the theory empirically. Chief among these is supersymmetry [15], an important part of String Theory. No evidence for supersymmetry was found in the first run of the Large Hadron Collider [16]. When the Large Hadron Collider discovered the Higgs boson in 2011-12 [17–19], the results proved problematic for the Minimal Supersymmetric Standard Model (MSSM): the measured Higgs mass of 125 GeV is relatively large for the model and could only be attained with large radiative loop corrections from top squarks that many theoreticians considered 'unnatural' [20].

In the absence of experiments that can test certain frontiers of physics, particularly because of energy constraints at the smallest of scales, the importance of simulations and computational research cannot be underplayed. Gone are the days when Isaac Newton could purportedly sit beneath an apple tree and infer classical gravity from an apple that had fallen on his head. Today, ever greater computational input and power factor into new avenues of research in physics. For instance, M-Theory, introduced by Edward Witten in 1995 [21], is a promising approach to a unified model of physics that includes quantum gravity and extends the formalism of String Theory. Machine learning tools have lately been used to solve M-Theory geometries [22]: TensorFlow, a computing platform normally used for machine learning, helped find 194 equilibrium solutions for one particular type of M-Theory spacetime geometry [23–25].
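To make the TensorFlow approach above concrete, here is a minimal sketch (not the published code) of how automatic differentiation can search for equilibrium, i.e. stationary, points of a scalar potential V: minimising the squared norm of the gradient of V drives the search towards points where the gradient vanishes, which is exactly the equilibrium condition. The potential function below is a toy placeholder standing in for the far more elaborate scalar potential used in the actual M-Theory work.

```python
# Sketch: find a stationary point of a scalar potential V(x) with TensorFlow
# by minimising |grad V|^2, which is zero exactly at an equilibrium.
import tensorflow as tf

def potential(x):
    # Toy placeholder potential; the real calculation uses the scalar
    # potential of a gauged supergravity over many scalar fields.
    return tf.reduce_sum(tf.cos(x)) + 0.1 * tf.reduce_sum(x ** 2)

x = tf.Variable(tf.random.normal([70], stddev=0.5))  # candidate field values
opt = tf.keras.optimizers.Adam(learning_rate=0.01)

for step in range(5000):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            v = potential(x)
        grad_v = inner.gradient(v, x)          # dV/dx
        loss = tf.reduce_sum(grad_v ** 2)      # vanishes at a stationary point
    opt.apply_gradients([(outer.gradient(loss, x), x)])

print("residual |grad V|^2:", float(loss), "V at equilibrium:", float(potential(x)))
```

In practice a search like this would be restarted from many random initial points, with each converged run whose residual is sufficiently small recorded as a candidate equilibrium.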

Artificial intelligence has been one of the primary areas of interest in computational pursuits around physics research. In 2020, Matsubara Takashi (Osaka University) and Yaguchi Takaharu (Kobe University), along with their research group, succeeded in developing technology that uses artificial intelligence to simulate phenomena for which we do not have a detailed formula or mechanism [26]. The underlying step is the creation of a model from observational data, constrained so that the model remains consistent and faithful to the laws of physics. In this pursuit, the researchers utilised digital calculus as well as geometric approaches such as Riemannian geometry and symplectic geometry.
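As a generic illustration of this idea (a simplified sketch, not the researchers' exact method), a model can be kept faithful to physical law by learning a Hamiltonian H(q, p) with a small neural network and deriving the dynamics from it through Hamilton's equations, so that energy conservation is built into the structure of the model rather than hoped for from the data. The network architecture and the toy spring data below are assumptions made purely for illustration.

```python
# Sketch: a Hamiltonian-style neural network in TensorFlow.
# The network predicts a scalar energy H(q, p); the time derivatives are
# obtained from H via Hamilton's equations rather than predicted directly.
import tensorflow as tf

# Hypothetical network: maps a state (q, p) to a single scalar energy.
hnet = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])

def predicted_derivatives(state):
    # state: batch of [q, p] rows; returns predicted [dq/dt, dp/dt].
    with tf.GradientTape() as tape:
        tape.watch(state)
        h = hnet(state)
    dh = tape.gradient(h, state)  # rows of [dH/dq, dH/dp]
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
    return tf.concat([dh[:, 1:2], -dh[:, 0:1]], axis=1)

# Toy training data: a unit-mass spring with H = (q^2 + p^2) / 2,
# so dq/dt = p and dp/dt = -q.
state = tf.random.normal([256, 2])
true_deriv = tf.concat([state[:, 1:2], -state[:, 0:1]], axis=1)

opt = tf.keras.optimizers.Adam(1e-3)
for step in range(2000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((predicted_derivatives(state) - true_deriv) ** 2)
    grads = tape.gradient(loss, hnet.trainable_variables)
    opt.apply_gradients(zip(grads, hnet.trainable_variables))
```

The design choice is that the network never outputs the time derivatives directly; they are always obtained by differentiating the learned energy, which ties the model to the underlying symplectic structure rather than to the particular trajectories it was trained on.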

While “protein” often evokes pictures of chicken breasts, these molecules are more like an intricate Lego puzzle. Building a protein starts with a string of amino acids (think of a string of Christmas lights), which then folds into a 3D structure (like rumpling the lights up for storage).

DeepMind and Baker both made waves when they each developed algorithms to predict the structure of any protein based on its amino acid sequence. It was no simple endeavor; the predictions were mapped at the atomic level.

Designing new proteins raises the complexity to another level. This year Baker’s lab took a stab at it, with one effort using good old screening techniques and another relying on deep learning “hallucination”. Both algorithms are extremely powerful for demystifying natural proteins and generating new ones, but hard to scale up.

Since OpenAI released ChatGPT, there has been a lot of speculation about what its killer app will be. And perhaps topping the list is online search. According to The New York Times, Google’s management has declared a “code red” and is scrambling to protect its online search monopoly against the disruption that ChatGPT will bring.

ChatGPT is a wonderful technology, one that has a great chance of redefining the way we create and interact with digital information. It can have many interesting applications, including for online search.

But it might be a bit of a stretch to claim that it will dethrone Google—at least from what we have seen so far. For the moment, large language models (LLMs) have many problems that need to be fixed before they can possibly challenge search engines. And even when the technology matures, Google Search might be positioned to gain the most from LLMs.