The supply chain disruptions of the past few years have shown that many organizations lack the technology and architecture required to automate decision-making and create intelligent responses across the supply chain. These critical breakdowns can no longer be blamed solely on the COVID-19 pandemic; rather, they stem from businesses' slow adoption of automated supply chain decision-making, which has resulted in inventory backlogs, price inflation, shortages and more. Continued single sourcing from one region, rather than leveraging distributed regional capabilities, has further contributed to backlogs. Together, these factors have compounded system complexity, and the pandemic brought the resulting breakdowns into stark relief.
This brings us to today, when the inability to manage data streams effectively is proving debilitating to many companies. In a Gartner study of more than 400 organizations, 84% of chief supply chain officers reported that they could serve their customers better with data-driven insights. An equal share said they needed more accurate data to predict future conditions and make better decisions.
There was once a time, not so long ago, when scientists like Casey Holliday needed scalpels, scissors and even their own hands to conduct anatomical research. But now, with recent advances in technology, Holliday and his colleagues at the University of Missouri are using artificial intelligence (AI) to see inside an animal or a person—down to a single muscle fiber—without ever making a cut.
Holliday, an associate professor of pathology and anatomical sciences, said his lab in the MU School of Medicine is one of only a handful of labs in the world currently using this high-tech approach.
AI can teach computer programs to identify a muscle fiber in an image, such as a CAT scan. Then, researchers can use that data to develop detailed 3D computer models of muscles to better understand how they work together in the body for motor control, Holliday said.
Conversational AI is a subset of artificial intelligence (AI) that allows consumers to interact with computer applications as if they were interacting with another human. According to Deloitte, the global conversational AI market is set to grow by 22% between 2022 and 2025 and is estimated to reach $14 billion by 2025.
Many practical applications, such as translation apps and chatbots in financial services, hospital wards and conferences, provide enhanced language customization to cater to highly diverse, hyper-local audiences. According to Gartner, 70% of white-collar workers already interact with conversational platforms regularly, but this is just a drop in the ocean of what could unfold this decade.
Despite the exciting potential within the AI space, there is one significant hurdle: the data used to train conversational AI models does not adequately account for the subtleties of dialect, language, speech patterns and inflection.
In a study that truly underscores the profound and devastating impact humans have on the environment, researchers have found that microscopic bugs are evolving to eat plastic.
The National Institutes of Health will invest $130 million over four years, pending the availability of funds, to accelerate the widespread use of artificial intelligence (AI) by the biomedical and behavioral research communities. The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) program is assembling team members from diverse disciplines and backgrounds to generate tools, resources, and richly detailed data that are responsive to AI approaches. At the same time, the program will ensure its tools and data do not perpetuate inequities or ethical problems that may occur during data collection and analysis. Through extensive collaboration across projects, Bridge2AI researchers will create guidance and standards for the development of ethically sourced, state-of-the-art, AI-ready data sets that have the potential to help solve some of the most pressing challenges in human health — such as uncovering how genetic, behavioral, and environmental factors influence a person’s physical condition throughout their life.
“Generating high-quality ethically sourced data sets is crucial for enabling the use of next-generation AI technologies that transform how we do research,” said Lawrence A. Tabak, D.D.S., Ph.D., Performing the Duties of the Director of NIH. “The solutions to long-standing challenges in human health are at our fingertips, and now is the time to connect researchers and AI technologies to tackle our most difficult research questions and ultimately help improve human health.”
AI is both a field of science and a set of technologies that enable computers to mimic how humans sense, learn, reason, and take action. Although AI is already used in biomedical research and healthcare, its widespread adoption has been limited in part due to challenges of applying AI technologies to diverse data types. This is because routinely collected biomedical and behavioral data sets are often insufficient, meaning they lack important contextual information about the data type, collection conditions, or other parameters. Without this information, AI technologies cannot accurately analyze and interpret data. AI technologies may also inadvertently incorporate bias or inequities unless careful attention is paid to the social and ethical contexts in which the data is collected.
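To make the idea of "AI-ready" data concrete, here is a minimal sketch of a record that carries contextual metadata alongside the measurement itself. This schema is purely illustrative and is not defined by NIH or Bridge2AI; the field names (`collection_device`, `collection_site`, `cohort_descriptor`) are assumptions chosen to mirror the contextual information the program describes as often missing.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative only: a biomedical record that pairs a measurement with
# the contextual metadata AI tools need to interpret it correctly.
@dataclass
class BiomedicalRecord:
    value: float                                  # the measurement itself
    data_type: str                                # e.g. "blood_pressure_systolic"
    collection_device: Optional[str] = None       # instrument used
    collection_site: Optional[str] = None         # clinic or region
    cohort_descriptor: Optional[str] = None       # population the record came from

    def missing_context(self) -> List[str]:
        """Names of contextual fields that are absent from this record."""
        context_fields = ("collection_device", "collection_site", "cohort_descriptor")
        return [name for name in context_fields if getattr(self, name) is None]

# A record missing two of its three contextual fields: an AI pipeline
# could flag it before training rather than silently absorbing the gap.
record = BiomedicalRecord(
    value=128.0,
    data_type="blood_pressure_systolic",
    collection_site="clinic_a",
)
print(record.missing_context())  # ['collection_device', 'cohort_descriptor']
```

A completeness check like this is one simple way a pipeline can surface the missing-context problem the article describes before a model is ever trained on the data.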
Intellia Therapeutics said Friday the first six patients to receive its CRISPR-based treatment for a genetic swelling disorder have safely had small, corrective changes made to dysfunctional DNA inside their liver cells.
Preliminary results from the study — just the second to show that CRISPR-based gene editing can be delivered systemically and performed in vivo, or inside the body — found that the treatment, NTLA-2002, reduced levels of the disease-causing protein, kallikrein, by 65% and 92% in the low- and high-dose cohorts, respectively. In the low-dose group, the one-time infusion also reduced by 91% the painful swelling "attacks" commonly experienced by patients with a rare condition called hereditary angioedema, or HAE. Participants in the high-dose group have not yet completed the 16-week observation period.
The research community is excited about the potential of DNA to function as long-term archival storage. That’s largely because it’s extremely dense, chemically stable for tens of thousands of years, and comes in a format we’re unlikely to forget how to read. While there has been some interesting progress, efforts have mostly stayed in the research community because of the high costs and extremely slow read and write speeds. These are problems that need to be solved before DNA-based storage can be practical.
So we were surprised to hear that storage giant Seagate had entered into a collaboration with a DNA-based storage company called Catalog. To find out how close the company's technology is to being useful, we talked to Catalog's CEO, Hyunjun Park, who indicated that Catalog's approach is counterintuitive on two levels: it doesn't store data the way you'd expect, and it isn't focusing on archival storage at all.