
A collaborative research team from NIMS and Tokyo University of Science has successfully developed an artificial intelligence (AI) device that executes brain-like information processing through few-molecule reservoir computing. This innovation utilizes the molecular vibrations of a select number of organic molecules.

When applied to blood glucose level prediction in patients with diabetes, the device significantly outperformed existing AI devices in prediction accuracy.

The work is published in the journal Science Advances.

Telcos are a prime example of an enterprise that would benefit from adopting this hybrid AI model. They have a unique role, as they can be both consumers and providers. Similar scenarios may be applicable to healthcare, oil rigs, logistics companies and other industries. Are the telcos prepared to make good use of gen AI? We know they have a lot of data, but do they have a time-series model that fits the data?

When it comes to AI models, IBM has a multimodel strategy to accommodate each unique use case. Bigger is not always better: specialized models can outperform general-purpose models while requiring less infrastructure.

Medically, AI is helping us with everything from identifying abnormal heart rhythms before they happen to spotting skin cancer. But do we really need it to get involved with our genome? Protein-design company Profluent believes we do.

Founded in 2022 in Berkeley, California, Profluent has been exploring ways to use AI to study and generate new proteins that aren’t found in nature. This week, the team trumpeted a major success with the release of an AI-derived protein termed OpenCRISPR-1.

The protein is meant to work in the CRISPR gene-editing system, a process in which a protein cuts open a piece of DNA and repairs or replaces a gene. CRISPR has been actively in use for about 15 years, with its creators bagging the Nobel prize in chemistry in 2020. It has shown promise as a biomedical tool that can do everything from restoring vision to combating rare diseases; as an agricultural tool that can improve the vitamin D content of tomatoes and slash the flowering time of trees from decades to months; and much more.

Have you ever wondered how machine learning systems can improve their predictions over time, seemingly getting smarter with each new piece of data? While this trait is common to machine learning models in general, it is particularly pronounced in Bayesian Machine Learning (BML), which stands apart for its ability to incorporate prior knowledge and uncertainty into its learning process. This article takes you on a deep dive into the world of BML, unraveling its concepts and methodologies, and showcasing its unique advantages, especially in scenarios where data is scarce or noisy.

Note that Bayesian Machine Learning goes hand-in-hand with the concept of Probabilistic Models. To discover more about Probabilistic Models in Machine Learning, click here.

Bayesian Machine Learning (BML) represents a sophisticated paradigm in the field of artificial intelligence, one that marries statistical inference with machine learning. Unlike traditional machine learning, which primarily focuses on point predictions, BML builds probability and inference into the model itself, offering a framework where learning evolves as evidence accumulates.
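
To make "learning that evolves with the accumulation of evidence" concrete, here is a minimal sketch in Python of Bayesian updating for a single unknown probability, using a conjugate Beta-Bernoulli model (posterior ∝ likelihood × prior). The prior parameters and the observation sequence are illustrative assumptions, not taken from the article; the point is only to show the posterior mean shifting and its uncertainty shrinking as data arrives.

```python
import math

# Minimal Bayesian updating sketch: estimating the bias of a coin.
# Prior knowledge is encoded as a Beta(alpha, beta) distribution over the
# unknown success probability; each observation updates the posterior,
# which then serves as the prior for the next observation.

alpha, beta = 2.0, 2.0                     # illustrative prior: coin is roughly fair
observations = [1, 0, 1, 1, 0, 1, 1, 1]    # hypothetical data: 1 = heads, 0 = tails

for i, x in enumerate(observations, start=1):
    # Conjugate Beta-Bernoulli update: the posterior remains a Beta distribution
    alpha += x
    beta += 1 - x
    mean = alpha / (alpha + beta)          # posterior mean estimate of the bias
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    print(f"after {i} observations: mean={mean:.3f}, std={math.sqrt(var):.3f}")
```

Because the Beta prior is conjugate to the Bernoulli likelihood, each update is a simple parameter increment, which is why this toy model is a common first illustration of how BML blends prior knowledge with observed evidence while keeping track of uncertainty.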