
Summary: Researchers have developed an AI-powered “electronic tongue” capable of distinguishing subtle differences in liquids, such as milk freshness, soda types, and coffee blends. By analyzing sensor data through a neural network, the device achieved over 95% accuracy in identifying liquid quality, authenticity, and potential safety issues. Interestingly, when the AI was allowed to select its own analysis parameters, it outperformed human-defined settings, suggesting that it assessed subtle patterns in the data more holistically than the human-chosen settings allowed.

This technology, which uses graphene-based sensors, could revolutionize food safety assessments and potentially extend to medical diagnostics. The device also offers a rare window into the neural network’s decision-making process. This innovation promises practical applications across industries where quality and safety are paramount.
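The underlying recipe, many parallel sensor channels feeding a small neural classifier, is a standard pattern. As a rough illustration only (the channel count, network size, and class labels below are invented, not taken from the study), a minimal PyTorch sketch might look like this:

```python
# Hypothetical sketch only: the study's actual sensor pipeline and network
# architecture are not described here. This illustrates the general pattern
# of classifying liquids from multi-channel sensor readings.
import torch
import torch.nn as nn

N_SENSORS = 20   # assumed number of graphene sensor channels (invented)
N_CLASSES = 3    # e.g. fresh / borderline / spoiled milk (illustrative)

model = nn.Sequential(
    nn.Linear(N_SENSORS, 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)

# Synthetic stand-in data; real inputs would be measured sensor responses.
x = torch.randn(256, N_SENSORS)
y = torch.randint(0, N_CLASSES, (256,))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Training-set accuracy, purely to show the evaluation step.
acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"train accuracy: {acc:.2%}")
```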

University of Toronto professor Geoffrey Hinton has been awarded the Nobel Prize in Physics for his work in AI.


The US Department of Justice (DoJ) has submitted a new “Proposed Remedy Framework” to address Google’s violations of the country’s antitrust laws (h/t Mishaal Rahman). The framework seeks to remedy harms in four areas: Google’s search distribution and revenue sharing, the generation and display of search results, advertising scale and monetization, and the accumulation and use of data.

The most drastic of the proposed remedies would prevent Google from using its products, such as Chrome, Play, and Android, to give Google Search and related Google products an advantage. Other remedies include allowing websites to opt out of having their content used to train, or appear in, Google-owned AI products, such as AI Overviews in Google Search.

Google responded to this by asserting that “DOJ’s radical and sweeping proposals risk hurting consumers, businesses, and developers.” While the company intends to respond in detail to DoJ’s final proposals, it says that the DoJ is “already signaling requests that go far beyond the specific legal issues in this case.”

Researchers at the University Medical Center Göttingen (UMG), Germany, have developed a method that makes it possible for the first time to image the three-dimensional shape of proteins with a conventional microscope. Combined with artificial intelligence, One-step Nanoscale Expansion (ONE) microscopy enables the detection of structural changes in damaged or toxic proteins in human samples. Diseases driven by protein misfolding, such as Parkinson’s disease, could thus be detected and treated at an early stage.

ONE microscopy was named one of the “seven technologies to watch in 2024” by the journal Nature and was recently published in the renowned journal Nature Biotechnology (“One-step nanoscale expansion microscopy reveals individual protein shapes”).

Artistic impression of the first protein structure of the GABAA receptor solved by ONE microscopy. (Image: Shaib/Rizzoli, umg/mbexc)

In fusion experiments, understanding the behavior of the plasma, especially the ion temperature and rotation velocity, is essential. These two parameters play a critical role in the stability and performance of the plasma, making them vital for advancing fusion technology. However, quickly and accurately measuring these values has been a major technical challenge in operating fusion reactors at optimal levels.

Transformers have gained significant attention due to their powerful capabilities in understanding and generating human-like text, making them suitable for various applications like language translation, summarization, and creative content generation. They operate based on an attention mechanism, which determines how much focus each token in a sequence should have on others to make informed predictions. While they offer great promise, the challenge lies in optimizing these models to handle large amounts of data efficiently without excessive computational costs.
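To make the attention mechanism concrete, here is a minimal sketch of scaled dot-product self-attention in PyTorch; the sequence length and embedding size are arbitrary choices for illustration, not tied to any particular model:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """Scaled dot-product attention over (batch, seq_len, d_model) tensors."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)          # how much each token attends to each other token
    return weights @ v

x = torch.randn(1, 8, 16)  # 8 tokens with 16-dim embeddings (arbitrary sizes)
out = attention(x, x, x)   # self-attention: q, k, v all come from the same sequence
print(out.shape)           # torch.Size([1, 8, 16])
```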

A significant challenge in developing transformer models is their inefficiency when handling long text sequences. As the context length increases, the computational and memory requirements grow quadratically, because each token interacts with every other token in the sequence; this cost quickly becomes unmanageable. This limitation constrains the application of transformers in tasks that demand long contexts, such as language modeling and document summarization, where retaining and processing the entire sequence is crucial for maintaining context and coherence. Thus, solutions are needed to reduce the computational burden while retaining the model’s effectiveness.
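The quadratic growth is easy to see in numbers. The attention score matrix has seq_len × seq_len entries, so doubling the context quadruples the memory for the scores alone (the figures below assume float32 entries, one head, and one batch element):

```python
# Memory for the attention score matrix alone, per head and batch element,
# assuming 4-byte float32 entries.
for n in (1_024, 2_048, 4_096, 8_192):
    entries = n * n
    mib = entries * 4 / 2**20
    print(f"seq_len={n:>5}: {entries:>12,} scores = {mib:8.1f} MiB")
```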

Approaches to address this issue have included sparse attention mechanisms, which limit the number of interactions between tokens, and context compression techniques that reduce the sequence length by summarizing past information. These methods attempt to reduce the number of tokens considered in the attention mechanism but often do so at the cost of performance, as reducing context can lead to a loss of critical information. This trade-off between efficiency and performance has prompted researchers to explore new methods to maintain high accuracy while reducing computational and memory requirements.
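As one concrete member of the sparse-attention family (a sliding local window, in the spirit of Longformer-style local attention, not the specific method of any work cited here), each token can be restricted to attend only to its nearest neighbours:

```python
import torch
import torch.nn.functional as F

def local_attention(q, k, v, window=4):
    """Attention where each token sees only tokens within `window` positions.

    For clarity this still materializes the full score matrix and masks it;
    real implementations avoid building the n x n matrix, cutting the cost
    from O(n^2) to roughly O(n * window).
    """
    n, d = q.size(-2), q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    idx = torch.arange(n)
    outside = (idx[None, :] - idx[:, None]).abs() > window  # True = masked out
    scores = scores.masked_fill(outside, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(1, 16, 32)             # arbitrary sizes for illustration
print(local_attention(x, x, x).shape)  # torch.Size([1, 16, 32])
```

The trade-off described above shows up directly here: tokens outside the window are simply invisible to one another, so any long-range dependency must be recovered by other means, such as global tokens or summarized context.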