Evaluating AI and machine learning models in cheminformatics: benchmarking techniques and case studies

AI-powered microscope predicts and tracks protein aggregation linked to brain diseases

The accumulation of misfolded proteins in the brain is central to the progression of neurodegenerative diseases like Huntington’s, Alzheimer’s and Parkinson’s. But to the human eye, proteins that are destined to form harmful aggregates don’t look any different than normal proteins.

The formation of such aggregates also tends to happen randomly and relatively rapidly—on the scale of minutes. The ability to identify and characterize protein aggregates is essential for understanding and fighting neurodegenerative diseases.

Now, using deep learning, EPFL researchers have developed a ‘self-driving’ imaging system that leverages multiple microscopy methods to track and analyze protein aggregation in real time—and even anticipate it before it begins. In addition to maximizing imaging efficiency, the approach minimizes the use of fluorescent labels, which can alter the biophysical properties of cell samples and impede accurate analysis.

Technology Landscape Review of In-Sensor Photonic Intelligence: From Optical Sensors to Smart Devices

Optical sensors have undergone significant evolution, transitioning from discrete optical microsystems toward sophisticated photonic integrated circuits (PICs) that leverage artificial intelligence (AI) for enhanced functionality. This review systematically explores the integration of optical sensing technologies with AI, charting the advancement from conventional optical microsystems to AI-driven smart devices. First, we examine classical optical sensing methodologies, including refractive index sensing, surface-enhanced infrared absorption (SEIRA), surface-enhanced Raman spectroscopy (SERS), surface plasmon-enhanced chiral spectroscopy, and surface-enhanced fluorescence (SEF) spectroscopy, highlighting their principles, capabilities, and limitations. Subsequently, we analyze the architecture of PIC-based sensing platforms, emphasizing their miniaturization, scalability, and real-time detection performance. This review then introduces the emerging paradigm of in-sensor computing, where AI algorithms are integrated directly within photonic devices, enabling real-time data processing, decision making, and enhanced system autonomy. Finally, we offer a comprehensive outlook on current technological challenges and future research directions, addressing integration complexity, material compatibility, and data processing bottlenecks. This review provides timely insights into the transformative potential of AI-enhanced PIC sensors, setting the stage for future innovations in autonomous, intelligent sensing applications.

Will implantable brain-computer interfaces soon benefit people with motor impairments?

A review published in Advanced Science highlights the evolution of research related to implantable brain-computer interfaces (iBCIs), which decode brain signals that are then translated into commands for external devices to potentially benefit individuals with impairments such as loss of limb function or speech.

A comprehensive systematic review identified 112 studies, nearly half of which have been published since 2020. Eighty iBCI participants were identified, most of them enrolled in studies concentrated in the United States, with growing numbers of studies from Europe, China, and Australia.

The analysis revealed that iBCI technologies are being used to control devices such as robotic prosthetic limbs and consumer devices.

ChatGPT now handles 1.7 million requests every minute

OpenAI recently told Axios that their AI tool ChatGPT handles over 2.5 billion user instructions every single day. That’s the equivalent of about 1.7 million instructions per minute or 29,000 per second.

This is a stark increase from December 2024, when ChatGPT was handling about 1 billion messages per day. Since its launch in November 2022, it has become one of the fastest-growing consumer apps of all time.
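The per-minute and per-second figures follow directly from the reported daily total. A quick sanity check of the arithmetic (the 2.5 billion figure is taken from the article above):

```python
# Convert 2.5 billion daily requests into per-minute and per-second rates.
daily_requests = 2.5e9

per_minute = daily_requests / (24 * 60)        # 1,440 minutes in a day
per_second = daily_requests / (24 * 60 * 60)   # 86,400 seconds in a day

print(f"{per_minute:,.0f} per minute")   # ~1.7 million
print(f"{per_second:,.0f} per second")   # ~29,000
```

Both reported figures check out: roughly 1.74 million requests per minute and about 28,900 per second, which round to the "1.7 million" and "29,000" quoted.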

Researchers discover how the human brain organizes its visual memories through precise neural timing

Researchers at the University of Southern California have made a significant breakthrough in understanding how the human brain forms, stores and recalls visual memories. A new study, published in Advanced Science, harnesses human patient brain recordings and a powerful machine learning model to shed new light on the brain’s internal code that sorts memories of objects into categories—think of it as the brain’s filing cabinet of imagery.

The results demonstrated that the research team could essentially read subjects’ minds, pinpointing the category of visual image being recalled purely from the precise timing of the subject’s neural activity.

The work solves a fundamental neuroscience debate and offers exciting potential for future brain-computer interfaces, including memory prostheses to restore lost memory in patients with neurological disorders like dementia.

Tesla Autonomy Is AI’s Crowning Jewel; Diner Goes World Wide; Japan Trade Deal Announced

Questions to inspire discussion.

⚡ Q: What advantages does xAI’s proprietary cluster offer? A: xAI’s proprietary clusters, designed specifically for training, can’t be replicated by competitors simply by spending money, creating a hard-to-breach moat in AI development.

Tesla’s Autonomy and Robotaxis

🚗 Q: When is Tesla expected to launch unsupervised FSD? A: Tesla is expected to launch unsupervised FSD in the third quarter after polishing and testing, with version 14 potentially being unsupervised even if not allowed for public use.

🤖 Q: What is the significance of Tesla’s upcoming robotaxi launch? A: Tesla’s robotaxi launch is anticipated to be a historic moment, demonstrating that the complexity of autonomous driving technology has been overcome, allowing for leverage and scaling.

💰 Q: How might Tesla monetize its Autonomy feature? A: Tesla may charge monthly fees of $50-$100 for unsupervised use, including insurance, on top of personal insurance costs.
