
Cocoa extract fails to prevent age-related vision loss, clinical trial finds

Brigham and Women’s Hospital-led research reports no significant long-term benefit of cocoa flavanol supplementation in preventing age-related macular degeneration (AMD). The paper is published in the journal JAMA Ophthalmology.

AMD is a progressive retinal disease and the most common cause of severe vision loss in adults over age 50. AMD damages the macula, the central part of the retina responsible for sharp, detailed vision. While peripheral sight is typically preserved, central vision loss can impair reading, driving, facial recognition, and other everyday tasks, reducing quality of life. Abnormalities of blood flow in the eye are associated with the occurrence of AMD.

Cocoa flavanols are a group of naturally occurring plant compounds classified as flavonoids, found primarily in the cocoa bean. These bioactive compounds have been studied for their vascular effects, including improved endothelial function and enhanced nitric oxide production, which contribute to vasodilation and circulatory health. Previous trials have suggested that moderate intake may improve lipid profiles and reduce markers of inflammation, pointing to a role in mitigating cardiovascular and related vascular conditions.

Taking a responsible path to AGI

We’re exploring the frontiers of AGI, prioritizing readiness, proactive risk assessment, and collaboration with the wider AI community.

Artificial general intelligence (AGI), AI that’s at least as capable as humans at most cognitive tasks, could be here within the coming years.

Integrated with agentic capabilities, AGI could supercharge AI to understand, reason, plan, and execute actions autonomously. Such technological advancement will provide society with invaluable tools to address critical global challenges, including drug discovery, economic growth and climate change.

The Fluid Architecture of Cognitive Possibility

This article isn’t about whether AI is conscious. It’s about how it behaves—or, more precisely, how it performs something that resembles thinking within a completely different geometric, structural, and temporal reality. It’s a phenomenon we’ve yet to fully name, but we can begin to describe it—not as a function of symbolic logic or linear deduction, but as something more amorphous, more dynamic. Something I call the fluid architecture of cognitive possibility.

Traditional human thought is sequential. We move from premise to conclusion, symbol to symbol, with language as the scaffolding of cognition. We think in lines. We reason in steps. And it feels good—there’s comfort in the clarity of structure, in the rhythm of deduction.

But LLMs don’t think that way.

MIT introduces a smart assistant for LLMs

Large language models (LLMs) show promise in tackling planning problems, but there is a trade-off between flexibility and complexity. While LLMs can act as zero-shot planners, they struggle with complex tasks involving multiple constraints or long-term goals.


Many frameworks that address these challenges require task-specific preparation, such as tailored examples and predefined validators, which limits their ability to adapt to different tasks.
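The plan-then-validate loop these frameworks rely on can be sketched as follows. This is a minimal illustration, not MIT's system: `generate_plan` stands in for any LLM call, and `toy_planner`/`toy_validator` are hypothetical names invented here to show how a task-specific validator feeds constraint violations back to the planner.

```python
from typing import Callable, List, Optional

def plan_with_validator(
    goal: str,
    generate_plan: Callable[[str, List[str]], List[str]],
    validate: Callable[[List[str]], Optional[str]],
    max_attempts: int = 3,
) -> Optional[List[str]]:
    """Ask a planner for a plan, check it with a task-specific
    validator, and feed violations back until the plan passes."""
    feedback: List[str] = []
    for _ in range(max_attempts):
        plan = generate_plan(goal, feedback)  # e.g. an LLM call
        error = validate(plan)                # None means the plan is valid
        if error is None:
            return plan
        feedback.append(error)                # retry with the violation as feedback
    return None                               # no valid plan within the budget

# Toy usage: the "planner" proposes steps; the validator enforces one constraint.
def toy_planner(goal: str, feedback: List[str]) -> List[str]:
    steps = ["pack bags", "drive to airport"]
    if feedback:                               # second attempt: honor the feedback
        steps.insert(0, "book ticket")
    return steps

def toy_validator(plan: List[str]) -> Optional[str]:
    return None if "book ticket" in plan else "constraint violated: no ticket booked"

print(plan_with_validator("travel", toy_planner, toy_validator))
```

The point of the pattern is the cost it carries: the validator (and often the examples) must be written per task, which is exactly the adaptability limit the article describes.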
