
Synchronicity! 😉 Just a few hours ago I watched a video which stated that the philosopher Henri Bergson argued that our linear perception of time limits our ability to appreciate the relationship between time and consciousness.


What if our understanding of time as a linear sequence of events is merely an illusion created by the brain’s processing of reality? Could time itself be an emergent phenomenon, arising from the complex interplay of quantum mechanics, relativity, and consciousness? How might the brain’s multidimensional computations, reflecting patterns found in the universe, reveal a deeper connection between mind and cosmos? Could Quantum AI and Reversible Quantum Computing provide the tools to simulate, manipulate, and even reshape the flow of time, offering practical applications of D-Theory that bridge the gap between theoretical physics and transformative technologies? These profound questions lie at the heart of Temporal Mechanics: D-Theory as a Critical Upgrade to Our Understanding of the Nature of Time, a 2025 paper and book by Alex M. Vikoulov. D-Theory, also referred to as Quantum Temporal Mechanics, Digital Presentism, and D-Series, challenges conventional views of time as a fixed, universal backdrop to reality and instead redefines it as a dynamic interplay between the mind and the cosmos.

Time, as experienced by humans, is more than a sequence of events dictated by physical laws. It emerges from our awareness of change, a psychological construct shaped by consciousness. Recent advancements in neuroscience, quantum physics, and cognitive science reveal fascinating parallels between the brain and the universe. Studies suggest that neural processes operate in up to 11 dimensions, echoing M-Theory’s depiction of a multiverse with similar dimensionality. These insights hint at a profound structural resemblance, where the brain and the cosmos mirror each other as interconnected systems of information processing.

Universal transformer memory optimizes prompts using neural attention memory models (NAMMs): simple neural networks that decide whether to “remember” or “forget” each token stored in the LLM’s memory.

“This new capability allows Transformers to discard unhelpful or redundant details, and focus on the most critical information, something we find to be crucial for tasks requiring long-context reasoning,” the researchers write.

NAMMs are trained separately from the LLM and are combined with the pre-trained model at inference time, which makes them flexible and easy to deploy. However, they need access to the inner activations of the model, which means they can only be applied to open-source models.
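To make the mechanism concrete, here is a minimal sketch of the inference-time keep/forget decision, assuming a PyTorch setting. The names (TokenScorer, prune_kv_cache) and the exact attention-derived features are illustrative placeholders, not Sakana AI's actual implementation, and training is omitted entirely:

```python
import torch
import torch.nn as nn

class TokenScorer(nn.Module):
    """Tiny network mapping per-token attention features to a keep score."""

    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_cached_tokens, feat_dim) -- e.g. summaries of how
        # strongly recent queries attended to each cached token.
        return self.net(feats).squeeze(-1)  # one score per token

def prune_kv_cache(keys, values, attn_feats, scorer, threshold=0.0):
    """Drop cached tokens whose learned score falls below the threshold."""
    with torch.no_grad():
        scores = scorer(attn_feats)   # (num_cached_tokens,)
    keep = scores > threshold         # binary "remember"/"forget" decision
    return keys[keep], values[keep]

# Illustrative usage on a fake cache of 10 tokens with 64-dim entries:
scorer = TokenScorer(feat_dim=4)
keys, values = torch.randn(10, 64), torch.randn(10, 64)
attn_feats = torch.randn(10, 4)
keys, values = prune_kv_cache(keys, values, attn_feats, scorer)
```

Even in this toy form, the design point the researchers highlight carries through: the scorer is a separate, tiny network, so it can be attached to any pre-trained model whose attention activations it can read.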

Super Saturday 1: A productive day at the OEC!

Our inaugural Super Saturday session kicked off on a high note! Emmanuel showcased his handyman skills by expertly fixing two flickering lights at the Ogba Educational Clinic (OEC).

Special thanks to Mr. Kevin for his support in purchasing the necessary parts, including the choke, which made the repair possible.

We’re grateful for the dedication and teamwork displayed by Emmanuel and Mr. Kevin. Their efforts have ensured a safer and more conducive learning environment for our students. #buildingthefuturewithai #therobotarecoming #STEM


Generative artificial intelligence (AI) presents myriad opportunities for integrity actors (anti-corruption agencies, supreme audit institutions, internal audit bodies and others) to enhance the impact of their work, particularly through the use of large language models (LLMs). As this type of AI becomes increasingly mainstream, it is critical for integrity actors to understand both where generative AI and LLMs can add the most value and the risks they pose. To advance this understanding, this paper draws on input from the OECD integrity and anti-corruption communities and provides a snapshot of the ways these bodies are using generative AI and LLMs, the challenges they face, and the insights these experiences offer to similar bodies in other countries. The paper also explores key considerations for integrity actors to ensure trustworthy AI systems and responsible use of AI as their capacities in this area develop.

OpenAI today announced an improved version of its most capable artificial intelligence model to date—one that takes even more time to deliberate over questions—just a day after Google announced its first model of this type.

OpenAI’s new model, called o3, replaces o1, which the company introduced in September. Like o1, the new model spends time ruminating over a problem in order to deliver better answers to questions that require step-by-step logical reasoning. (OpenAI chose to skip the “o2” moniker because it’s already the name of a mobile carrier in the UK.)

“We view this as the beginning of the next phase of AI,” said OpenAI CEO Sam Altman on a livestream Friday. “Where you can use these models to do increasingly complex tasks that require a lot of reasoning.”
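OpenAI has not disclosed how o3 spends its extra deliberation time. One well-known public technique with a similar flavor is self-consistency sampling: generate several independent step-by-step solutions and return the answer they most often agree on. A minimal sketch of that idea, assuming only a generic sample_fn wrapper around an LLM call (all names here are hypothetical):

```python
from collections import Counter
from typing import Callable

def deliberate(question: str, sample_fn: Callable[[str], str],
               num_chains: int = 8) -> str:
    """Majority-vote over several independently sampled reasoning chains.

    sample_fn should prompt a model for step-by-step reasoning and return
    only its final answer; any LLM API call can be wrapped this way.
    """
    answers = [sample_fn(question) for _ in range(num_chains)]
    # The most frequently reached answer wins.
    return Counter(answers).most_common(1)[0][0]

# Illustrative usage with a noisy stand-in for a real model call:
if __name__ == "__main__":
    import random
    fake_model = lambda q: random.choice(["42", "42", "41"])
    print(deliberate("What is 6 * 7?", fake_model))
```

The trade-off is the one described above: more sampled chains mean more inference-time compute in exchange for better answers on multi-step reasoning problems.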