Experts disclosed a Reprompt attack that allowed single-click data exfiltration from Microsoft Copilot via indirect prompt injection, now fixed by MS.
Google has confirmed that it’s now possible to change your @gmail.com address. This means that if your current email is [email protected], you can now change it to [email protected].
The Gootloader malware, typically used for initial access, is now using a malformed ZIP archive designed to evade detection by concatenating up to 1,000 archives.
In doing so, the malware, which is an archived JScript file, causes many tools to crash when trying to analyze it.
According to researchers, the malicious file is successfully unpacked using the default utility in Windows, but tools relying on 7-Zip and WinRAR fail.
Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has given us formal tools to analyze these requirements of falsifiability and non-triviality for theories of consciousness. Surprisingly, many contemporary theories of consciousness fail to pass this bar, including theories based on causal structure but also (as I demonstrate) theories based on function. Herein I show these requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, there cannot be any falsifiable and non-trivial theory of consciousness that judges them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result, which is that theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: If continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness.
Researchers from Kyushu University discovered a previously unrecognized synaptic “hotspot” that forms during adolescence, challenging the long-held view that adolescent brain development was dominated by synaptic pruning. This hotspot fails to form in mice carrying a schizophrenia-associated gene, pointing to a potential link between adolescent synaptic formation and psychiatric disorders, including schizophrenia.
Adolescence marks an important transition not just socially and physically, but neurologically. During this period, higher cognitive functions such as planning, problem-solving, and decision-making gradually mature. Yet, the underlying mechanisms of neural circuit development remain poorly understood.
Key to this process are synapses—the functional connections between neurons allow information to flow through the brain. Previously, it has long been hypothesized that synapse numbers increase during childhood and then decrease during adolescence. It has also been proposed that excessive “synaptic pruning,” a process that refines neural circuits by eliminating unused or weak connections, may lead to neuropsychiatric disorders. One example is schizophrenia, a condition characterized by hallucinations, delusions, or disorganized thinking.
The idea never died, progress is still being made.
Nanotechnology was once imagined as the next great technological revolution—atom-by-atom manufacturing, machines as small as cells, and materials we can only dream of today. Instead, it stalled. While AI, robotics, and nuclear surged ahead, nanotech faded into the background, reduced to buzzwords and sci-fi aesthetics.
But the idea never died.
We can manipulate matter at the atomic scale. We can design perfect materials. We can build molecular machines. What’s been missing isn’t physics—it’s ambition, investment, and the will to push beyond today’s tools.
In this interview with futurist J. Storrs Hall, we explore what nanotechnology really is, why it drifted off course, and why its future may finally be on the horizon. If AI was a “blue-sky fantasy” until suddenly it wasn’t, what happens when someone decides nanotech deserves the same surge of talent, money, and imagination?