
A logical calculus of the ideas immanent in nervous activity

Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.
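Because the units are all-or-none, a net's behavior reduces to two-valued logic. Below is a minimal Python sketch, not code from the paper, of a McCulloch-Pitts threshold unit; the weights and thresholds are conventional textbook choices used here only to illustrate how single units can realize the propositional connectives.

```python
# A McCulloch-Pitts threshold unit: fires (1) iff the weighted sum of its
# binary inputs reaches the threshold. Weights and thresholds below are
# standard textbook choices, shown for illustration.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Propositional connectives, each realized by a single unit:
def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

# Truth tables confirm the units compute the intended connectives.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```

Loop-free nets of such units correspond to expressions of propositional logic; nets containing circles (feedback) require the more complicated logical means the abstract refers to.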

Shapeshifting soft robot uses electric fields to swing like a gymnast

Researchers have invented a highly agile new robot that can change shape thanks to amorphous characteristics akin to those of the popular Marvel anti-hero Venom.

The unique soft morphing creation, developed by the University of Bristol and Queen Mary University of London, is much more adaptable than current robots. The study, published in the journal Advanced Materials, showcases a jelly-like humanoid gymnast, made of electro-morphing gel, that can move from one place to another using its flexible body and limbs.

Researchers used a special material called electro-morphing gel (e-MG), which gives the robot its shapeshifting abilities: by manipulating electric fields from ultralightweight electrodes, it can bend, stretch, and move in ways that were previously difficult or impossible.

Disease-associated radial glia-like cells with epigenetically dysregulated interferon response in MS

Li et al. report that Edwardsiella piscicida employs the antitoxin protein HigA as a cross-kingdom effector: delivered in a T6SS-dependent manner, it directly activates IDO1, diverting host tryptophan metabolism to the kynurenine pathway rather than the serotonin pathway. The resulting fluctuation in serotonin levels modulates host intestinal histological damage and bacterial infection.

How To Track And Optimize Biomarkers: Blood Test #6 in 2025


Size doesn’t matter: Just a small number of malicious files can corrupt LLMs of any size

Large language models (LLMs), which power sophisticated AI chatbots, are more vulnerable than previously thought. According to research by Anthropic, the UK AI Security Institute and the Alan Turing Institute, it only takes 250 malicious documents to compromise even the largest models.

The vast majority of the data used to train LLMs is scraped from the public internet. While this helps them build knowledge and generate natural responses, it also puts them at risk from data poisoning attacks. It had been thought that the risk diminished as models grew, because attackers were assumed to need to poison a fixed percentage of the training data; in other words, corrupting the largest models would require massive amounts of poisoned data. But in this study, published on the arXiv preprint server, the researchers showed that an attacker needs only a small, fixed number of poisoned documents to potentially wreak havoc.

To assess the ease of compromising large AI models, the researchers built several LLMs from scratch, ranging from small systems (600 million parameters) to very large ones (13 billion parameters). Each model was trained on a correspondingly vast amount of clean public data, but the team inserted a fixed number of malicious files (100 to 500) into each training set.
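As a back-of-the-envelope illustration of why a fixed document count defies the percentage-based intuition, the Python sketch below computes the poisoned fraction of training corpora scaled to each model size. The corpus sizes (roughly 20 training tokens per parameter), the intermediate model sizes, and the poisoned-document length are assumptions for illustration, not figures from the study.

```python
# Back-of-the-envelope: a fixed batch of poisoned documents becomes an
# ever-smaller fraction of the corpus as models (and their training sets)
# grow. Assumes ~20 training tokens per parameter and ~1,000 tokens per
# poisoned document -- illustrative assumptions, not the study's figures.

POISONED_DOCS = 250        # attack size highlighted by the study
TOKENS_PER_DOC = 1_000     # assumed average poisoned-document length

model_params = {
    "600M": 600e6,
    "2B": 2e9,     # intermediate sizes added for illustration
    "7B": 7e9,
    "13B": 13e9,
}

for name, params in model_params.items():
    corpus_tokens = params * 20                # assumed corpus size
    poisoned = POISONED_DOCS * TOKENS_PER_DOC  # total poisoned tokens
    print(f"{name}: poisoned fraction = {poisoned / corpus_tokens:.2e}")
```

Under these assumptions the poisoned fraction shrinks by more than twentyfold from the smallest to the largest model, yet the study found that the same fixed number of documents sufficed at every scale, which is what makes the result surprising.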
