Miki, A., Hasegawa, S., Yuzaki, S. et al. Exploring the proprioceptive potential of joint receptors using a biomimetic robotic joint. Sci Rep 16, 4724 (2026). https://doi.org/10.1038/s41598-025-27311-3
Applied Intuition’s cofounders are building software that can drive everything from planes to tanks to automobiles. But to expand beyond the company’s $800 million business selling tech for cars, they will have to take on Tesla, Google, Nvidia and a host of other startups jostling for pole position in the autonomy race.
Researchers from the Department of Computer Science at Bar-Ilan University and from NVIDIA’s AI research center in Israel have developed a new method that significantly improves how artificial intelligence models understand spatial instructions when generating images—without retraining or modifying the models themselves. Image-generation systems often struggle with simple prompts such as “a cat under the table” or “a chair to the right of the table,” frequently placing objects incorrectly or ignoring spatial relationships altogether. The Bar-Ilan research team has introduced a creative solution that allows AI models to follow such instructions more accurately in real time.
The new method, called Learn-to-Steer, works by analyzing the internal attention patterns of an image-generation model, effectively offering insight into how the model organizes objects in space. A lightweight classifier then subtly guides the model’s internal processes during image creation, helping it place objects more precisely according to user instructions. The approach can be applied to any existing trained model, eliminating the need for costly retraining.
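The mechanism described above, reading spatial layout off a model's attention maps and scoring it with a lightweight classifier, can be illustrated with a toy sketch. This is not the published Learn-to-Steer implementation: the 8x8 attention maps, the center-of-mass probe, and all names below are hypothetical simplifications chosen only to show how a "left of" relation could be read from attention mass.

```python
import numpy as np

def centroid(attn):
    """Center of mass (x, y) of a single HxW attention map."""
    attn = attn / attn.sum()
    H, W = attn.shape
    ys, xs = np.mgrid[0:H, 0:W]          # row index = y, column index = x
    return (attn * xs).sum(), (attn * ys).sum()

def left_of_score(attn_a, attn_b):
    """Toy spatial probe: positive when token A's attention mass
    lies to the left of token B's. A real method would use a trained
    classifier over the full maps, not a hand-written centroid rule."""
    xa, _ = centroid(attn_a)
    xb, _ = centroid(attn_b)
    return xb - xa

# Synthetic attention maps for a prompt like "a cat to the left of a table":
# the 'cat' token attends to the left side, the 'table' token to the right.
H = W = 8
cat_attn = np.zeros((H, W));   cat_attn[4, 1] = 1.0
table_attn = np.zeros((H, W)); table_attn[4, 6] = 1.0

score = left_of_score(cat_attn, table_attn)
print(f"left-of score: {score:.1f}")  # positive score: layout matches the prompt
```

In the real system, a score like this would act as a differentiable steering signal: during generation, the latents are nudged in the direction that increases the probe's agreement with the prompted relation, without touching the model's weights.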
The results show substantial performance gains. In the Stable Diffusion SD2.1 model, accuracy in understanding spatial relationships increased from 7% to 54%. In the Flux.1 model, success rates improved from 20% to 61%, with no negative impact on the models’ overall capabilities.
Philosophical Studies — The ability to suffer, in the case of artificial entities, is often viewed as a moral turning point—once detected, there is no going back, and the moral landscape is irreversibly altered. The presence of entities capable of suffering imposes moral and legal obligations on humans. It is therefore unsurprising that many have urged caution in pursuing artificial suffering, with some even proposing a moratorium. In this paper, however, I argue that the emergence of artificial suffering need not entail moral disaster. On the contrary, I defend its development and contend that it may be a necessary feature of superintelligent robots. I suggest that artificial suffering could be essential for enabling human-like ethics in machines, bridging the retribution gap, and functioning as a control mechanism to mitigate existential risks. Rather than constraining research in this area, I maintain that work on artificial suffering should be actively intensified.
We introduce the Layered Self-Supervised Knowledge Distillation (LSSKD) framework for training compact deep learning models. Unlike traditional methods that rely on pre-trained teacher networks, our approach appends auxiliary classifiers to intermediate feature maps, generating diverse self-supervised knowledge and enabling one-to-one transfer across different network stages. Our method achieves an average improvement of 4.54% over the state-of-the-art PS-KD method and a 1.14% gain over SSKD on CIFAR-100, with a 0.32% improvement on ImageNet compared to HASSKD. Experiments on Tiny ImageNet and CIFAR-100 under few-shot learning scenarios also achieve state-of-the-art results. These findings demonstrate the effectiveness of our approach in enhancing model generalization and performance without the need for large over-parameterized teacher networks. Importantly, at the inference stage, all auxiliary classifiers can be removed, yielding no extra computational cost. This makes our model suitable for deploying small language models on affordable low-computing devices. Owing to its lightweight design and adaptability, our framework is particularly suitable for multimodal sensing and cyber-physical environments that require efficient and responsive inference. LSSKD facilitates the development of intelligent agents capable of learning from limited sensory data under weak supervision.
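The abstract's core idea, auxiliary classifiers attached to intermediate feature maps with deeper stages distilling into shallower ones, can be sketched in a toy NumPy example. The stage dimensions, random linear heads, and temperature below are illustrative assumptions, not the paper's architecture; the point is only the one-to-one stage-wise transfer via softened-label KL divergence.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    """Mean KL divergence KL(p || q) over a batch of distributions."""
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean())

rng = np.random.default_rng(0)

# Hypothetical stage features for a batch of 4 inputs (shallow -> deep),
# each with its own auxiliary linear classifier over 10 classes.
feats = [rng.normal(size=(4, d)) for d in (8, 16, 32)]
heads = [rng.normal(size=(d, 10)) * 0.1 for d in (8, 16, 32)]
logits = [f @ W for f, W in zip(feats, heads)]

T = 4.0  # softened targets, standard in distillation

# One-to-one transfer: each shallower stage's classifier is trained to
# match the softened predictions of the next deeper stage.
losses = [kl(softmax(logits[i + 1], T), softmax(logits[i], T))
          for i in range(len(logits) - 1)]
print("stage-wise distillation losses:", [round(l, 4) for l in losses])
```

At inference time the auxiliary heads are simply dropped and only the backbone's final classifier runs, which is why the method adds no deployment cost.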
A robot butler sounds like a nice idea, but the technology has its drawbacks.
Johns Hopkins scientists say they have used 3D imaging, special microscopes and artificial intelligence (AI) programs to construct new maps of mouse brains showing the precise locations of more than 10 million cells called oligodendrocytes. These cells form myelin, a protective sleeve around nerve cell axons, which speeds transmission of electrical signals and supports brain health.
Published online Feb. 18 in Cell and funded by the National Institutes of Health, the maps not only paint a whole-brain picture of how myelin content varies between brain circuits, but also provide insights into how the loss of such cells impacts human diseases such as multiple sclerosis, Alzheimer’s disease and other disorders that affect learning, memory, sensory ability and movement, say the researchers. Although mouse and human brains are not the same, they share many characteristics and most biological processes.
“Our study identifies not only the location of oligodendrocytes in the brain, but also integrates information about gene expression and the structural features of neurons,” says Dwight Bergles, Ph.D., the Diana Sylvestre and Charles Homcy Professor in the Department of Neuroscience at the Johns Hopkins University School of Medicine. “It’s like mapping the location of all the trees in a forest, but also adding information about soil quality, weather and geology to understand the forest ecosystem.”