
The Signals in Your Brain that Tell You When It’s Time to Move

A new study published in Nature Communications this week examines how the brain initiates spontaneous actions. The study was led by Jake Gavenas, PhD, while he was a doctoral student at the Brain Institute at Chapman University, and co-authored by two Brain Institute faculty members, Uri Maoz and Aaron Schurger. In addition to demonstrating how spontaneous action emerges without environmental input, the study has implications for the origins of the slow ramping of neural activity before movement onset, a commonly observed but poorly understood phenomenon.

Imagine, for example, standing atop a high-dive platform, trying to summon the willpower to jump. Nothing in the outside world tells you when to jump; that decision comes from within. At some point you experience deciding to jump, and then you jump. In the background, your brain (or, more specifically, your motor cortex) sends electrical signals that cause carefully coordinated muscle contractions across your body, resulting in you running and jumping. But where in the brain do these signals originate, and how do they relate to the conscious experience of willing your body to move?

In their study, Gavenas and colleagues propose an answer to that question. They simulated spontaneous activity in simple neural networks and compared this simulated activity with intracortical recordings from humans who moved spontaneously. The results suggest something striking: many rapidly fluctuating neurons can interact in a network to give rise to very slow fluctuations at the level of the population.
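To build intuition for that claim, here is a minimal Python sketch, a toy model rather than the network the study actually simulated: each unit decorrelates within a handful of time steps, yet weak shared coupling (the strength g below is an assumed parameter) gives the population average a far longer timescale.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, a, g = 200, 20000, 0.1, 0.98  # units, steps, per-step leak, shared coupling

x = np.zeros(N)
pop = np.empty(T)   # population-average signal
unit = np.empty(T)  # one individual unit
for t in range(T):
    # Each unit leaks quickly toward g * (population mean) and is driven by
    # fast, independent noise; g < 1 keeps the dynamics stable.
    x = (1 - a) * x + a * g * x.mean() + np.sqrt(a) * rng.normal(0, 1, N)
    pop[t] = x.mean()
    unit[t] = x[0]

def timescale(s, max_lag=2000):
    """Smallest lag (in steps) at which the autocorrelation drops below 1/e."""
    s = s - s.mean()
    var = s @ s / len(s)
    for lag in range(1, max_lag):
        if (s[:-lag] @ s[lag:]) / (len(s) - lag) < var / np.e:
            return lag
    return max_lag

print("single-unit timescale:", timescale(unit), "steps")  # tens of steps
print("population timescale:", timescale(pop), "steps")    # far longer
```

In this toy network, the slow population mode is an emergent property of the coupling rather than of any single unit, which is the qualitative point the study makes about slow pre-movement ramping.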

AI-Assisted Police Reports and the Challenge of Generative Suspicion

This article delves into a transformative shift in the criminal justice system brought on by the use of AI-assisted police reports.

Police reports play a central role in the criminal justice system. A police report is often the only official memorialization of what happened during an incident, shaping probable cause determinations, pretrial detention decisions, motions to suppress, plea bargains, and trial strategy. For over a century, human police officers wrote the factual narratives that shaped the trajectory of individual cases and organized the entire legal system.

All that is about to change with the advent of AI-assisted police reports. Today, with the click of a button, generative AI large language models (LLMs) with predictive-text capabilities can turn the audio feed of a police body-worn camera into a pre-written draft police report. Police officers then fill in the blanks with inserted details, like a “Mad Libs” of suspicion, and submit the edited version as the official narrative of an incident.
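The mechanics of such a pipeline can be sketched in a few lines. The snippet below is a hypothetical illustration using an OpenAI-style chat API, since the article mentions ChatGPT-class models; the commercial products police departments actually use are proprietary, and the model name and prompt here are assumptions for illustration only.

```python
# Hypothetical draft-report pipeline: transcript in, editable draft out.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

transcript = "..."  # text transcribed from a body-worn camera's audio feed

draft = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not any vendor's actual choice
    messages=[
        {"role": "system",
         "content": "Draft a police incident report from this body-camera "
                    "transcript. Mark anything uncertain as [OFFICER: FILL IN]."},
        {"role": "user", "content": transcript},
    ],
)
print(draft.choices[0].message.content)  # the officer edits the blanks, then submits
```

The “Mad Libs” quality the article describes lives in those bracketed blanks: the model commits to a narrative frame first, and the officer fills in the particulars afterward.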

From the police perspective, AI-assisted police reports offer clear cost savings and relief from dreaded paperwork. From the technology perspective, ChatGPT and similar generative AI models have shown that LLMs are good at predictive text generation in structured settings, which is exactly the use case of police reports. But hard technological, theoretical, and practical questions have emerged about how generative AI might infect a foundational building block of the criminal legal system.

DARPA Robotic Satellite Servicing

NASA and the Defense Advanced Research Projects Agency (DARPA) have signed an interagency agreement to collaborate on a satellite servicing demonstration in geosynchronous Earth orbit, where hundreds of satellites provide communications, meteorological, national security, and other vital functions.

Under this agreement, NASA will provide subject matter expertise to DARPA’s Robotic Servicing of Geosynchronous Satellites (RSGS) program to help complete the technology development, integration, testing, and demonstration. The RSGS servicing spacecraft will advance in-orbit satellite inspection, repair, and upgrade capabilities.

CEVA & CERN: Where Edge AI and Particle Physics Intersect

About 63% of the world's population accesses the internet [Source: Statista], and a majority of them experience the internet through web pages. As such, the general population refers to the internet and the web interchangeably. Of course, those in the technology arena know the difference, but they may or may not remember when and where the World Wide Web (WWW) was invented. Without its invention, the internet experience of today would not be the same.

All living creatures automatically experience something: their “mass,” interchangeably and inaccurately referred to as “weight” by the general population. Of course, those who remember their physics know the difference. While mass is taken for granted in general physics, there is a field of physics that tries to explain what gives matter its mass. The existence of this mass-giving field was confirmed when the Higgs boson particle was discovered.

The organization behind both the invention of the WWW and the discovery of the Higgs boson, among many other remarkable achievements, is CERN. The World Wide Web was invented in 1989 by Tim Berners-Lee while he was working at CERN, and the mass-giving field was confirmed in 2012, when the Higgs boson was discovered there.

New machine learning model developed to prevent EV battery fires

Researchers use AI and modeling to improve EV battery safety:

One of the most critical safety concerns for electric vehicles is keeping their batteries cool, as temperature spikes can lead to dangerous consequences.

New research led by a University of Arizona doctoral student proposes a way to predict and prevent temperature spikes in the lithium-ion batteries commonly used to power such vehicles.

The paper “Advancing Battery Safety,” led by College of Engineering doctoral student Basab Goswami, is published in the Journal of Power Sources.
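To make the prediction side of that idea concrete, here is a hedged toy example: a regressor trained on synthetic battery telemetry to flag likely temperature spikes. The features, threshold, and model below are illustrative assumptions, not the architecture or data from Goswami's paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Synthetic telemetry: discharge current (A), cell voltage (V), ambient temp (deg C)
n = 5000
X = np.column_stack([
    rng.uniform(0, 120, n),    # current draw
    rng.uniform(3.0, 4.2, n),  # terminal voltage
    rng.uniform(10, 45, n),    # ambient temperature
])
# Toy ground truth: heating grows with I^2 (Joule heating) plus an ambient term
y = 0.004 * X[:, 0] ** 2 + 0.6 * X[:, 2] + rng.normal(0, 1.5, n)

model = GradientBoostingRegressor().fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])

SPIKE_C = 60.0  # illustrative alarm threshold in degrees Celsius
print(f"flagged {(pred > SPIKE_C).sum()} of {len(pred)} samples as likely spikes")
```

In a real battery-management system, a prediction like this would trigger preventive action, such as throttling the current draw or increasing coolant flow, before a spike occurs.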

Seeing like a butterfly: Optical invention enhances camera capabilities

Butterflies can see more of the world than humans, including more colors and the field oscillation direction, or polarization, of light. This special ability enables them to navigate with precision, forage for food and communicate with one another. Other species, like the mantis shrimp, can sense an even wider spectrum of light, as well as the circular polarization, or spinning states, of light waves. They use this capability to signal a “love code,” which helps them find and be discovered by mates.

Inspired by these abilities in the animal kingdom, a team of researchers at the Penn State College of Engineering has developed an ultrathin optical element known as a metasurface, which can attach to a conventional camera. Through tiny, antenna-like nanostructures that tailor the properties of light, the metasurface encodes the spectral and polarization data of images captured in a snapshot or video. A machine learning framework, also developed by the team, then decodes this multi-dimensional visual information in real time on a standard laptop.

The researchers have published their work in Science Advances.
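The encode-then-decode idea behind the device can be sketched with a toy linear model. The snippet below stands a random matrix in for the calibrated metasurface response and uses least squares where the team uses machine learning; the dimensions and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_channels = 8   # e.g., a few spectral bands x polarization states per pixel
n_sensors = 16   # raw camera measurements shaped by the nanostructures

# Stand-in for the calibrated metasurface encoding (real responses are measured)
A = rng.normal(size=(n_sensors, n_channels))

scene = rng.uniform(size=n_channels)                      # true spectral/polarization content
measurement = A @ scene + rng.normal(0, 0.01, n_sensors)  # noisy snapshot

# Decoding: plain least squares here; a learned decoder can additionally absorb
# nonlinearities and structured noise, enabling real-time use on a laptop.
decoded, *_ = np.linalg.lstsq(A, measurement, rcond=None)
print("max reconstruction error:", float(np.abs(decoded - scene).max()))
```

The essential point the toy preserves is that a single snapshot carries more channels than a conventional sensor records directly, provided the optical encoding is known well enough to invert.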