
One of the variables in TD algorithms is called the reward prediction error (RPE): the difference between the reward predicted at the current state and the actual reward received plus the discounted reward predicted at the next state. TD learning theory gained traction in neuroscience once it was demonstrated that the firing patterns of dopaminergic neurons in the ventral tegmental area (VTA) during reinforcement learning resemble the RPE [5,9,10].
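
In the standard textbook formulation (not specific to the model proposed here), the RPE at a given time step is simply the mismatch between what was predicted and what was actually observed one step later. A minimal sketch, with illustrative values:

```python
# Standard TD reward-prediction error: delta = r(t+1) + gamma * V(s(t+1)) - V(s(t)).
# The variable names and the discount factor gamma = 0.9 are illustrative choices.
def rpe(reward_next, value_current, value_next, gamma=0.9):
    return reward_next + gamma * value_next - value_current

# An unexpected reward (nothing was predicted before or after) gives a positive RPE,
# mirroring the phasic burst reported for VTA dopaminergic neurons.
print(rpe(reward_next=1.0, value_current=0.0, value_next=0.0))  # 1.0
```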

Implementations of TD as computer algorithms are straightforward, but they become more complex when mapped onto plausible neural machinery [11,12,13]. Current implementations of neural TD assume a set of temporal basis functions [13,14] that are activated by external cues. For this assumption to hold, each possible external cue must activate a separate set of basis functions, and these basis functions must tile every learnable interval between stimulus and reward.
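
To make that assumption concrete, the sketch below shows how such a neural TD scheme is usually simulated: a cue triggers a fixed bank of Gaussian temporal basis functions spanning the cue-reward interval, and TD errors adjust the weights on those functions. The basis shapes, counts, and learning parameters are illustrative assumptions, not taken from any specific model cited above.

```python
import numpy as np

T = 50                                    # time steps in a trial after cue onset
centres = np.arange(5, T, 5)              # basis functions tiling the cue-reward interval
widths = 2.0
phi = np.exp(-0.5 * ((np.arange(T)[:, None] - centres) / widths) ** 2)  # Gaussian bumps
w = np.zeros(len(centres))                # weights learned onto the basis functions

gamma, alpha = 0.98, 0.1
reward = np.zeros(T)
reward[40] = 1.0                          # reward delivered 40 steps after the cue

for trial in range(200):
    for t in range(T - 1):
        v_t, v_next = phi[t] @ w, phi[t + 1] @ w
        delta = reward[t + 1] + gamma * v_next - v_t   # reward prediction error
        w += alpha * delta * phi[t]                    # update basis-function weights

# After training, phi @ w approximates a value signal ramping up toward the reward;
# note that this only works because the basis functions already tile the relevant interval.
```

The last comment is the crux of the argument that follows: a fixed tiling must exist in advance for every cue and every possible delay.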

In this paper, we argue that these assumptions do not scale and are therefore implausible at a fundamental conceptual level, and we demonstrate that some predictions of such algorithms are inconsistent with well-established experimental results. Instead, we propose that the temporal basis functions used by the brain are themselves learned. We call this theoretical framework Flexibly Learned Errors in Expected Reward, or FLEX for short. We also propose a biophysically plausible implementation of FLEX as a proof-of-concept model. We show that key predictions of this model are consistent with actual experimental results but inconsistent with some key predictions of TD theory.

The performance of artificial intelligence (AI) tools, including large computational models for natural language processing (NLP) and computer vision algorithms, has improved rapidly over the past decades. One reason for this is that the datasets used to train these algorithms have grown exponentially, now comprising hundreds of thousands of images and texts, often collected from the internet.

Training data for robot control and planning algorithms, on the other hand, remains far less abundant, in part because acquiring it is not as straightforward. Some computer scientists have thus been trying to create larger datasets and platforms that could be used to train computational models for a wide range of robotics applications.

In a recent paper, pre-published on the arXiv server and set to be presented at the Robotics: Science and Systems 2024 conference, researchers at the University of Texas at Austin and NVIDIA Research introduced one such platform, called RoboCasa.

The concept of traveling beyond the speed of light has been given theoretical grounding through the Alcubierre warp drive. Proposed by physicist Miguel Alcubierre in the 1990s, this warp drive involves creating a space-time bubble around a spacecraft. By contracting space in front of the spacecraft and expanding it behind, the ship can ride a wave of space-time, seemingly achieving faster-than-light travel without breaking the cosmic speed limit. In essence, it’s the space around the ship that moves, not the ship itself, allowing for rapid traversal of vast distances.

The theoretical feasibility of the Alcubierre warp drive hinges on generating an immense amount of energy, currently beyond our technological capabilities. The ship’s warp core, similar in spirit to a nuclear reactor, would use matter-antimatter collisions to produce the energy needed to warp space. While the concept originated in fiction, Alcubierre proposed a solution to Einstein’s field equations that aligns with the principles of the Star Trek warp drive.
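
For readers who want to see what that solution looks like, Alcubierre’s line element is commonly quoted in the form below (a standard statement of the published metric, not a result derived in this article), where v_s is the bubble’s speed, r_s the distance from the bubble’s centre, and f a shaping function close to 1 inside the bubble and 0 outside:

```latex
ds^2 = -c^2\,dt^2 + \bigl(dx - v_s\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2,
\qquad
f(r_s) = \frac{\tanh\bigl(\sigma (r_s + R)\bigr) - \tanh\bigl(\sigma (r_s - R)\bigr)}{2\,\tanh(\sigma R)}
```

Here R sets the bubble radius and σ how sharply space transitions at its wall; the contraction in front and expansion behind follow from how f varies across the bubble.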

NASA has recently developed a model of the Alcubierre warp drive. Ongoing developments and models inspired by the Alcubierre drive suggest that, in the not-too-distant future, interstellar travel might no longer be confined to science fiction.

The Alcubierre warp drive presents an intriguing approach to faster-than-light travel by manipulating spacetime. Unlike traditional propulsion, this theoretical model involves compressing spacetime in front of a spacecraft and expanding it behind, creating a space-time bubble that carries the ship along.

Restraining or slowing the hallmarks of ageing at the cellular level has been proposed as a route to increased organismal lifespan and healthspan. Consequently, there is great interest in anti-ageing drug discovery. However, this currently requires laborious and lengthy longevity analysis. Here, we present a novel screening readout for the expedited discovery of compounds that restrain ageing of cell populations in vitro and enable extension of lifespan in vivo.

Using Illumina methylation arrays, we monitored the DNA methylation changes accompanying long-term passaging of adult primary human cells in culture. This enabled us to develop, test, and validate the CellPopAge Clock, an epigenetic clock and underlying algorithm that is unique among existing epigenetic clocks in being designed to detect anti-ageing compounds in vitro. Additionally, we measured markers of senescence and performed longevity experiments in vivo in Drosophila to further validate our approach for discovering novel anti-ageing compounds. Finally, we benchmark our epigenetic clock against other available epigenetic clocks to consolidate its usefulness and specialisation for primary cells in culture.
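
As a rough illustration of how a clock of this kind is typically constructed, the sketch below fits a penalized regression from methylation beta-values to a passage-based age label. The CpG count, the use of population doublings as the label, and all hyper-parameters are assumptions for illustration only, not the published CellPopAge pipeline.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples, n_cpgs = 120, 500
beta_values = rng.uniform(0, 1, size=(n_samples, n_cpgs))       # stand-in methylation data
population_doublings = rng.uniform(5, 60, size=n_samples)       # stand-in culture-age label

# Elastic-net regression selects a sparse set of CpGs whose methylation tracks culture age.
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(beta_values, population_doublings)
predicted_age = clock.predict(beta_values)

# In a screen, cells treated with a candidate anti-ageing compound would be expected to
# show predicted ages below the untreated trajectory at the same passage number.
```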

We developed a novel epigenetic clock, the CellPopAge Clock, to accurately monitor the age of a population of adult human primary cells. We find that the CellPopAge Clock can detect decelerated passage-based ageing of human primary cells treated with rapamycin or trametinib, well-established longevity drugs. We then utilise the CellPopAge Clock as a screening tool for the identification of compounds which decelerate ageing of cell populations, uncovering novel anti-ageing drugs, torin2 and dactolisib (BEZ-235). We demonstrate that delayed epigenetic ageing in human primary cells treated with anti-ageing compounds is accompanied by a reduction in senescence and ageing biomarkers. Finally, we extend our screening platform in vivo by taking advantage of a specially formulated holidic medium for increased drug bioavailability in Drosophila. We show that the novel anti-ageing drugs, torin2 and dactolisib (BEZ-235), increase longevity in vivo.

A research team has constructed a coherent superposition of quantum evolution with two opposite directions in a photonic system and confirmed its advantage in characterizing input-output indefiniteness. The study was published in Physical Review Letters.

The notion that time flows inexorably from the past to the future is deeply rooted in people’s minds. However, the laws of physics that govern the motion of objects in the microscopic world do not single out a preferred direction of time.

To be more specific, the basic equations of motion of both classical and quantum mechanics are reversible: changing the direction of the time coordinate of a dynamical process (possibly along with the direction of some other parameters) still yields a valid physical process.
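
A minimal numerical illustration of that reversibility (a toy two-level system, not the photonic setup used in the study): evolving a state forward under a Hamiltonian and then applying the time-reversed evolution returns it exactly to where it started.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0], [1.0, 0.0]])        # Pauli-X Hamiltonian (hbar = 1)
t = 0.7
U = expm(-1j * H * t)                          # forward evolution U(t)
psi0 = np.array([1.0, 0.0], dtype=complex)     # initial state |0>

psi_back = U.conj().T @ (U @ psi0)             # forward, then the reversed evolution U(t)^dagger
print(np.allclose(psi_back, psi0))             # True: the dynamics run equally well backwards
```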

Researchers have developed a genetic algorithm for designing phononic crystal nanostructures, significantly advancing quantum computing and communications.

The new method, validated through experiments, allows precise control of acoustic wave propagation, promising improvements in devices like smartphones and quantum computers.
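
A genetic algorithm for this kind of design task typically encodes a unit cell as a string of material choices and evolves it toward a target acoustic property. The sketch below is a generic illustration under that assumption; the fitness function is a placeholder where a real implementation would call an acoustic band-structure solver.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CELLS, POP, GENS = 16, 40, 60          # unit-cell pixels, population size, generations

def fitness(layout):
    # Placeholder objective rewarding alternating materials; a real phononic-crystal GA
    # would evaluate the simulated band gap or transmission spectrum here.
    return np.sum(layout[:-1] != layout[1:])

pop = rng.integers(0, 2, size=(POP, N_CELLS))   # 0/1 = two candidate materials per pixel
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]            # keep the fitter half
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_CELLS)
        child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
        flip = rng.random(N_CELLS) < 0.05                     # low-rate mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, *children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best)
```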


The Robotic Autonomy in Complex Environments with Resiliency (RACER) program successfully tested autonomous movement on a new, much larger fleet vehicle – a significant step in scaling up the adaptability and capability of the underlying RACER algorithms.

The RACER Heavy Platform (RHP) vehicles are 12-ton, 20-foot-long, skid-steer tracked vehicles – similar in size to forthcoming robotic and optionally manned combat/fighting vehicles. The RHPs complement the 2-ton, 11-foot-long, Ackermann-steered, wheeled RACER Fleet Vehicles (RFVs) already in use.

“Having two radically different types of vehicles helps us advance towards RACER’s goal of platform agnostic autonomy in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions,” said Stuart Young, RACER program manager.

A research team from Japan, including scientists from Hitachi, Ltd. (TSE: 6501, Hitachi), Kyushu University, RIKEN, and HREM Research Inc. (HREM), has achieved a major breakthrough in the observation of magnetic fields at unimaginably small scales.

In collaboration with National Institute of Advanced Industrial Science and Technology (AIST) and the National Institute for Materials Science (NIMS), the team used Hitachi’s atomic-resolution holography electron microscope—with a newly developed image acquisition technology and defocus correction algorithms—to visualize the magnetic fields of individual atomic layers within a crystalline solid.

Many advances in fields such as catalysis and transportation have been made possible by the development and adoption of high-performance materials with tailored characteristics. Atom arrangement and electron behavior are among the most critical factors that dictate a crystalline material’s properties.

Conventional encryption methods rely on complex mathematical algorithms and the limits of current computing power. However, with the rise of quantum computers, these methods are becoming increasingly vulnerable, necessitating quantum key distribution (QKD).

QKD is a technology that leverages the unique properties of quantum physics to secure data transmission. This method has been continuously optimized over the years, but establishing large networks has been challenging due to the limitations of existing quantum light sources.
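
For context on what “leveraging quantum properties” means in practice, the toy sketch below mimics the sifting step of the BB84 protocol, the workhorse QKD scheme: only measurements made in matching bases contribute to the shared key, and (not modelled here) an eavesdropper would reveal herself through added errors. This is an idealized illustration, not the deterministic single-photon, intercity-fibre experiment described next.

```python
import secrets

n = 32
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# With matching bases Bob recovers Alice's bit exactly; mismatched bases give a
# random outcome and are discarded during sifting.
bob_bits = [b if ab == bb else secrets.randbelow(2)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

sifted_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print(f"raw transmissions: {n}, sifted key bits: {len(sifted_key)}")
```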

In a new article published in Light: Science & Applications, a team of scientists in Germany have achieved the first intercity QKD experiment with a deterministic single-photon source, revolutionizing how we protect our confidential information from cyber threats.