
One of the variables in TD algorithms is called the reward prediction error (RPE): the difference between the predicted reward (value) at the current state and the sum of the reward actually received and the discounted predicted value at the next state. TD learning theory gained traction in neuroscience once it was demonstrated that the firing patterns of dopaminergic neurons in the ventral tegmental area (VTA) during reinforcement learning resemble RPE5,9,10.
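
As an illustration of this standard TD formulation (not of the FLEX model proposed below), the following minimal tabular TD(0) sketch computes the RPE and uses it to update value estimates; the state labels, learning rate, and discount factor are illustrative assumptions.

```python
# Minimal tabular TD(0) sketch: the reward prediction error (RPE)
# delta = r + gamma * V(next_state) - V(state) drives the value update.
# State labels, learning rate, and discount factor are illustrative only.

gamma = 0.9   # discount factor (assumed)
alpha = 0.1   # learning rate (assumed)

# value estimates for a toy trial: cue -> delay -> reward delivery
V = {"cue": 0.0, "delay": 0.0, "reward_state": 0.0}

def td_update(state, next_state, reward):
    """Compute the RPE and nudge the current state's value toward the TD target."""
    rpe = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * rpe
    return rpe

# repeated trials: the cue predicts a reward delivered two steps later
for _ in range(50):
    td_update("cue", "delay", reward=0.0)
    td_update("delay", "reward_state", reward=1.0)

print(V)  # the cue's value converges toward the discounted delayed reward
```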

TD is straightforward to implement as a computer algorithm, but becomes more complex when mapped onto plausible neural machinery11,12,13. Current implementations of neural TD assume a set of temporal basis functions13,14, which are activated by external cues. For this assumption to hold, each possible external cue must activate a separate set of basis functions, and these basis functions must tile all possible learnable intervals between stimulus and reward.
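
A rough sketch of what such a fixed temporal basis set entails (an illustrative construction, not taken from refs. 13,14): a bank of Gaussian bases triggered at cue onset that must tile the entire window of learnable delays, with the value function read out as a learned weighted sum. The window length, number of bases, and widths here are assumptions for illustration.

```python
import numpy as np

# Illustrative fixed bank of Gaussian temporal basis functions, triggered by a
# cue at t = 0, tiling every stimulus-reward delay that could be learned.
dt = 0.1                       # time step in seconds (assumed)
t = np.arange(0.0, 10.0, dt)   # assume delays of up to 10 s must be representable

n_basis = 20                                # one basis set per cue
centers = np.linspace(0.0, 10.0, n_basis)   # bases tiling the whole window
width = 0.5                                 # temporal width of each basis (assumed)

# basis_activity[i, k] = activation of basis i at time t[k] after cue onset
basis_activity = np.exp(-((t[None, :] - centers[:, None]) ** 2) / (2 * width**2))

# the TD value estimate at each time point is a learned weighted sum of the bases
weights = np.zeros(n_basis)             # learned by the TD rule for this cue
value_trace = weights @ basis_activity  # V(t) following this particular cue

# A second cue would require its own weights and, in a neural implementation,
# its own set of basis functions; this is the scaling concern raised in the text.
```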

In this paper, we argue that these assumptions do not scale and are therefore implausible at a fundamental conceptual level, and we demonstrate that some predictions of such algorithms are inconsistent with various established experimental results. Instead, we propose that the temporal basis functions used by the brain are themselves learned. We call this theoretical framework Flexibly Learned Errors in Expected Reward, or FLEX for short. We also propose a biophysically plausible implementation of FLEX as a proof-of-concept model. We show that key predictions of this model are consistent with actual experimental results but are inconsistent with some key predictions of TD theory.

A new superconducting compound offers a bridge to more practical superconductors with a potentially attractive range of applications, according to new research. And the new material's strange magnetic behavior recalls superconducting classics of decades ago, but this time in a material family that has already demonstrated its near-room-temperature bona fides.

Lanthanum hydrides—which combine atoms of the rare earth metal lanthanum with atoms of hydrogen—contain a range of superconducting materials of varying properties. One noteworthy material is lanthanum decahydride (LaH10), which boasts the world’s highest accepted superconducting transition temperature, at −23 °C. (The catch is that to achieve this feat, lanthanum decahydride must be subjected to 200 billion pascals of pressure.)

Now a different lanthanum hydride (La4H23) has revealed similar if not quite equally impressive superconductivity stats. (Its transition temperature is −168 °C at 122 billion Pa.) However, the new lanthanum hydride also has revealingly peculiar magnetic properties that suggest an unexpected family resemblance to the superstars of the superconductivity world, the cuprates.

World’s smallest violin for AI execs.


Researchers are ringing the alarm bells, warning that companies like OpenAI and Google are rapidly running out of human-written training data for their AI models.

And without new training data, it’s likely the models won’t be able to get any smarter, a point of reckoning for the burgeoning AI industry.

“There is a serious bottleneck here,” AI researcher Tamay Besiroglu, lead author of a new paper to be presented at a conference this summer, told the Associated Press. “If you start hitting those constraints about how much data you have, then you can’t really scale up your models efficiently anymore.”

To put the forecast electricity demand into context, consider this: a recent MIT study found that a single data center consumes electricity equivalent to 50,000 homes. Estimates indicate that Microsoft, Amazon, and Google operate about 600 data centers in the U.S. today…

Some argue that, by 2030, renewable power sources will supply 80% of electricity demand. For reference, the U.S. generated roughly 240 billion kilowatt-hours of solar and 425 billion kilowatt-hours of wind power in 2023, totaling 665 billion kilowatt-hours. Assuming a 50/50 split between wind and solar, that scenario implies that, to satisfy the U.S. electricity demand needed to keep the country competitive in AI, wind and solar would each have to generate approximately 3.4 trillion kilowatt-hours of electricity. That is more than a ten-fold increase over the next five years. The EIA reports that planned U.S. utility-scale electric-generating capacity additions in 2023 included 29 million kilowatts of solar (54% of the total) and 6 million kilowatts of wind (11% of the total), which pales in comparison to the estimated requirement.
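
As a quick back-of-the-envelope check using only the figures quoted above (the 3.4-trillion-kilowatt-hour per-source target is the article's, not an independent estimate), the implied scale-up per source works out to roughly ten-fold:

```python
# Back-of-the-envelope check using the figures quoted above; the per-source
# target of 3.4 trillion kWh is taken from the text, not independently derived.

solar_2023_kwh = 240e9          # 2023 U.S. solar generation, kWh
wind_2023_kwh = 425e9           # 2023 U.S. wind generation, kWh
combined_2023_kwh = solar_2023_kwh + wind_2023_kwh    # ~665 billion kWh

target_per_source_kwh = 3.4e12  # ~3.4 trillion kWh each for wind and solar

# with a 50/50 split, each source starts from half of today's combined output
per_source_baseline_kwh = combined_2023_kwh / 2
scale_up = target_per_source_kwh / per_source_baseline_kwh

print(f"required scale-up per source: ~{scale_up:.1f}x over roughly five years")
# ~10.2x, consistent with the "more than a ten-fold increase" cited above
```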

It just doesn't get much clearer: renewables cannot begin to supply the energy needed for AI data centers. Only fossil fuels and nuclear can get the job done. It's that simple. AI is not something global elites are going to let slide by. They've had a good run with the Big Green Grift, but those days are going to gradually (maybe not so gradually) come to an end as the demand for energy to power AI forces a reconsideration.

Can you pass me the whatchamacallit? It’s right over there next to the thingamajig.

Many of us will experience “lethologica”, or difficulty finding words, in everyday life. And it usually becomes more prominent with age.

Frequent difficulty finding the right word can signal changes in the brain consistent with the early (“preclinical”) stages of Alzheimer’s disease – before more obvious symptoms emerge.

The Belle II experiment is a large research effort aimed at precisely measuring weak-interaction parameters, studying exotic hadrons (i.e., a class of subatomic particles) and searching for new physical phenomena. This effort primarily relies on the analysis of data collected by the Belle II detector (i.e., a general-purpose spectrometer) and delivered by SuperKEKB, a particle collider, both located at the High Energy Accelerator Research Organization (KEK) in Tsukuba, Japan.