
Chips could harvest their own energy using a newly created alloy

Why it matters: Electronic devices, from mobile phones to data centers, are notorious energy hogs. One solution is to harvest their waste heat directly on the chip and turn it back into electricity. The problem has been that none of the few materials capable of doing this is compatible with the technology used in current semiconductor fabrication plants. Now, researchers from across Europe have created a germanium-tin alloy that can convert computer processors’ waste heat back into electricity.

A European research collaboration has created a new alloy of silicon, germanium, and tin that can convert waste heat from computer processors back into electricity. The result is a significant step forward for on-chip energy-harvesting materials, which could lead to more energy-efficient and sustainable electronic devices. Crucially, adding tin to germanium significantly reduces the material’s thermal conductivity while preserving its electrical properties, making the alloy well suited to thermoelectric applications.
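The trade-off described above is captured by the standard thermoelectric figure of merit ZT = S²σT/κ, which grows when thermal conductivity κ falls while the Seebeck coefficient S and electrical conductivity σ are preserved. A minimal sketch, with all numeric values purely illustrative placeholders (not measurements from the study):

```python
def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m * temp_k / kappa_w_per_mk

# Illustrative numbers only: halving kappa at fixed S and sigma doubles ZT.
zt_before = figure_of_merit(2e-4, 1e5, 10.0, 300.0)  # 0.12
zt_after = figure_of_merit(2e-4, 1e5, 5.0, 300.0)    # 0.24
```

This is why suppressing heat flow without degrading electrical transport is the central goal in thermoelectric materials design.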

The researchers are from Forschungszentrum Jülich and IHP – Leibniz Institute for High Performance Microelectronics in Germany, the University of Pisa, the University of Bologna in Italy, and the University of Leeds in the UK. Their findings made it onto the cover of the scientific journal ACS Applied Energy Materials.

Daniel Dennett on the Mind as Computer

King Philosophy is a global organisation dedicated to developing emotional intelligence, both through our YouTube channel and our real-life school located on 10 campuses around the world. We apply psychology, philosophy, and culture to everyday life, addressing the questions we’re never taught enough about at regular school or college: How can relationships go well? What is meaningful work? How can love last? How can one find calm? What has gone wrong (and right) with capitalism? We love the humanities, especially philosophy, psychotherapy, literature and art — always going to them in search of ideas that are thought-provoking, useful and consoling.

We’re about wisdom, emotional intelligence and self-understanding.


Learning to express reward prediction error-like dopaminergic activity requires plastic representations of time

One of the central variables in TD algorithms is the reward prediction error (RPE): the difference between the predicted value of the current state and the reward actually received plus the discounted predicted value of the next state. TD learning theory gained traction in neuroscience once it was demonstrated that the firing patterns of dopaminergic neurons in the ventral tegmental area (VTA) during reinforcement learning resemble RPE [5,9,10].
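The RPE definition above can be made concrete with a minimal tabular TD(0) sketch (generic textbook TD, not the model proposed in this paper): the error is δ = r + γV(s′) − V(s), the quantity that VTA dopamine firing is argued to resemble.

```python
def td_update(V, s, r, s_next, gamma=0.9, alpha=0.1):
    """One TD(0) step on the value table V; returns the RPE delta."""
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta                 # move V(s) toward the TD target
    return delta

# Hypothetical two-state example: with all values initialized to zero,
# the first RPE equals the unexpected reward itself.
V = {"cue": 0.0, "reward": 0.0}
delta = td_update(V, "cue", r=1.0, s_next="reward")  # delta = 1.0, V["cue"] -> 0.1
```

As learning proceeds, the value of the cue state rises and the RPE at reward delivery shrinks, mirroring the shift of dopaminergic responses from reward to cue.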

Implementing TD as a computer algorithm is straightforward, but mapping it onto plausible neural machinery is more complex [11,12,13]. Current neural implementations of TD assume a set of temporal basis functions [13,14] that are activated by external cues. For this assumption to hold, each possible external cue must activate a separate set of basis functions, and these basis functions must tile all possible learnable intervals between stimulus and reward.
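The basis-function assumption can be sketched as follows (hypothetical Gaussian bases; published models use various kernels): after a cue at t = 0, the value estimate is a weighted sum of fixed temporal basis functions, and TD updates train only the weights.

```python
import numpy as np

# Fixed basis centers must tile every learnable stimulus-reward interval.
centers = np.linspace(0.0, 2.0, 20)

def bases(t, width=0.1):
    """Activations of the fixed temporal basis functions at time t."""
    return np.exp(-((t - centers) ** 2) / (2 * width ** 2))

def value(weights, t):
    """Value estimate: learned weighted sum of basis activations."""
    return weights @ bases(t)

def td_basis_update(weights, t, r, dt=0.1, gamma=0.98, alpha=0.05):
    """One TD step on the basis-function weights; returns the RPE."""
    x_now = bases(t)
    delta = r + gamma * value(weights, t + dt) - weights @ x_now
    weights += alpha * delta * x_now  # in-place weight update
    return delta

w = np.zeros_like(centers)
```

The scalability problem the authors raise is visible here: this fixed set of bases serves one cue and one range of intervals, so every additional cue and every longer interval demands yet another dedicated set.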

In this paper, we argue that these assumptions are unscalable and therefore implausible at a fundamental conceptual level, and we demonstrate that some predictions of such algorithms are inconsistent with established experimental results. Instead, we propose that the temporal basis functions used by the brain are themselves learned. We call this theoretical framework Flexibly Learned Errors in Expected Reward, or FLEX for short. We also propose a biophysically plausible implementation of FLEX as a proof-of-concept model, and show that its key predictions are consistent with experimental results but inconsistent with some key predictions of TD theory.
