
Unpacking the modeling process for energy policy making

On top of this, the use of quantification has increased significantly over the last decades with the inflation of metrics, indicators, and scores to rank and benchmark options (Muller, 2018). The case of energy policy making in the European Union is again an effective example. The European Union’s recent energy strategy has been underpinned by the Clean Energy for All Europeans package, which is in turn supported by a number of individual directives, each characterized by a series of quantitative goals (European Commission, 2023). The quantification of impacts (impact assessment) is customarily required to successfully promote new political measures (European Commission, 2015a) and in turn relies on quantification, often produced by mathematical models (Saltelli et al., 2023). The emphasis on producing exact figures to assess the contribution of a new technology or a political or economic measure has put many models and their users into contexts of decision-making that at times extend beyond their original intent (Saltelli, Bammer et al., 2020). At the same time, efforts to retrospectively assess the performance of energy models have been extremely limited, one example being the Energy Modeling Forum in the United States (Huntington et al., 1982). Yet retrospective assessments can be very helpful in understanding the sources of mismatch between a forecast and the actual figures reported a posteriori (Koomey et al., 2003). For example, long-range forecast models are typically based on the assumption of gradual structural change, which is at odds with the disruptive events and discontinuities that occur in the real world (Craig et al., 2002). This dimension is especially important with regard to the nature and pace of technological change (Bistline et al., 2023; Weyant & Olavson, 1999). A further critical element is the cognitive bias in scenario analysis that naturally leads to overconfidence in the option being explored and results in an underestimate of the range of possible outcomes (Morgan & Keith, 2008).

Additionally, in their quest to capture the features of the energy systems represented, models have increased their complicatedness and/or complexity. In this context, the need to appraise model uncertainty has become of paramount importance, especially considering the uncertainty due to error propagation caused by model complexification (Puy et al., 2022). In ecology, this is known as the O’Neill conjecture, which posits a principle of decreasing returns for model complexity once uncertainties come to dominate the output (O’Neill, 1989; Turner & Gardner, 2015). Capturing and apportioning uncertainty is crucial for a healthy interaction at the science–policy interface, including in energy policy making, because it promotes better informed decision-making. Yet Yue et al. (2018) found that only about 5% of the studies covering energy system optimization models included some form of assessment of stochastic uncertainty, the part of uncertainty that can be fully quantified (Walker et al., 2003). When it comes to adequately apportioning this uncertainty onto the input parameters and hypotheses through sensitivity analysis, the situation is even more critical: only a handful of papers in the energy field have made use of state-of-the-art approaches (Lo Piano & Benini, 2022; Saltelli et al., 2019). Further, the epistemic part of uncertainty, which arises from imperfect knowledge and problem framing, has been largely ignored in the energy modeling literature (Pye et al., 2018). For instance, important sources of uncertainty associated with regulatory lag and public acceptance have typically been overlooked.
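
To make the distinction concrete, the sketch below propagates assumed input uncertainties through a toy cost calculation by Monte Carlo sampling and then apportions the output variance among the inputs with a crude first-order sensitivity estimate. It is a minimal illustration only: the toy_lcoe model, the parameter names, and the ranges are invented for the example and are not taken from any of the cited studies.

```python
# Minimal, illustrative sketch: Monte Carlo propagation of input uncertainty
# through a toy levelised-cost model, followed by crude first-order sensitivity
# indices estimated by binning. Model and ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain inputs (hypothetical ranges)
capex = rng.uniform(800, 1600, n)        # capital cost, EUR/kW
fuel = rng.uniform(20, 60, n)            # fuel price, EUR/MWh
cap_factor = rng.uniform(0.25, 0.45, n)  # capacity factor

def toy_lcoe(capex, fuel, cap_factor, lifetime_h=8760 * 20):
    """Toy levelised cost of electricity in EUR/MWh (not a real LCOE formula)."""
    return capex * 1000 / (cap_factor * lifetime_h) + fuel

y = toy_lcoe(capex, fuel, cap_factor)
print(f"output spread: mean={y.mean():.1f}, "
      f"5-95% range=[{np.percentile(y, 5):.1f}, {np.percentile(y, 95):.1f}] EUR/MWh")

def first_order_index(x, y, bins=50):
    """Crude first-order sensitivity index: Var_x(E[y|x]) / Var(y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.digitize(x, edges[1:-1])                  # bin index 0..bins-1
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for name, x in [("capex", capex), ("fuel", fuel), ("cap_factor", cap_factor)]:
    print(f"S_{name}: {first_order_index(x, y):.2f}")
```

Note that such an exercise only covers the stochastic part of uncertainty; the epistemic part, such as the framing choices behind the toy model itself, is not captured by sampling.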

In this contribution, we discuss three approaches to deal with the challenges of non-neutrality and uncertainty in models: the numeral unit spread assessment pedigree (NUSAP) method, diagnostic diagrams, and sensitivity auditing (SAUD). These challenges are especially critical when only one (set of) model(s) has been selected to contribute to decision-making. One practical case is used to showcase, in retrospect, the relevance of the issue and the associated problems: the International Institute for Applied Systems Analysis (IIASA) global modeling of the 1980s.
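
As a rough illustration of the pedigree component of NUSAP, the strength of a single model parameter can be summarized as a normalized score over qualitative criteria elicited from experts. The criteria, scores, and scale below are hypothetical and are not drawn from this contribution.

```python
# Hypothetical NUSAP-style pedigree scoring for one model parameter.
# Each criterion is scored from 0 (weak) to 4 (strong) by experts; the
# normalised score would be reported alongside the numeral, unit, and spread.
pedigree = {
    "proxy": 3,                  # how closely the measured quantity matches the one needed
    "empirical_basis": 2,        # quality and quantity of the underlying data
    "methodological_rigour": 3,  # soundness of the method used to derive the number
    "validation": 1,             # degree of independent corroboration
}
score = sum(pedigree.values()) / (4 * len(pedigree))
print(f"normalised pedigree strength: {score:.2f}")   # 0 = weak, 1 = strong
```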

Unlocking Hypnosis: Stanford Enhances Brain Power With Neurostimulation

Stanford Medicine scientists used transcranial magnetic stimulation to temporarily enhance hypnotizability in patients with chronic pain, making them better candidates for hypnotherapy.

How deeply someone can be hypnotized — known as hypnotizability — appears to be a stable trait that changes little throughout adulthood, much like personality and IQ. But now, for the first time, Stanford Medicine researchers have demonstrated a way to temporarily heighten hypnotizability — potentially allowing more people to access the benefits of hypnosis-based therapy.

In the new study, published on January 4 in Nature Mental Health, the researchers found that less than two minutes of electrical stimulation targeting a precise area of the brain could boost participants’ hypnotizability for about one hour.

The Attention Schema Theory: A Foundation for Engineering Artificial Consciousness

The purpose of the attention schema theory is to explain how an information-processing device, the brain, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would “believe” it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.

This article is part of a special issue on consciousness in humanoid robots. The purpose of this article is to summarize the attention schema theory (AST) of consciousness for those in the engineering or artificial intelligence community who may not have encountered previous papers on the topic, which tended to be in psychology and neuroscience journals. The central claim of this article is that AST is mechanistic, demystifies consciousness and can potentially provide a foundation on which artificial consciousness could be engineered. The theory has been summarized in detail in other articles (e.g., Graziano and Kastner, 2011; Webb and Graziano, 2015) and has been described in depth in a book (Graziano, 2013). The goal here is to briefly introduce the theory to a potentially new audience and to emphasize its possible use for engineering artificial consciousness.

The AST was developed beginning in 2010, drawing on basic research in neuroscience, psychology, and especially on how the brain constructs models of the self (Graziano, 2010, 2013; Graziano and Kastner, 2011; Webb and Graziano, 2015). The main goal of this theory is to explain how the brain, a biological information processor, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is in the realm of science and engineering.
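
For readers coming from engineering, a deliberately toy sketch may help fix ideas. The code below is not an implementation of AST; the class and attribute names are invented, and it only caricatures the kind of architecture the theory points toward: an agent keeps a simplified internal model (a schema) of its own attention, reads its self-reports off that schema rather than off the underlying process, and attributes the same kind of model to others in order to predict them.

```python
# Toy sketch only, loosely inspired by the idea summarized above.
from dataclasses import dataclass, field

@dataclass
class AttentionSchema:
    """A simplified, descriptive model of an attention process."""
    focus: str = "nothing"
    description: str = "a subjective experience of"

@dataclass
class Agent:
    name: str
    self_schema: AttentionSchema = field(default_factory=AttentionSchema)

    def attend(self, stimulus: str) -> None:
        # The attention process itself: select a stimulus for deep processing.
        # The schema tracks it, but only in simplified, caricatured terms.
        self.self_schema.focus = stimulus

    def report(self) -> str:
        # Introspection reads the schema, not the underlying machinery,
        # which is why the report sounds richer than "signals were routed".
        s = self.self_schema
        return f"{self.name}: I have {s.description} {s.focus}."

    def model_other(self, other: "Agent", stimulus: str) -> str:
        # Attribute the same kind of schema to another agent and use it
        # to predict that agent's behaviour.
        assumed = AttentionSchema(focus=stimulus)
        return (f"{self.name} predicts that {other.name}, attending to "
                f"{assumed.focus}, will respond to it.")

robot, person = Agent("robot"), Agent("person")
robot.attend("the red ball")
print(robot.report())
print(robot.model_other(person, "a loud noise"))
```

Nothing in this sketch settles any question about consciousness; it only shows how self-attribution and other-attribution can share one data structure, which is the engineering-relevant claim of the theory.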

Why does depression cause difficulties with learning?

When learning, patients with schizophrenia or depression have difficulty making optimal use of information that is new to them. In the learning process, both groups of patients give greater weight to less important information and, as a result, make less than ideal decisions.

This was the finding of a several-months-long study conducted by a team led by neuroscientist Professor Dr. med. Markus Ullsperger from the Institute of Psychology at Otto von Guericke University Magdeburg in collaboration with colleagues from the University Clinic for Psychiatry & Psychotherapy and the German Center for Mental Health.

Using electroencephalography (EEG) and complex mathematical computer modeling, the team of researchers discovered that the learning deficits in patients with depression and schizophrenia are caused by reduced flexibility in the use of new information.
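
To illustrate the general idea behind such computational modeling (this is not the study's actual model, and the parameter values are invented): in a simple delta-rule learner, the learning rate sets how much weight new information receives when a belief is updated, so a low rate for relevant outcomes is one crude way to mimic reduced flexibility in using new information.

```python
# Minimal illustrative sketch: a delta-rule learner whose learning rate
# controls how strongly each new outcome updates the current belief.
import numpy as np

def delta_rule(outcomes, learning_rate):
    """Track the estimated reward probability of an option across trials."""
    belief = 0.5
    history = []
    for o in outcomes:
        belief += learning_rate * (o - belief)  # move belief toward the new outcome
        history.append(belief)
    return np.array(history)

rng = np.random.default_rng(0)
outcomes = rng.binomial(1, 0.8, size=40)   # option rewarded on 80% of trials

flexible = delta_rule(outcomes, learning_rate=0.30)
rigid = delta_rule(outcomes, learning_rate=0.05)
print(f"belief after 40 trials: flexible={flexible[-1]:.2f}, rigid={rigid[-1]:.2f}")
```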

New Neural Implant Unlocks Deep Brain Activity

Summary: Researchers create a transparent graphene-based neural implant offering high-resolution brain activity data from the surface. The implant’s dense array of tiny graphene electrodes enables simultaneous recording of electrical and calcium activity in deep brain layers.

This innovation overcomes a key limitation of earlier implants, which could not capture deep-layer activity from the brain's surface, and opens new avenues for neuroscientific studies. The transparent design allows optical imaging to be performed alongside electrical recording, a combination with the potential to transform neuroscience research.

Researchers sequence the first genome of myxini, the only vertebrate lineage that had no reference genome

An international scientific team of more than 40 authors from seven countries, led by Juan Pascual Anaya, a researcher at the University of Malaga, has sequenced the first genome of the myxini, also known as hagfish, the only large group of vertebrates for which no reference genome of any species had been available until now.

This achievement, published in the journal Nature Ecology & Evolution, has made it possible to decipher the evolutionary history of the genome duplications that occurred in the ancestors of vertebrates, a group that includes humans.

“This study has important implications in the evolutionary and molecular fields, as it helps us understand the changes in the genome that accompanied the origin of vertebrates and their most unique structures, such as the complex brain, the jaw and the limbs,” explains Pascual Anaya, a scientist in the Department of Animal Biology at the UMA who coordinated the research.