Researchers from the University of Cologne and the University of Würzburg have discovered through training studies that individuals can improve their ability to distinguish between familiar and unfamiliar words, enhancing reading efficiency. Recognizing words is necessary to understand the meaning of a text. When we read, we move our eyes very efficiently and quickly from word to word. This reading flow is interrupted when we encounter a word we do not know, a situation common when learning a new language.

Learners may not yet know the words of the new language in their entirety, and language-specific peculiarities in spelling still need to be internalized. The team of psychologists led by junior professor Dr. Benjamin Gagl from the University of Cologne’s Faculty of Human Sciences has now found a method to optimize this process.

The current research results were published in npj Science of Learning under the title ‘Investigating lexical categorization in reading based on joint diagnostic and training approaches for language learners’. Starting in May, follow-up studies extending the training program will be carried out within a project funded by the German Research Foundation (DFG).

The dynamic characteristics of the inverters have been simulated by varying the inverter output (load) capacitance (COUT), connected to the inverter output across a 1000 nm long interconnect (assumed for simulations of the NM circuit, described in the “NM circuit” subsection), from 1 aF to 1 fF. By evaluating the delay \(t_{\rm p}\) of the input-to-output transition and the instantaneous current drawn from the supply during this transition, the average power dissipation and the energy-delay product (EDP) are evaluated for both the 2D-TFET and the FinFET implementations. The higher delay of the 2D-TFET (due to its lower ON-current) translates to a higher EDP, and the EDP metrics worsen as the load capacitance is increased further. In fact, as will be shown later, the main advantages of TFETs lie in implementations of sparse-switching circuits, where their much lower OFF-current and small SS help lower the static power dissipation, thereby improving the overall performance.
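The delay-power-EDP relationship underlying this comparison can be sketched as follows; the power and delay values below are illustrative placeholders, not the simulated FinFET or 2D-TFET figures.

```python
def energy_delay_product(avg_power_w, delay_s):
    """EDP = switching energy x delay, with energy = average power x delay."""
    energy_j = avg_power_w * delay_s      # energy drawn from the supply per transition
    return energy_j * delay_s

# Illustrative numbers only: at equal average power, a device with a
# larger t_p pays quadratically in EDP, matching the trend described above.
fast = energy_delay_product(avg_power_w=1e-6, delay_s=10e-12)   # FinFET-like t_p
slow = energy_delay_product(avg_power_w=1e-6, delay_s=1.6e-9)   # 2D-TFET-like t_p
print(slow / fast)   # (1.6e-9 / 10e-12)^2 = 25600
```

The quadratic dependence on delay is why the slower 2D-TFET falls further behind as the load capacitance (and hence t_p) grows.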

Figure 2c shows an 11-stage ring oscillator, implemented considering both interconnect and device parasitics and designed with minimum-sized 2D-TFET and FinFET inverters. Figure 2d, e compares the transient characteristics of the FinFET and the 2D-TFET ring oscillators, from which the frequencies of oscillation are extracted to be 10 GHz and 57 MHz, respectively, corresponding to single-stage delays of 10 ps and 1.6 ns. The delay of the 2D-TFET ring oscillator is larger due to its lower ON-current. The effect of the enhanced Miller capacitance in creating large overshoots and undershoots of the output voltage in TFETs is also observed in Fig. 2e.
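The conversion from oscillation frequency to single-stage delay can be sketched as below. Note that the quoted 10 ps and 1.6 ns are consistent with the coarse estimate t_d ≈ 1/(N·f) rather than the textbook relation t_d = 1/(2·N·f), so that is the form used here.

```python
def stage_delay(freq_hz, n_stages=11):
    # Per-stage delay estimated as t_d ≈ 1/(N * f); the standard
    # ring-oscillator relation carries an extra factor of 2,
    # t_d = 1/(2 * N * f), which the quoted values do not reflect.
    return 1.0 / (n_stages * freq_hz)

print(stage_delay(10e9))   # FinFET RO: ~9.1e-12 s, i.e. ~10 ps
print(stage_delay(57e6))   # 2D-TFET RO: ~1.6e-9 s
```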

Static random-access memories (SRAMs), which occupy up to 70% of the processor area, are the main memory elements in CPU cache design, offering fast memory access, and can be used for synapse weight retention in a designed NM system comprising several neurons. However, this large prevalence of SRAMs also results in large power consumption. In fact, SRAM data access in Intel’s Loihi5 has been estimated to be more energy-intensive than each neuronal spike, necessitating the development of low-power SRAM implementations. Although SRAM design with 2D-TFETs can improve energy efficiency, the standard SRAM design uses two access transistors that require bidirectional current flow and are therefore ill-suited for implementation with unidirectional TFETs. This necessitates a modified SRAM design that implements the access-transistor function with either a pass-transistor network of TFETs or solitary 2D-FETs (Fig. 2f–l).

Abstract. In recent years, brain research has indisputably entered a new epoch, driven by substantial methodological advances and digitally enabled data integration and modelling at multiple scales—from molecules to the whole brain. Major advances are emerging at the intersection of neuroscience with technology and computing. This new science of the brain combines high-quality research, data integration across multiple scales, a new culture of multidisciplinary large-scale collaboration, and translation into applications. As pioneered in Europe’s Human Brain Project (HBP), a systematic approach will be essential for meeting the coming decade’s pressing medical and technological challenges.

Single-cell multiplexing techniques (cell hashing and genetic multiplexing) combine multiple samples, optimizing sample processing and reducing costs. Cell hashing conjugates antibody- or chemically linked oligonucleotide tags to cell membranes, while genetic multiplexing allows mixing genetically diverse samples and relies on aggregation of RNA reads at known genomic coordinates. We develop hadge (hashing deconvolution combined with genotype information), a Nextflow pipeline that combines 12 methods to perform both hashing- and genotype-based deconvolution. We propose a joint deconvolution strategy combining best-performing methods and demonstrate how this approach leads to the recovery of previously discarded cells in a nuclei-hashing experiment on fresh-frozen brain tissue.
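As an illustration only (this consensus rule is a simplification for exposition, not hadge's actual scoring), a joint strategy might merge the two per-cell assignments like this:

```python
def joint_deconvolution(hash_calls, geno_calls):
    """Combine per-cell sample assignments from a hashing-based and a
    genotype-based method.

    Toy consensus rule: keep a cell when both methods agree, and rescue
    a cell that one method marked 'negative' when the other method
    produced a call.
    """
    joint = {}
    for cell in hash_calls.keys() & geno_calls.keys():
        h, g = hash_calls[cell], geno_calls[cell]
        if h == g:
            joint[cell] = h        # concordant call
        elif h == "negative":
            joint[cell] = g        # rescued by genotype
        elif g == "negative":
            joint[cell] = h        # rescued by hashing
        # discordant non-negative calls stay unassigned
    return joint

calls = joint_deconvolution(
    {"c1": "sampleA", "c2": "negative", "c3": "sampleA"},
    {"c1": "sampleA", "c2": "sampleB", "c3": "sampleB"},
)
# c2 is rescued by the genotype call; c3 is discordant and stays unassigned
```

Rescuing "negative" cells in this fashion is what allows previously discarded nuclei to re-enter the analysis.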

Varying the parameters of the weight distribution did not account for the observed amount of HD information conveyed by PoSub-FS cells (Fig. 2a). Rather, we found that the number of inputs received by each output unit was a key factor influencing the amount of HD information (Extended Data Fig. 5e). Varying both the weight distribution and the number of input units, we obtained a distribution of HD information in output tuning curves that matched the real data (Extended Data Fig. 5f), revealing that the tuning of PoSub-FS cells can be used to estimate both the distribution of weights and the number of input neurons. Notably, under optimal network conditions, Isomap projection of output tuning curve auto-correlograms has a similar geometry to that of real PoSub-FS cells (Extended Data Fig. 5g), confirming a similar distribution of tuning shapes.
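A minimal sketch of such a feedforward simulation, assuming von Mises-shaped HD input tuning and Gaussian weights (the tuning width, weight parameters, and input count here are illustrative, not the fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_output_tuning(n_inputs, weight_sd, n_angles=360, kappa=4.0):
    """Output tuning curve = weighted sum of von Mises-tuned HD inputs.

    n_inputs and weight_sd are the two knobs discussed above; all
    numerical values are illustrative stand-ins.
    """
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    prefs = rng.uniform(0, 2 * np.pi, n_inputs)          # preferred directions
    inputs = np.exp(kappa * np.cos(angles[:, None] - prefs[None, :]))
    weights = rng.normal(1.0, weight_sd, n_inputs)       # Gaussian synaptic weights
    return inputs @ weights                              # one output tuning curve

curve = simulate_output_tuning(n_inputs=50, weight_sd=0.5)
print(curve.shape)   # one simulated output tuning curve over 360 angles
```

Sweeping `n_inputs` and `weight_sd` over grids of such simulations is the kind of procedure that lets the output HD information be compared against the real PoSub-FS distribution.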

To further quantify the relative contributions of ADN and local PoSub inputs to PoSub-FS cell tuning, we expanded the simulation to include the following two inputs: one with tuning curve widths corresponding to ADN-HD cells and one with tuning curve widths corresponding to PoSub-HD cells (Fig. 4h, left). We then trained the model using gradient descent to find the variances and means of input weights that result in the best fit between the simulated output and real data. The combination of parameters that best described the real data resulted in ADN inputs distributed in a near Gaussian-like manner but a heavy-tailed distribution of PoSub-HD inputs (Fig. 4h, middle). Using these distribution parameters, we performed simulations to determine the contribution of ADN-HD and PoSub-HD inputs to the output tuning curves and established that PoSub-FS cell-like outputs are best explained by flat, high firing rate inputs from ADN-HD cells and low firing rate, HD-modulated inputs from PoSub-HD cells (Fig. 4h, right).
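A toy version of this gradient-descent fit, with two fixed basis tuning curves standing in for the ADN-like (broad) and PoSub-HD-like (sharp) input populations; the widths, gains, learning rate, and iteration count are all illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

# Recover the gains of two input populations by gradient descent on the
# mean-squared error between the simulated output and a target curve.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
adn = np.exp(1.0 * np.cos(angles))          # broad HD tuning (ADN-like)
pos = np.exp(4.0 * np.cos(angles))          # sharper HD tuning (PoSub-HD-like)
adn /= np.linalg.norm(adn)                  # normalize for stable step sizes
pos /= np.linalg.norm(pos)
X = np.stack([adn, pos], axis=1)

true_gains = np.array([2.0, 0.5])
target = X @ true_gains                     # stand-in for the measured tuning curve

gains = np.zeros(2)
for _ in range(3000):
    resid = X @ gains - target
    gains -= 0.1 * 2.0 * (X.T @ resid)      # gradient of the squared error

print(np.round(gains, 2))                   # recovers the two input gains
```

The full model fits means and variances of whole weight distributions rather than two scalar gains, but the optimization loop has the same shape: simulate, compare to data, step the parameters down the error gradient.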

Our simulations, complemented by direct analytical derivation (detailed in the Supplementary Methods), not only support the hypothesis that the symmetries observed in PoSub-FS cell tuning curves originate from local cortical circuits but also demonstrate that these symmetries emerge from strongly skewed distributions of synaptic weights.