
Using sequences of life-events to predict human lives

Abstract: Here we represent human lives in a way that shares structural similarity with language, and we exploit this similarity to adapt natural language processing techniques to examine the evolution and predictability of human lives based on detailed event sequences.


Using registry data from Denmark, Lehmann et al. create individual-level trajectories of events related to health, education, occupation, income and address, and also apply transformer models to build rich embeddings of life-events and to predict outcomes ranging from time of death to personality.
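The core idea is that a life, like a sentence, can be encoded as a sequence of discrete tokens before being fed to a transformer. A minimal sketch of that encoding step is below; the event names and special tokens are invented for illustration, not the registry codes the actual model uses.

```python
# Hypothetical sketch: encoding life-event trajectories the way NLP models
# encode sentences. Event labels and vocabulary are illustrative only.

def build_vocab(sequences):
    """Map each distinct life-event to an integer ID, as a tokenizer would."""
    vocab = {"[PAD]": 0, "[CLS]": 1}
    for seq in sequences:
        for event in seq:
            if event not in vocab:
                vocab[event] = len(vocab)
    return vocab

def encode(seq, vocab, max_len=8):
    """Turn one person's event trajectory into a fixed-length ID sequence."""
    ids = [vocab["[CLS]"]] + [vocab[e] for e in seq]
    return ids + [vocab["[PAD]"]] * (max_len - len(ids))

lives = [
    ["born", "school", "diagnosis:asthma", "job:nurse", "moved"],
    ["born", "school", "job:engineer", "income_up"],
]
vocab = build_vocab(lives)
print(encode(lives[0], vocab))  # [1, 2, 3, 4, 5, 6, 0, 0]
```

Once trajectories are integer sequences of uniform length, they can be passed to a standard transformer encoder exactly as tokenized text would be, which is what makes the transfer of NLP techniques possible.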

New ultra-high speed processor to advance AI, driverless vehicles and more

A team of international scientists has developed an ultra-high-speed signal processor that can analyze 400,000 real-time video images concurrently, according to a paper published in Communications Engineering.

The team, led by Swinburne University of Technology’s Professor David Moss, has developed a processor that operates at a record 17 terabits per second (trillion bits per second), more than 10,000 times faster than typical electronic processors, which operate at gigabit-per-second rates.
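The quoted figures are mutually consistent, as a back-of-the-envelope check shows (taking 1 Gb/s as a representative electronic rate, an assumption not stated in the article):

```python
# Sanity check on the claimed speedup. The electronic baseline of 1 Gb/s
# is an assumed representative figure, not a number from the paper.
optical_rate = 17e12    # 17 terabits per second (reported record)
electronic_rate = 1e9   # 1 gigabit per second (assumed baseline)
speedup = optical_rate / electronic_rate
print(speedup)  # 17000.0 -- consistent with "more than 10,000 times faster"
```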

The technology has implications for the safety and efficiency of driverless cars, and could help in the search for planets beyond our solar system.

A ghostly quasiparticle rooted in a century-old Italian mystery could unlock quantum computing’s potential

On the pursuit of anyons (Majorana quasiparticles), in light of recent progress across multiple platforms.


Already, the graphene efforts have offered “a breath of fresh air” to the community, Alicea says. “It’s one of the most promising avenues that I’ve seen in a while.” Since leaving Microsoft, Zaletel has shifted his focus to graphene. “It’s clear that this is just where you should do it now,” he says.

But not everyone believes they will have enough control over the free-moving quasiparticles in the graphene system to scale up to an array of qubits—or that they can create big enough gaps to keep out intruders. Manipulating the quarter-charge quasiparticles in graphene is much more complicated than moving the Majoranas at the ends of nanowires, Kouwenhoven says. “It’s super interesting for physics, but for a quantum computer I don’t see it.”

Just across the parking lot from Station Q’s new office, a third kind of Majorana hunt is underway. In an unassuming black building branded Google AI Quantum, past the company rock-climbing wall and surfboard rack, a dozen or so proto–quantum computers dangle from workstations, hidden inside their chandelier-like cooling systems. Their chips contain arrays of dozens of qubits based on a more conventional technology: tiny loops of superconducting wires through which current oscillates between two electrical states. These qubits, like other standard approaches, are beset with errors, but Google researchers are hoping they can marry the Majorana’s innate error protection to their quantum chip.

New brain-like transistor performs energy-efficient associative learning at room temperature

Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information, just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperatures. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.
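Associative learning of the kind described, linking two stimuli that occur together, is classically captured by Hebb's rule ("cells that fire together wire together"). The sketch below illustrates that behavior in software; the numbers and update rule are illustrative, not a model of the device physics.

```python
# Minimal Hebbian sketch of associative (Pavlov-style) learning, to
# illustrate the behavior the transistor demonstrates in hardware.
# Learning rate, threshold and step count are arbitrary choices.

def train(pairings, lr=0.5, steps=4):
    """Strengthen a connection each time two inputs fire together."""
    w = 0.0  # association strength between stimulus A ("bell") and B ("food")
    for _ in range(steps):
        for bell, food in pairings:
            w += lr * bell * food  # co-activation strengthens the link
    return w

w = train([(1, 1)])              # the two stimuli are always paired
response = 1 if w > 1.0 else 0   # stimulus A alone now triggers a response
print(w, response)               # 2.0 1
```

The point of doing this in a transistor rather than in software is that the association is stored physically in the device state, so it persists without power and without shuttling data between separate memory and processing units.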

In a Striking Discovery, AI Shows Human-Like Memory Formation

Researchers have discovered that AI memory consolidation processes resemble those in the human brain, specifically in the hippocampus, offering potential for advancements in AI and a deeper understanding of human memory mechanisms.

An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.

Advancing AI through understanding human intelligence.
