
Power-Hungry Data Centers Are Warming Homes in the Nordics

When Finnish engineer Ari Kurvi takes a hot shower or turns up the thermostat in his apartment, he’s tapping into waste heat generated by a 75-megawatt data center 5 kilometers away. As its computer servers churn through terabytes of digital information to support video calls, car navigation systems and web searches, an elaborate system of pipes and pumps harvests the cast-off energy and feeds it to homes in the town of Mantsala in southern Finland.

Since it began operation about a decade ago, the data center has provided heat for the town. Last year, it heated the equivalent of 2,500 homes, about two-thirds of Mantsala’s needs, cutting energy costs for residents and helping to blunt the environmental downsides associated with power-hungry computing infrastructure. Some of the world’s biggest tech companies are now embracing heat recovery from data centers in an effort to become more sustainable.

Kurvi is one of the pioneers of this emerging technology: As an engineer and project manager for Hewlett Packard starting in the 1980s, he spent years working with humming stacks of hardware in hot server rooms during the freezing Finnish winters. That made him think that there must be a good way to put that wasted heat to use.


By pairing computer processing facilities with district heating systems, countries like Finland and Sweden are trying to limit their environmental downsides.

Malicious Blender model files deliver StealC infostealing malware

A Russian-linked campaign delivers the StealC V2 information stealer malware through malicious Blender files uploaded to 3D model marketplaces like CGTrader.

Blender is a powerful open-source 3D creation suite that can execute Python scripts for automation, custom user interface panels, add-ons, rendering processes, rigging tools, and pipeline integration.

When Blender's Auto Run feature is enabled, opening a character rig can automatically execute a Python script that loads the facial controls and custom UI panels with the required buttons and sliders. It is this same mechanism that the malicious model files abuse to run attacker-supplied code.
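The danger described above is generic to any application that auto-executes scripts embedded in documents. The following is a conceptual sketch (not actual Blender/bpy code) of that pattern, assuming a simplified "open file, maybe run embedded script" model:

```python
# Conceptual sketch only -- illustrates why auto-executing embedded
# scripts is dangerous. In Blender, .blend files can carry Python text
# datablocks that run on open when "Auto Run Python Scripts" is enabled.

def open_model_file(embedded_script: str, auto_run: bool) -> list:
    """Simulate opening a model file that embeds a script."""
    executed = []
    if auto_run:
        # With auto-run enabled, the embedded code executes with the
        # same privileges as the host application -- it could just as
        # easily download and launch an infostealer.
        exec(embedded_script, {"executed": executed})
    return executed

# A benign-looking rig-setup script; a malicious file substitutes its payload.
rig_script = "executed.append('loaded facial controls UI')"

print(open_model_file(rig_script, auto_run=False))  # [] -- safe default
print(open_model_file(rig_script, auto_run=True))   # ['loaded facial controls UI']
```

This is why Blender ships with auto-run disabled by default: the safe path returns without ever touching the embedded code.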

China’s 1-second film speeds rapid charge for EVs, high-power lasers

Chinese scientists reported what they describe as a major leap in capacitor manufacturing earlier this month: the group has cut the production time for dielectric energy storage parts to one second.

The announcement has drawn widespread attention because it points to fast, stable energy storage for advanced defense systems and electric vehicles.

The team used a flash annealing method that heats and cools material at a rate of about 1,000°C (1,832°F) per second. This speed allows crystal films to form on a silicon wafer in a single step. Other techniques require far more time, taking anywhere from 3 minutes to 1 hour depending on the film quality.
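A back-of-the-envelope calculation shows how large that gap is. Assuming a ramp from room temperature (~25°C) to roughly 1,000°C, which the article does not state explicitly:

```python
# Rough speed-up estimate for flash annealing vs. conventional annealing.
# Assumptions (not from the article): start at ~25 C, target ~1,000 C.

rate_c_per_s = 1000.0          # reported flash-annealing rate, ~1,000 C/s
delta_t = 1000.0 - 25.0        # assumed temperature rise in Celsius

flash_seconds = delta_t / rate_c_per_s          # ~1 second, as reported
conventional_seconds = (3 * 60, 60 * 60)        # reported range: 3 min to 1 hr

print(f"flash anneal ramp: ~{flash_seconds:.2f} s")
print(f"conventional: {conventional_seconds[0]} s to {conventional_seconds[1]} s")
print(f"speed-up: {conventional_seconds[0] / flash_seconds:.0f}x to "
      f"{conventional_seconds[1] / flash_seconds:.0f}x")
```

Under these assumptions the flash process is on the order of a few hundred to a few thousand times faster than conventional annealing, consistent with the one-second claim.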

Perovskite photovoltaics prepare for their time in the sun

To capture more of the Sun’s spectrum, Steve Albrecht of the Technical University of Berlin and the Helmholtz Centre for Materials and Energy added a third layer of perovskite to make a so-called triple-junction cell, which could potentially offer even higher efficiencies. “It is truly a product of the future,” he says.

Other researchers are teaming perovskites with organic solar cells, forming flexible tandems suitable for indoor applications, or to cover vehicles. Yi Hou of the National University of Singapore points out that the perovskite layer filters ultraviolet light that would damage the organic cell. His team made a flexible perovskite–organic tandem with a record efficiency of 26.7%, and he is commercializing the technology through his company Singfilm Solar.

Despite the promising efficiency results, there was broad consensus at the conference that long-term stability is the field’s most pressing issue. Collaboration between researchers from academia, industry and national labs will be vital to fix that, says Marina Leite at the University of California, Davis: “We can work together to finally resolve the problem of stability in perovskites and truly enable this technology in the near future.”

New solar-powered Nissan EV can drive 3,000 km a year without ever plugging in

Nissan just announced a solar-powered EV based on the Nissan Sakura for this year’s Japan Mobility Show.

Built using the super popular kei car as a platform, the solar-powered Sakura promises ‘free’ motoring thanks to its solar panels.

In theory, you can drive it for a year without ever plugging it in.

New AI language-vision models transform traffic video analysis to improve road safety

New York City’s thousands of traffic cameras capture endless hours of footage each day, but analyzing that video to identify safety problems and implement improvements typically requires resources that most transportation agencies don’t have.

Now, researchers at NYU Tandon School of Engineering have developed an artificial intelligence system that can automatically identify collisions and near-misses in existing traffic video by combining language reasoning and visual intelligence, potentially transforming how cities improve road safety without major new investments.
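The article does not detail the NYU system's models or code; as a purely illustrative sketch, the general pattern of pairing per-frame visual descriptions with a language-based reasoning step to flag safety events might look like the following, where the keyword check is a hypothetical stand-in for an actual vision-language model:

```python
# Conceptual sketch only -- the NYU Tandon system's actual models and
# pipeline are not described in this article. This shows the generic
# pattern: visual descriptions per frame + language reasoning to flag
# collisions and near-misses.

from dataclasses import dataclass

@dataclass
class FrameEvent:
    timestamp: float
    caption: str  # in practice, produced by a vision-language model

# Hypothetical cue list standing in for an LLM's reasoning step.
RISK_CUES = ("swerves", "brakes hard", "runs red light", "collision")

def flag_safety_events(events: list) -> list:
    """Return timestamps of frames whose captions suggest a safety event."""
    return [e.timestamp for e in events
            if any(cue in e.caption.lower() for cue in RISK_CUES)]

footage = [
    FrameEvent(12.0, "A cyclist rides in the bike lane"),
    FrameEvent(14.5, "A car swerves and brakes hard near a pedestrian"),
    FrameEvent(20.0, "Traffic flows normally"),
]
print(flag_safety_events(footage))  # [14.5]
```

The appeal of this architecture is that it reuses footage cities already collect: the expensive part (perception and reasoning) is automated, so no new camera infrastructure is required.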

Published in the journal Accident Analysis & Prevention, the research won New York City’s Vision Zero Research Award, an annual recognition of work that aligns with the city’s road safety priorities and offers actionable insights. Professor Kaan Ozbay, the paper’s senior author, presented the study at the eighth annual Research on the Road symposium.

The Intelligence Foundation Model Could Be The Bridge To Human Level AI

Borui Cai and Yao Zhao from Deakin University (Australia) presented a concept that they believe will bridge the gap between modern chatbots and general-purpose AI. Their proposed “Intelligence Foundation Model” (IFM) shifts the focus of AI training from merely learning surface-level data patterns to mastering the universal mechanisms of intelligence itself. By utilizing a biologically inspired “State Neural Network” architecture and a “Neuron Output Prediction” learning objective, the framework is designed to mimic the collective dynamics of biological brains and internalize how information is processed over time. This approach aims to overcome the reasoning limitations of current Large Language Models, offering a scalable path toward true Artificial General Intelligence (AGI) and theoretically laying the groundwork for the future convergence of biological and digital minds.


The Intelligence Foundation Model represents a bold new proposal in the quest to build machines that can truly think. We currently live in an era dominated by Large Language Models like ChatGPT and Gemini. These systems are incredibly impressive feats of engineering that can write poetry, solve coding errors, and summarize history. However, despite their fluency, they often lack the fundamental spark of what we consider true intelligence.

They are brilliant mimics that predict statistical patterns in text but do not actually understand the world or learn from it in real-time. A new research paper suggests that to get to the next level, we need to stop modeling language and start modeling the brain itself.

Borui Cai and Yao Zhao have introduced a concept they believe will bridge the gap between today’s chatbots and Artificial General Intelligence. Published in a preprint on arXiv, their research argues that existing foundation models suffer from severe limitations because they specialize in specific domains like vision or text. While a chatbot can tell you what a bicycle is, it does not understand the physics of riding one in the way a human does.
