
The Grid Event Signature Library gives utilities and researchers a deeper understanding of grid behavior by providing access to datasets of waveforms from grid operations. Credit: Adam Malin/ORNL, U.S. Dept. of Energy.

The Grid Event Signature Library at Oak Ridge National Laboratory offers waveform datasets that help analyze and predict electric grid behaviors. With contributions from various utilities, the library supports machine learning models that forecast and help mitigate grid malfunctions, enhancing grid reliability and safety.
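
To make that workflow concrete, here is a minimal sketch of the kind of analysis such a library enables: training a classifier to label grid-event waveforms. The synthetic 60 Hz signals, the "sag" and "swell" event types, and the hand-crafted features below are illustrative assumptions, not the library's actual data format or API.

```python
# Hypothetical sketch: classifying grid-event waveforms with scikit-learn.
# The synthetic waveforms stand in for real library downloads; the event
# types and features are illustrative assumptions, not GESL's actual schema.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 1_000                # assumed sample rate (Hz)
T = np.arange(FS) / FS    # one second of samples

def synth_waveform(event: str) -> np.ndarray:
    """Generate a 60 Hz waveform with a toy disturbance signature."""
    base = np.sin(2 * np.pi * 60 * T)
    if event == "sag":        # brief voltage dip
        base[300:500] *= 0.6
    elif event == "swell":    # brief overvoltage
        base[300:500] *= 1.3
    return base + 0.05 * rng.standard_normal(FS)

labels = ["normal", "sag", "swell"]
X = np.array([synth_waveform(ev) for ev in labels for _ in range(100)])
y = np.array([ev for ev in labels for _ in range(100)])

# Simple hand-crafted features: RMS and peak magnitude per waveform.
feats = np.column_stack([np.sqrt((X**2).mean(axis=1)), np.abs(X).max(axis=1)])

Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```

With real library waveforms, the same pipeline would swap the synthetic generator for downloaded event records and richer signal features.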

Researchers at Oak Ridge National Laboratory have opened a new virtual library where visitors can check out waveforms instead of books.

A robotic truck equipped with a 105-ft (32-m) telescopic boom arm has just journeyed from Australia to Florida. Now the construction robot will get busy churning out up to 10 houses in a bid to become the employee of choice for building entire communities.

The truck and its accompanying brick-laying arm are known as the Hadrian X, developed by robotics company FBR, which first announced its prototype in 2015. That machine could complete a full-sized house in two days. Last year, FBR (formerly Fastbrick Robotics) showed off the new Hadrian X, which, at top speed, can stack 500 US-format masonry blocks per hour.

The robotic vehicle and its construction arm get to work once loaded with pallets of blocks. Each block is sent down a chute on the arm, coated with a quick-dry construction adhesive that takes the place of mortar, and placed by a variable gripper at the end of the arm. Thanks to its impressive length, the arm can build structures up to three stories tall. And because it's a robot, it never needs to sleep or take a break if the weather turns nasty, so it can chug along pretty much 24/7.
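
As a rough sanity check on those numbers, the sketch below turns the quoted 500-blocks-per-hour top speed and round-the-clock operation into laying time. The per-house block count is a hypothetical placeholder, not an FBR figure.

```python
# Back-of-the-envelope laying time from the article's quoted top speed.
# BLOCKS_PER_HOUSE is a hypothetical placeholder, not an FBR figure.
BLOCKS_PER_HOUR = 500       # Hadrian X top speed (from the article)
BLOCKS_PER_HOUSE = 4_000    # assumed block count for a single house

hours = BLOCKS_PER_HOUSE / BLOCKS_PER_HOUR
print(f"~{hours:.0f} hours of continuous laying per house")
print(f"~{10 * hours / 24:.1f} days for a 10-house run, operating 24/7")
```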

The CEO of Google DeepMind has compared the intelligence of contemporary artificial intelligence (AI) agents unfavorably to that of domestic cats. “We’re still not even at cat intelligence yet, as a general system,” remarked Demis Hassabis, answering a question about DeepMind’s progress toward artificial general intelligence (AGI). However, research is progressing fast, with huge cash and compute investments propelling it forward, and some expect AI to eclipse human intelligence within the next half-decade.

Hassabis, the co-founder and CEO of Google DeepMind, made the AI-versus-cat comparison in a public discussion with Tony Blair, the former British Prime Minister. The talk was part of the Future of Britain Conference 2024, organized by the Institute for Global Change.

Hassabis highlighted that his work is focused not on AI in the narrow sense but on AGI, which puts the computer-versus-cat comparison in perspective. Yes, a contemporary AI can sometimes write, paint, or make music in a convincingly human-like fashion, but an ordinary house cat has far more general intelligence. “At the moment, we’re far from human-level intelligence across the board,” admitted Hassabis. “But in certain areas like games playing [AI is] better than the best people in the world.”

After creating the world’s first self-organizing drone flock, researchers at Eötvös Loránd University (ELTE) in Budapest, Hungary, have now also demonstrated the first large-scale autonomous drone traffic solution. This fascinating new system is capable of far more than human pilots could execute.

The staff of the Department of Biological Physics at Eötvös Loránd University has been working on group robotics and swarms since 2009. In 2014, the group created the world’s first autonomous quadcopter flock of at least ten units. It has now reached a new milestone, publishing a demonstration of dense autonomous traffic with one hundred drones in the journal Swarm Intelligence.
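
For context on what “flocking” means algorithmically, here is a minimal sketch of the classic Reynolds “boids” rules (separation, alignment, cohesion) that underlie most flocking research. It is a generic illustration of those textbook rules, not ELTE’s actual drone controller or traffic algorithm.

```python
# Minimal Reynolds-style "boids" flocking sketch (separation, alignment,
# cohesion). A generic illustration, not ELTE's actual drone controller.
import numpy as np

rng = np.random.default_rng(1)
N = 100                                   # number of agents
pos = rng.uniform(0, 100, size=(N, 2))    # positions in a 100x100 arena
vel = rng.uniform(-1, 1, size=(N, 2))     # initial velocities

def step(pos, vel, r=10.0, dt=0.5):
    """Advance the flock one time step using the three boids rules."""
    new_vel = vel.copy()
    for i in range(N):
        d = pos - pos[i]                          # offsets to all agents
        dist = np.linalg.norm(d, axis=1)
        nbrs = (dist < r) & (dist > 0)            # neighbors within radius r
        if not nbrs.any():
            continue
        cohesion = d[nbrs].mean(axis=0) * 0.01            # steer toward group
        alignment = (vel[nbrs].mean(axis=0) - vel[i]) * 0.05  # match heading
        close = nbrs & (dist < r / 3)
        separation = -d[close].sum(axis=0) * 0.05 if close.any() else 0
        new_vel[i] += cohesion + alignment + separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 2.0,               # cap speed at 2.0
                       new_vel * 2.0 / np.maximum(speed, 1e-9), new_vel)
    return (pos + new_vel * dt) % 100, new_vel    # wrap-around arena

for _ in range(200):
    pos, vel = step(pos, vel)
# Order parameter near 1.0 means the flock has aligned into a common heading.
print("mean heading alignment:",
      np.linalg.norm(vel.mean(axis=0)) / np.linalg.norm(vel, axis=1).mean())
```

Autonomous traffic is a different problem: each drone has its own origin and destination, so the system must deconflict many independent trajectories rather than keep one group moving together.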

But what is the difference between flocking and autonomous drone traffic?

Read & tell me what you think 🙂


There is a rift between near and long-term perspectives on AI safety – one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. But their critics have accused the Longtermists of obsessing over Terminator-style scenarios in concert with Big Tech to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that we shouldn’t be fighting about the Terminator; we should be focusing on the harm to the mind itself – to our very freedom to think.

There has been a growing debate between near and long-term perspectives on AI safety – one that has stirred controversy. “Longtermists” have been accused of being co-opted by Big Tech and fixating on science-fiction, Terminator-style scenarios to distract regulators from the real, more near-term issues, such as algorithmic bias and data privacy.

Longtermism is an ethical theory that requires us to consider the effects of today’s decisions on all of humanity’s potential futures. It can lead to extremes, since it can conclude that one should sacrifice humanity’s present wellbeing for the good of its potential futures. Many Longtermists believe humans will ultimately lose control of AI as it becomes “superintelligent”, outthinking humans in every domain – social acumen, mathematical ability, strategic thinking, and more.