
A marine robot that can swim, crawl and glide untethered in the deepest parts of the ocean

A team of mechanical engineers at Beihang University, working with a deep-sea diving specialist from the Chinese Academy of Sciences and a mechanic from Zhejiang University, all in China, have designed, built, and tested a marine robot that can swim, crawl, and glide untethered in the deepest parts of the ocean.

In their paper published in the journal Science Robotics, the group describes the factors that went into their design and how well their robot performed when tested.

Over the past several decades, underwater robots have become very important tools for studying the various parts of the world’s oceans and the creatures that live in them. More recently, it has been noted that most such craft, especially those that are sent to very deep parts of the sea, are cumbersome and not very agile.

Hydrogen-powered boats offer climate-friendly alternative to road transport

Cargo transport is responsible for an enormous carbon footprint. Between 2010 and 2018, the transport sector generated about 14% of global greenhouse gas emissions. To address this problem, experts are looking for alternative, climate-friendly solutions—not only for road transport, but also for shipping, a sector in which powering cargo ships with batteries has proved especially difficult.

One promising but under-researched solution involves small, autonomous, hydrogen-powered boats that can partially replace long-haul trucking. A research team led by business chemist Prof Stephan von Delft from the University of Münster has now examined this missing link in a new study published in Communications Engineering.

The team has mathematically modeled such a boat for the first time and carried out an environmental and cost analysis. “Our calculations show in which scenarios hydrogen-powered boats are not only more sustainable but also more economical compared to established transport solutions,” explains von Delft. “They are therefore relevant for policymakers and industry.”

From robot swarms to human societies, good decisions rely on the right mix of perspectives

When groups make decisions—whether it’s humans aligning on a shared idea, robots coordinating tasks, or fish deciding where to swim—not everyone contributes equally. Some individuals have more reliable information, whereas others are more connected and have higher social influence.

A new study by researchers at the Cluster of Excellence Science of Intelligence shows that a combination of uncertainty and heterogeneity plays a crucial role in how groups reach consensus.

The findings, published in Scientific Reports by Vito Mengers, Mohsen Raoufi, Oliver Brock, Heiko Hamann, and Pawel Romanczuk, show that groups make faster and more accurate decisions when individuals factor in not only the opinions of their neighbors but also their confidence about these opinions and how connected those others are within the group.
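To make the idea concrete, here is a minimal sketch of a consensus process in which each agent weights its neighbors’ opinions by their confidence and their connectivity, in the spirit of the finding described above. The ring topology, the confidence heuristic, and all parameters are illustrative assumptions, not the model used in the Scientific Reports paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # number of agents
truth = 1.0                               # quantity the group is estimating
noise = rng.normal(0.0, 1.0, n)           # heterogeneous observation noise
opinions = truth + noise
confidence = 1.0 / (0.1 + np.abs(noise))  # hypothetical self-assessed confidence

# Simple ring network; degree measures how connected each agent is.
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i - 1) % n] = adj[i, (i + 1) % n] = 1
degree = adj.sum(axis=1)

for _ in range(50):
    new = opinions.copy()
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]
        # Weight each neighbour by its confidence and its connectivity.
        w = np.append(confidence[nbrs] * degree[nbrs], confidence[i])
        vals = np.append(opinions[nbrs], opinions[i])
        new[i] = np.average(vals, weights=w)
    opinions = new

print(f"group estimate: {opinions.mean():.3f} (truth = {truth})")
```

In this toy version, down-weighting low-confidence neighbors keeps noisy individuals from dragging the whole group away from the true value, which is the intuition behind combining uncertainty with network heterogeneity.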

The way we train AI is fundamentally flawed

To understand exactly what’s going on, we need to back up a bit. Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples that it has not yet seen. When the model passes the test, you’re done.

What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test but—and this is the crucial part—these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don’t affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world.

In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won’t.
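The effect is easy to reproduce in miniature. The sketch below trains the same architecture on the same data several times, changing only the random seed: test accuracy is nearly identical across runs, while accuracy on noisier “real-world-like” data varies noticeably. The synthetic dataset, the noise-based shift, and the small MLP are illustrative assumptions, not the setup used by the Google researchers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# A crude stand-in for "real world" data: the same test set with extra noise.
X_shifted = X_test + np.random.default_rng(0).normal(0.0, 1.5, X_test.shape)

for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed {seed}: test acc = {model.score(X_test, y_test):.3f}, "
          f"shifted acc = {model.score(X_shifted, y_test):.3f}")
```

All five models “pass the test,” yet the standard evaluation gives no way to tell which of them will hold up once the data drifts.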

Exposing Mark Rober’s Tesla Crash Story

Mark Rober’s Tesla crash story and video on self-driving cars face significant scrutiny for authenticity, bias, and misleading claims, raising doubts about his testing methods and the reliability of the technology he promotes.

Questions to inspire discussion.

Tesla Autopilot and Testing

🚗 Q: What was the main criticism of Mark Rober’s Tesla crash video?
A: The video was criticized for failing to use full self-driving mode despite it being shown in the thumbnail and capable of being activated the same way as autopilot.

🔍 Q: How did Mark Rober respond to the criticism about not using full self-driving mode?
A: Mark claimed it was a distinction without a difference and was confident the results would be the same if he reran the experiment in full self-driving mode.

🛑 Q: What might have caused the autopilot to disengage during the test?

Roborock’s highly anticipated robot vacuum with an arm is now available for pre-order

Unveiled at CES 2025, Roborock’s innovative robot vacuum with an arm, the Saros Z70, is now available as a pre-order bundle in the US store. According to the company, consumers can get the Saros Z70 for $1,899 bundled with another Roborock product. The device is expected to be available in early May.

Roborock previewed the Saros Z70 to BGR a little before its official announcement at CES, and the company’s vision for the future of the robot vacuum segment is impressive. Roborock says the Saros Z70 features a foldable robotic arm with five axes that can deploy itself to clean previously obstructed areas and put away small items under 300 g, such as socks, small towels, tissues, and sandals.

While I can understand the appeal of a robot vacuum going a step further (though I think the climbing ability of the latest Roborock Qrevo Curv and Saros 10R is more interesting), having a robot pick your dirty socks up off the floor instead of doing it yourself feels like a bit much, you know?

The Emergence of AI-Based Wearable Sensors for Digital Health Technology: A Review

The healthcare industry is undergoing a significant shift towards digital health technology, with a growing demand for real-time and continuous health monitoring and disease diagnostics [1, 2, 3]. The rising prevalence of chronic diseases, such as diabetes, heart disease, and cancer, coupled with an aging population, has increased the need for remote and continuous health monitoring [4, 5, 6, 7]. This has led to the emergence of artificial intelligence (AI)-based wearable sensors that can collect, analyze, and transmit real-time health data to healthcare providers, enabling more efficient, data-driven decisions. Wearable sensors have therefore become increasingly popular because they offer a non-invasive and convenient means of monitoring patient health. These sensors can track physiological parameters, such as heart rate, blood pressure, oxygen saturation, skin temperature, physical activity levels, and sleep patterns; biochemical markers, such as glucose, cortisol, lactate, electrolytes, and pH; and environmental parameters [1, 8, 9, 10]. Wearable health technology, spanning first-generation devices such as fitness trackers and smartwatches as well as current wearable sensors, is a powerful tool in addressing healthcare challenges [2].

The data collected by wearable sensors can be analyzed using machine learning (ML) and AI algorithms to provide insights into an individual’s health status, enabling early detection of health issues and the provision of personalized healthcare [6,11]. One of the most significant advantages of AI-based wearable health technology is its ability to promote preventive healthcare, enabling individuals and healthcare providers to proactively address conditions before they become more severe [12,13,14,15]. Wearable devices can also encourage healthy behaviors, such as staying active, hydrating, and eating healthily, by providing incentives, reminders, and feedback informed by measurements such as hydration biomarkers and nutrient levels.
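As a rough illustration of how such sensor streams might be screened for early warning signs, the sketch below flags atypical heart-rate readings with a simple anomaly detector. The synthetic heart-rate data and the choice of an isolation forest are illustrative assumptions, not a method prescribed by the review.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
resting_hr = rng.normal(68, 4, size=(500, 1))   # typical resting readings (bpm)
episode_hr = rng.normal(115, 6, size=(5, 1))    # a few atypical spikes
readings = np.vstack([resting_hr, episode_hr])

# Fit on normal data only, then flag readings that deviate from that baseline.
detector = IsolationForest(contamination=0.01, random_state=0).fit(resting_hr)
flags = detector.predict(readings)              # -1 marks an anomaly

print(f"flagged {np.sum(flags == -1)} of {len(readings)} readings for review")
```

In a real deployment the flagged readings would be forwarded to a clinician or trigger a prompt to the wearer, rather than serving as a diagnosis on their own.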
