drones – Lifeboat News: The Blog
https://lifeboat.com/blog
Safeguarding Humanity

Brain-like computer steers rolling robot with 0.25% of the power needed by conventional controllers
https://lifeboat.com/blog/2025/03/brain-like-computer-steers-rolling-robot-with-0-25-of-the-power-needed-by-conventional-controllers
Sun, 30 Mar 2025 18:22:30 +0000

A smaller, lighter and more energy-efficient computer, demonstrated at the University of Michigan, could help save weight and power for autonomous drones and rovers, with implications for autonomous vehicles more broadly.

The autonomous controller has among the lowest power requirements reported, according to the study published in Science Advances. It operates at a mere 12.5 microwatts—in the ballpark of a pacemaker.

In testing, a rolling robot using the controller was able to pursue a target zig-zagging down a hallway with the same speed and accuracy as with a conventional digital controller. In a second trial, with a lever-arm that automatically repositioned itself, the new controller did just as well.
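The headline's 0.25% figure and the 12.5-microwatt number together imply what a conventional controller draws. A quick back-of-envelope check (the conventional figure is inferred from these two numbers, not stated in the article):

```python
# If 12.5 uW is 0.25% of a conventional controller's power draw,
# the implied conventional figure follows by division.
# This is an inference from the article's numbers, not a reported value.
new_power_uw = 12.5        # microwatts, from the article
fraction = 0.0025          # 0.25%, from the headline
conventional_uw = new_power_uw / fraction
print(conventional_uw)     # 5000.0 microwatts, i.e. about 5 milliwatts
```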

Maxar launches GPS-alternative navigation system for drones
https://lifeboat.com/blog/2025/03/maxar-launches-gps-alternative-navigation-system-for-drones
Tue, 25 Mar 2025 22:06:41 +0000

WASHINGTON — Maxar Intelligence developed a visual-based navigation technology that enables aerial drones to operate without relying on GPS, the company announced March 25.

The software, called Raptor, provides a terrain-based positioning system for drones in GPS-denied environments by leveraging detailed 3D models created from Maxar’s satellite imagery. Instead of using satellite signals, a drone equipped with Raptor compares its real-time camera feed with a pre-existing 3D terrain model to determine its position and orientation.

Peter Wilczynski, chief product officer at Maxar Intelligence, explained that the Raptor software has three main components. One is installed directly on the drone, enabling real-time position determination. Another application georegisters the drone’s video feed with Maxar’s 3D terrain data. A separate laptop-based application works alongside drone controllers, allowing operators to extract precise ground coordinates from aerial video feeds.
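The map-matching idea behind this kind of GPS-free positioning can be sketched with a toy terrain-correlation search, in the spirit of classic TERCOM-style systems: slide an observed patch over a reference model and take the best-matching offset as the position fix. The elevation grid and patch below are made up, and Raptor itself matches live video against Maxar's 3D models rather than raw elevation grids:

```python
# Toy sketch of map-matching localization: find where an observed
# elevation patch best fits a reference terrain model. All data here
# is hypothetical, for illustration only.

def locate(reference, patch):
    """Return the (row, col) offset in `reference` where `patch`
    matches best (minimum sum of squared differences)."""
    pr, pc = len(patch), len(patch[0])
    best, best_pos = float("inf"), None
    for r in range(len(reference) - pr + 1):
        for c in range(len(reference[0]) - pc + 1):
            sse = sum(
                (reference[r + i][c + j] - patch[i][j]) ** 2
                for i in range(pr) for j in range(pc)
            )
            if sse < best:
                best, best_pos = sse, (r, c)
    return best_pos

# Reference terrain model (elevations) and an observed 2x2 patch.
terrain = [
    [10, 12, 14, 13],
    [11, 20, 25, 15],
    [ 9, 22, 30, 16],
    [ 8, 10, 12, 11],
]
observed = [[20, 25],
            [22, 30]]

print(locate(terrain, observed))  # → (1, 1)
```

Real systems replace the brute-force scan with feature matching and pose estimation against the 3D model, but the core principle (correlate what the sensor sees with a prior map) is the same.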

Russia has created a universal engine for drones
https://lifeboat.com/blog/2025/03/russia-has-created-a-universal-engine-for-drones
Tue, 25 Mar 2025 06:27:18 +0000

A new electric motor for drones has been developed at the Advanced Engineering School of the Moscow Aviation Institute. It can be fitted to various types of propeller-driven unmanned aerial vehicles (UAVs).

The future of mental privacy in the neurotechnology age | Nita Farahany
https://lifeboat.com/blog/2025/03/the-future-of-mental-privacy-in-the-neurotechnology-age-nita-farahany
Tue, 18 Mar 2025 18:37:09 +0000

Originally released December 2023. In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

They cover:
• How close we are to actual mind reading.
• How hacking neural interfaces could cure depression.
• How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
• How close we are to being able to unlock our phones by singing a song in our heads.
• How neurodata has been used for interrogations, and even criminal prosecutions.
• The possibility of linking brains to the point where you could experience exactly the same thing as another person.
• Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
• And plenty more.

In this episode:
• Luisa’s intro [00:00:00]
• Applications of new neurotechnology and security and surveillance [00:04:25]
• Controlling swarms of drones [00:12:34]
• Brain-to-brain communication [00:20:18]
• Identifying targets subconsciously [00:33:08]
• Neuroweapons [00:37:11]
• Neurodata and mental privacy [00:44:53]
• Neurodata in criminal cases [00:58:30]
• Effects in the workplace [01:05:45]
• Rapid advances [01:18:03]
• Regulation and cognitive rights [01:24:04]
• Brain-computer interfaces and cognitive enhancement [01:26:24]
• The risks of getting really deep into someone’s brain [01:41:52]
• Best-case and worst-case scenarios [01:49:00]
• Current work in this space [01:51:03]
• Watching kids grow up [01:57:03]

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.

Learn more, read the summary and find the full transcript on the 80,000 Hours website:

Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

Direct on-Chip Optical Communication between Nano Optoelectronic Devices
https://lifeboat.com/blog/2025/03/direct-on-chip-optical-communication-between-nano-optoelectronic-devicesclick-to-copy-article-linkarticle-link-copied
Sun, 16 Mar 2025 21:22:56 +0000

Contemplate a future where tiny, energy-efficient brain-like networks guide autonomous machines—like drones or robots—through complex environments. To make this a reality, scientists are developing ultra-compact communication systems where light, rather than electricity, carries information between nanoscale devices.

In this study, researchers achieved a breakthrough by enabling direct on-chip communication between tiny light-sensing devices called InP nanowire photodiodes on a silicon chip. This means that light can now travel efficiently from one nanoscale component to another, creating a faster and more energy-efficient network. The system proved robust, handling signals with up to 5-bit resolution, which is similar to the information-processing levels in biological neural networks. Remarkably, it operates with minimal energy—just 0.5 microwatts, which is lower than what conventional hardware needs.

The optical signals require minimal energy (about a quadrillionth of a joule each) and allow one emitter to communicate with hundreds of other nodes simultaneously. This efficient, scalable design meets the requirements for mimicking biological neural activity, especially in tasks like autonomous navigation.

In essence, this research moves us closer to creating compact, light-powered neural networks that could one day drive intelligent machines, all while saving space and energy.
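The two energy figures quoted in the article (0.5 microwatts of power, roughly a quadrillionth of a joule per signal) together imply a signaling rate. The derived rate is an inference from those numbers, not a reported measurement:

```python
# Back-of-envelope check combining the article's two figures:
# total link power (~0.5 microwatts) and per-signal energy
# (~1 femtojoule, a quadrillionth of a joule). The implied event
# rate is derived here, not quoted from the paper.
power_w = 0.5e-6               # 0.5 microwatts
energy_per_signal_j = 1e-15    # one quadrillionth of a joule (1 fJ)
signals_per_second = power_w / energy_per_signal_j
levels = 2 ** 5                # 5-bit resolution -> 32 distinguishable levels
print(f"{signals_per_second:.0e} signals/s at {levels} levels each")
```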

Scientists develop AI-powered digital twin model that can control and adapt its physical doppelganger
https://lifeboat.com/blog/2025/03/scientists-develop-ai-powered-digital-twin-model-that-can-control-and-adapt-its-physical-doppelganger
Thu, 13 Mar 2025 18:07:48 +0000

Scientists say they have developed a new AI-assisted model of a digital twin with the ability to adapt and control the physical machine in real time.

The discovery, reported in the journal IEEE Access, adds a new dimension to the digital copies of real-world machines, like robots, drones, or even autonomous cars, according to the authors.

Digital twins are exact virtual replicas of things in the physical world. They are likened to video-game versions of the real machines they mirror, constantly updated with real-time data.
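The control loop this implies (mirror the machine's state from sensor data, then push decisions back to the hardware) can be sketched minimally. The class, field names, and the 80 °C threshold below are illustrative assumptions, not details from the IEEE Access paper:

```python
# Minimal sketch of a digital-twin update/control loop: ingest live
# sensor readings to keep the virtual copy synchronized, then derive
# control actions for the physical machine. Names and thresholds are
# hypothetical, for illustration only.

class DigitalTwin:
    def __init__(self):
        self.state = {}

    def ingest(self, sensor_reading):
        # Keep the virtual copy synchronized with the physical machine.
        self.state.update(sensor_reading)

    def decide(self):
        # Adaptive control: throttle back if the motor runs hot
        # (the 80 C threshold is an assumed example value).
        if self.state.get("motor_temp_c", 0) > 80:
            return {"throttle": 0.5}
        return {"throttle": 1.0}

twin = DigitalTwin()
twin.ingest({"motor_temp_c": 85, "speed_mps": 4.2})
print(twin.decide())  # → {'throttle': 0.5}
```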

AI could supercharge human collective intelligence in everything from disaster relief to medical research
https://lifeboat.com/blog/2025/03/ai-could-supercharge-human-collective-intelligence-in-everything-from-disaster-relief-to-medical-research
Sat, 08 Mar 2025 23:05:37 +0000

Imagine a large city recovering from a devastating hurricane. Roads are flooded, the power is down, and local authorities are overwhelmed. Emergency responders are doing their best, but the chaos is massive.

AI-controlled drones survey the damage from above, while AI systems process data from sensors on the ground and in the air to identify which neighborhoods are most vulnerable.

Meanwhile, AI-equipped robots are deployed to deliver food, water and supplies into areas that human responders can’t reach. Emergency teams, guided and coordinated by AI and the insights it produces, are able to prioritize their efforts, sending rescue squads where they’re needed most.

Cursor In Talks To Raise Funds At A $10B Valuation As The AI Coding Sector Booms
https://lifeboat.com/blog/2025/03/cursor-in-talks-to-raise-funds-at-a-10b-valuation-as-the-ai-coding-sector-booms
Sat, 08 Mar 2025 18:17:18 +0000

In today’s AI news, investor interest in AI coding assistants is exploding. Anysphere, the developer of AI-powered coding assistant Cursor, is in talks with venture capitalists to raise capital at a valuation of nearly $10 billion, Bloomberg reported. The round, if it transpires, would come about three months after Anysphere completed its previous fundraise of $100 million at a pre-money valuation of $2.5 billion.

And, there’s a new voice model in town, and it’s called Sesame. As he so often does, John Werner got a lot of information on this new technology from Nathaniel Whittemore at AI Daily Brief, where he covered interest in this conversational AI. Quoting Deedy Das of Menlo Ventures, who called Sesame “the GPT-3 moment for voice,” Whittemore talked about what he called an “incredible explosion” of voice-based models happening now.

In other advancements, along with the new M4 MacBook Pro series Apple is releasing, the company is also quite proud of the new Mac mini. The Mac mini is arguably the more radical of the two. Apple’s diminutive computer has now received its first major design overhaul in 13 years. And this new tiny computer is the perfect machine for experimenting with and learning AI.

In funding news, Shield AI, the San Diego defense tech startup that builds drones and other AI-powered military systems, announced it has raised a $240 million round at a $5.3 billion valuation, making it one of the biggest defense tech startups by valuation.

In videos, while he hardly needs an introduction, few leaders have shaped the future of technology quite like Satya Nadella. He stepped into Microsoft’s top job at a catalytic moment—making bold bets on the cloud, embedding AI into the fabric of computing, all while staying true to Microsoft’s vision of becoming a “software factory.”

Manus doesn’t just think, it delivers results. Manus excels at various tasks in work and life, getting everything done while you rest.

Then, join Boris Starkov and Anton Pidkuiko, the developers behind GibberLink, for a fireside chat with Luke Harries from ElevenLabs. On February 24, Georgi Gerganov, the creator of the GGwave protocol, showcased their demo at the ElevenLabs London hackathon on X, garnering attention from around the world—including Forbes, TechCrunch, and the entire developer community.

We close out with, Sam Witteveen looking at the latest release from Mistral AI, which is their Mistral OCR model. He looks at how it works and how it compares to other models, as well as how you can get started using it with code.

That’s all for today, but AI is moving fast — subscribe and follow for more Neural News.

Zero Zero Robotics unveils world’s first sub-250g bi-copter drone with ‘lightning fast acceleration’ and ‘unmatched agility’
https://lifeboat.com/blog/2025/03/zero-zero-robotics-unveils-worlds-first-sub-250g-bi-copter-drone-with-lightning-fast-acceleration-and-unmatched-agility
Tue, 04 Mar 2025 13:08:44 +0000

Forget DJI drones – the Falcon Mini looks like way more fun to fly.

Gaussian Process Regression Hybrid Models for the Top-of-Atmosphere Retrieval of Vegetation Traits Applied to PRISMA and EnMAP Imagery
https://lifeboat.com/blog/2025/03/gaussian-process-regression-hybrid-models-for-the-top-of-atmosphere-retrieval-of-vegetation-traits-applied-to-prisma-and-enmap-imagery
Sun, 02 Mar 2025 16:30:43 +0000

Satellite-based optical remote sensing from missions such as ESA’s Sentinel-2 (S2) has emerged as a valuable tool for continuously monitoring the Earth’s surface, making it particularly useful for quantifying key cropland traits in the context of sustainable agriculture [1]. Upcoming operational imaging spectroscopy satellite missions will have an improved capability to routinely acquire spectral data over vast cultivated regions, thereby providing an entire suite of products for agricultural system management [2]. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) [3] will complement the multispectral Copernicus S2 mission, providing enhanced services for sustainable agriculture [4, 5].

To use satellite spectral data for quantifying vegetation traits, it is crucial to remove the absorption and scattering effects caused by molecules and aerosols in the atmosphere from the measured satellite data. This processing step, known as atmospheric correction, converts top-of-atmosphere (TOA) radiance data into bottom-of-atmosphere (BOA) reflectance, and it is one of the most challenging satellite data processing steps, e.g., [6, 7, 8]. Atmospheric correction relies on the inversion of an atmospheric radiative transfer model (RTM) to obtain surface reflectance, typically through the interpolation of large precomputed lookup tables (LUTs) [9, 10]. LUT interpolation errors, the intrinsic uncertainties of atmospheric RTMs, and the ill-posedness of the inversion of atmospheric characteristics all generate uncertainties in atmospheric correction [11]. In addition, topographic, adjacency, and bidirectional surface reflectance corrections are usually applied sequentially in processing chains, which can accumulate errors in the BOA reflectance data [6].
Thus, despite its importance, the inversion of surface reflectance data unavoidably introduces uncertainties that can affect downstream analyses and impact the accuracy and reliability of subsequent products and algorithms, such as vegetation trait retrieval [12]. To put it another way, owing to the critical role of atmospheric correction in remote sensing, the accuracy of vegetation trait retrievals is prone to uncertainty when atmospheric correction is not properly performed [13].

Although advanced atmospheric correction schemes have become an integral part of the operational processing of satellite missions, e.g., [9,14,15], standardised, exhaustive atmospheric correction schemes remain less prevalent in drone, airborne, and scientific satellite missions, e.g., [16,17]. The complexity of atmospheric correction increases further when moving from multispectral to hyperspectral data, where rigorous correction must be applied to hundreds of narrow contiguous spectral bands, e.g., [6,8,18]. For this reason, and to bypass these challenges, several studies have instead proposed to infer vegetation traits directly from radiance data at the top of the atmosphere [12,19,20,21,22,23,24,25,26].
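The TOA-based hybrid strategy these studies describe can be sketched in miniature: simulate (TOA radiance, trait) pairs with an RTM, fit a Gaussian process to them, and predict traits directly from new TOA measurements, bypassing atmospheric correction. The single scalar radiance feature, the training values, and the kernel length-scale below are all illustrative assumptions; real hybrid models use full spectra and large simulated databases.

```python
import math

# Toy hybrid-retrieval sketch: Gaussian process regression on
# RTM-simulated (TOA radiance feature, vegetation trait) pairs.
# All numbers are hypothetical, for illustration only.

def rbf(a, b, length=0.5):
    # Squared-exponential (RBF) kernel.
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical simulated training set: TOA radiance feature -> trait value.
X = [0.2, 0.5, 0.8]
y = [0.9, 2.1, 3.4]

noise = 1e-6  # small jitter on the diagonal for numerical stability
K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha = solve(K, y)  # GP weights: K @ alpha = y

def predict(x_new):
    # Posterior mean at x_new: k(x_new, X) @ alpha.
    return sum(rbf(x_new, xi) * ai for xi, ai in zip(X, alpha))

print(round(predict(0.5), 2))  # interpolates the training point, close to 2.1
```

The appeal of the hybrid approach is that the expensive physics lives entirely in the offline simulation step; at retrieval time, prediction is a cheap kernel evaluation per pixel.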
