
Rotor R550X: A full-size autonomous helicopter anyone can buy

Rotor Technologies is now in production on a full-size unmanned helicopter for civilian use. Based on the Robinson R44 Raven II, the R550X flies for more than three hours, at speeds up to 150 mph (241 km/h), carrying up to 1,200 lb (550 kg) of cargo.

According to Torklaw, helicopters have about 9.84 crashes per 100,000 hours of flight time. That’s curiously low given their reputation, though still higher than the 7.28 crashes per 100,000 hours for “general aircraft.” Either way, they’re notoriously tricky to fly, and a growing number of projects are attempting to make them much easier, using simple fly-by-wire joystick controls, or even simpler one-finger tablet control schemes.

Safest of all, of course, is to leave the humans on the ground altogether, and that’s what New Hampshire company Rotor Technologies has been focused on from its modest hangar at Nashua Airport, about 30 miles (50 km) outside Boston. It’s been flying two R22-based autonomous chopper prototypes since December last year, across nine locations in New Hampshire, Idaho and Oregon. It wrapped up its test campaign in November, having logged “more than 20 hours” of flight time.

DNA-folding nanorobots can manufacture limitless copies of themselves

Researchers have demonstrated a programmable nano-scale robot, made from a few strands of DNA, that’s capable of grabbing other snippets of DNA, and positioning them together to manufacture new UV-welded nano-machines – including copies of itself.

The robots, according to New Scientist, are created using just four strands of DNA and measure just 100 nanometers across, so about a thousand of them could line up across the width of a human hair.

The team, from New York University, the Ningbo Cixi Institute of Biomechanical Engineering, and The Chinese Academy of Sciences, says the robots surpass previous efforts, which were only able to assemble pieces into two-dimensional shapes. The new bots are able to use “multiple-axis precise folding and positioning” to “access the third dimension and more degrees of freedom.”

Google Admits Gemini AI Promotional Video Was Fabricated

Earlier this week, Google launched its Gemini AI platform that ‘wowed’ the tech world. A video posted on YouTube showcased the new AI model’s capabilities to process and reason with text, images, audio, video, and code. However, it has since come to light that Google staged the hands-on video demonstration of Gemini AI.

Bloomberg reports that Google modified interactions with Gemini AI to create the demonstration video. The video is titled “Hands-on with Gemini: Interacting with Multimodal AI.”

Google admitted in the video’s description: “For the purposes of this demo, latency has been reduced, and Gemini outputs have been shortened for brevity.” In other words, the model’s responses actually take much longer than the video suggested.

Europe reaches a deal on the world’s first comprehensive AI rules

LONDON (AP) — European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity.

Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

“Deal!” tweeted European Commissioner Thierry Breton, just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.”

Using Generative AI As An Interactive Rage-Room Chatbot Raises Mental Health Guidance Qualms

In today’s column, I am continuing my ongoing series that has been closely exploring the use of generative AI as a generalized interactive chatbot that imparts mental health guidance.

But serious and sobering qualms exist. There isn’t a pre-check to validate that someone ought to be resorting to generic generative AI for such advisement. There isn’t any ironclad certification of the generative AI for use in this specific capacity. The guardrails of the generative AI might not be sufficient to avoid dispensing ill-advised guidance. So-called AI hallucinations can arise (as an aside, I demonstrably disfavor the term “AI hallucination,” for the reasons stated at the link here, but it generally connotes that generative AI can produce specious or fabricated answers). And so on.

All in all, you might declare that we are immersed in the Wild West of AI-based human mental health advisement, which is taking place surreptitiously yet in plain sight, and lacks the traditional kinds of checks and balances that society expects to be protectively instilled.

I’ve got a bit of an additional surprise for you. Consider a new facet that you might find notably intriguing and at the same time disturbing. It is the latest novelty approach to veer into the mental health realm: controversially using generative AI in a rage-room capacity.
