
Tesla has shared a video of a hands-free drive demonstration of its Full Self-Driving suite in Austin. The FSD suite is not available to customers in hands-free form, but Tesla disabled the requirement for a new video it shared on X, formerly known as Twitter.

Tesla shared the video to demonstrate the capabilities of software version 11.4.7, the current release of the FSD Beta program.

In the tweet it put up, the automaker describes how the Full Self-Driving suite improves through data-driven techniques that refine its capabilities by analyzing other drivers’ behavior and normal navigation habits.

That spider you squished? It could have been used for science!

At least, that’s what Faye Yap and Daniel Preston think. Yap is a mechanical engineering PhD student in Preston’s lab at Rice University, where she co-authored a paper on reanimating spider corpses to create grippers, or tiny machines used to pick up and put down delicate objects. Yap and Preston dubbed this use of biotic materials for robotic parts “necrobotics” – and think this technique could one day become a cheap, green addition to the field.

Autonomous shopping carts that follow grocery store customers and robots that pick ripe cucumbers faster than humans may grab headlines, but the most compelling applications of AI and ML technology are behind the scenes. Increasingly, organizations are finding substantial efficiency gains by applying AI- and ML-powered tools to back-office procedures such as document processing, data entry, employee onboarding, and workflow automation.

The power of automation to augment productivity in the back office has been clear for decades, but the recent emergence of advanced AI and ML tools offers a step change in what automation can accomplish, including in highly regulated industries such as health care.

Transformers are machine learning models designed to uncover and track patterns in sequential data, such as text sequences. In recent years, these models have become increasingly sophisticated, forming the backbone of popular conversational platforms, such as ChatGPT.

While existing transformers have achieved good results in a variety of tasks, their performance often declines significantly when processing longer sequences. This is due to their limited storage capacity, or, in other words, the small amount of data they can store and analyze at once.
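To make the bottleneck concrete: in standard self-attention, every token is scored against every other token, so the score matrix, and the memory needed to hold it, grows with the square of the sequence length. The toy NumPy sketch below illustrates that scaling; the dimensions and the `attention_cost` helper are illustrative only, not taken from any particular model.

```python
import numpy as np

def attention_cost(seq_len: int, d_model: int = 64) -> int:
    """Count the entries in a standard self-attention score matrix."""
    q = np.random.randn(seq_len, d_model)  # toy query vectors
    k = np.random.randn(seq_len, d_model)  # toy key vectors
    scores = q @ k.T  # every token scored against every other token
    return scores.size  # seq_len * seq_len entries

for n in (512, 1024, 2048):
    print(n, attention_cost(n))
# 512  ->   262,144 entries
# 1024 -> 1,048,576 entries (doubling the input quadruples the cost)
# 2048 -> 4,194,304 entries
```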

Researchers at Sungkyunkwan University in South Korea recently developed a new memory system that could help to improve the performance of transformers on tasks characterized by longer data sequences. This system, introduced in a paper published on the arXiv preprint server, is inspired by a prominent theory of human memory, known as Hebbian theory.
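Hebbian theory is often summarized as “neurons that fire together wire together”: the strength of a connection grows in proportion to the correlated activity of the units it links. The sketch below applies the classic Hebbian outer-product rule to a small associative memory; it is a textbook illustration of the principle the researchers draw on, not a reconstruction of the memory system proposed in the paper.

```python
import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian rule: each weight grows with the product of the
    activities of the two units it connects, summed over patterns."""
    n_units = patterns.shape[1]
    weights = patterns.T @ patterns / n_units
    np.fill_diagonal(weights, 0)  # no self-connections
    return weights

def recall(weights: np.ndarray, cue: np.ndarray, steps: int = 5) -> np.ndarray:
    """Repeatedly push a noisy cue toward the nearest stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))  # 3 patterns, 64 units
weights = store(patterns)
# Corrupt ~15% of the first pattern's bits, then try to recover it.
noisy = patterns[0] * rng.choice([1.0, -1.0], size=64, p=[0.85, 0.15])
print(np.array_equal(recall(weights, noisy), patterns[0]))  # usually True
```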

Dogs of War bots.


A quadrupedal robot armed with a rocket launcher or other weapons, including small arms, could also be used to scout ahead of friendly forces while retaining the ability to immediately engage any threats it finds.

Uncrewed ground systems like this can also get into and out of spaces a person might not be able to reach at all, which could be particularly useful when maneuvering through dense urban environments. The U.S. military sees operations in large built-up areas as a key component of any future major conflict.

This is, of course, not the first time the U.S. military has explored the idea of a small armed uncrewed ground vehicle that could accompany even very small units. Designs based on tracked robots originally built for explosive ordnance disposal work have been, and continue to be, developed.

Here’s my latest Opinion piece just out for Newsweek… focusing on cyborg rights.


Over the past half-century, the microprocessor’s capacity has doubled approximately every 18–24 months, and some experts predict that by 2030, machine intelligence could surpass human capabilities. The question then arises: When machines reach human-level intelligence, should they be granted protection and rights? Will they desire and perhaps even demand such rights?

Beyond advancements in microprocessors, we’re witnessing breakthroughs in genetic editing, stem cells, and 3D bioprinting, all of which also hold the potential to help create cyborg entities displaying consciousness and intelligence. Notably, Yale University’s experiments stimulating dead pig brains have ignited debates in the animal rights realm, raising questions about the ethical implications of reviving consciousness.

Amid these emerging scientific frontiers, a void in ethical guidelines exists, akin to the Wild West of the impending cyborg age. To address these ethical challenges, a slew of futurist-oriented bills of rights have emerged in the last decade. One of the most prominent is the Transhumanist Bill of Rights, which is in its third revision through crowdsourcing and was published verbatim by Wired in 2018.

These cyborg bills encompass a broad array of protections, including safeguards for thinking robots, gender recognition for virtual intelligences, regulations for genetically engineered sapient beings, and the defense of freedoms for biohackers modifying their bodies. Some also incorporate tech-driven rules to combat environmental threats like asteroids, pandemics, and nuclear war.

Recent studies have found that Gires-Tournois (GT) biosensors, a type of nanophotonic resonator, can detect minuscule virus particles and produce colorful micrographs (images taken through a microscope) of viral loads. But they suffer from visual artifacts and non-reproducibility, limiting their utilization.

In a recent breakthrough, an international team of researchers, led by Professor Young Min Song from the School of Electrical Engineering and Computer Science at Gwangju Institute of Science and Technology in Korea, has leveraged artificial intelligence (AI) to overcome this problem. Their work was published in Nano Today.

Rapid and on-site diagnostic technologies for identifying and quantifying viruses are essential for planning treatment strategies for infected patients and preventing further spread of the infection. The COVID-19 pandemic has highlighted the need for accurate yet decentralized diagnostic methods that do not involve the complex and time-consuming processes required by conventional laboratory-based tests.

Quantum mechanics is full of weird phenomena, but perhaps none as weird as the role measurement plays in the theory. Since a measurement tends to destroy the “quantumness” of a system, it seems to be the mysterious link between the quantum and classical world. And in a large system of quantum bits of information, known as “qubits,” the effect of measurements can induce dramatically new behavior, even driving the emergence of entirely new phases of quantum information.

This happens when two competing effects come to a head: interactions and measurement. In a quantum system, when the qubits interact with one another, their information becomes shared nonlocally in an “entangled state.” But if you measure the system, the entanglement is destroyed. The battle between measurement and interactions leads to two distinct phases: one where interactions dominate and entanglement is widespread, and one where measurements dominate and entanglement is suppressed.
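In the research literature, these two regimes are usually told apart by how the entanglement entropy S(A) of a subsystem A scales with the subsystem’s size. The characterization below is standard background from that literature, not a detail taken from the experiment described next:

```latex
S(A) \;\sim\;
\begin{cases}
\alpha\,|A| & \text{interactions dominate (``volume-law'' entanglement)} \\
\mathrm{const.} & \text{measurements dominate (``area-law'' entanglement)}
\end{cases}
```

Crossing the transition thus shows up as a change from entanglement entropy that grows with subsystem size to entropy that saturates.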

As reported in the journal Nature, researchers at Google Quantum AI and Stanford University have observed the crossover between these two regimes—known as a “measurement-induced phase transition”—in a system of up to 70 qubits. This is by far the largest system in which measurement-induced effects have been explored.

We know remarkably little about how AI systems work, so how will we know if AI becomes conscious?

Many people in AI will be familiar with the story of the Mechanical Turk. It was a chess-playing machine built in 1770, and it was so good its opponents were tricked into believing it was supernaturally powerful. In reality, the machine had space for a human to hide inside and control it. The hoax went on for 84 years. That’s three generations!

History is rich with examples of people trying to breathe life into inanimate objects, and of people selling hacks and tricks as “magic.” But this very human desire to believe in consciousness in machines has never matched up with reality.

Smartphone sales have had their worst quarterly performance in over a decade, a fact that raises two big questions. Have the latest models finally bored the market with mere incremental improvements? And if they have, what will the next form factor (and function) be? Today a deep tech startup called Xpanceo is announcing $40 million in funding from a single investor, Opportunity Ventures in Hong Kong, to pursue its take on one of the possible answers to that question: computing devices in the form of smart contact lenses.

The company wants to make tech simpler, and it believes the way to do that is to make it seamless and more connected to how we operate every day. “All current computers will be obsolete [because] they’re not interchangeable,” said Roman Axelrod, who co-founded the startup with material scientist and physicist Valentyn S. Volkov. “We are enslaved by gadgets.”

With a focus on new materials, moving away from silicon-based processing and toward new approaches to optoelectronics, Xpanceo’s modest ambition, Axelrod said in an interview, is to “merge all the gadgets into one, to provide humanity with a gadget with an infinite screen. What we aim for is to create the next generation of computing.”

Xpanceo was founded in 2021, is based out of Dubai, and has been bootstrapped until now. Its team of more than 50 scientists and engineers has mainly been working on different prototypes of lenses and all of the hard work that goes into that. The move away from silicon and toward optoelectronics, for example, has driven a need for ever-smaller materials that can emit and read light, Volkov said. The company has likened the development of 2D materials like graphene to what it is pursuing with new materials for contact lenses.

“We have kind of developed our own niche [in 2D materials] and now we use this knowledge as a backbone for our contact lens prototypes,” Volkov said in an interview.

Alongside this, the company has developed an AI platform to help develop its frameworks. It describes “neural interfacing” as the technique it will use to give wearers of its lenses full control over applications without them needing to use “awkward” eye movements or extra controllers. (Some prototypes of other smart or connected lenses involve users lowering eyelids to change functions, for example.)