
Here’s my latest Opinion piece just out for Newsweek…focusing on cyborg rights.


Over the past half-century, the microprocessor’s capacity has doubled approximately every 18–24 months, and some experts predict that by 2030, machine intelligence could surpass human capabilities. The question then arises: When machines reach human-level intelligence, should they be granted protection and rights? Will they desire and perhaps even demand such rights?

Beyond advancements in microprocessors, we’re witnessing breakthroughs in genetic editing, stem cells, and 3D bioprinting, all of which also hold the potential to help create cyborg entities displaying consciousness and intelligence. Notably, Yale University’s experiments reviving cellular activity in dead pig brains have ignited debates in the animal rights realm, raising questions about the ethical implications of reviving consciousness.

Amid these emerging scientific frontiers, a void in ethical guidelines exists, akin to the Wild West of the impending cyborg age. To address these ethical challenges, a slew of futurist-oriented bills of rights have emerged in the last decade. One of the most prominent is the Transhumanist Bill of Rights, which is in its third revision through crowdsourcing and was published verbatim by Wired in 2018.

Recent studies have found that Gires-Tournois (GT) biosensors, a type of nanophotonic resonator, can detect minuscule virus particles and produce colorful micrographs (images taken through a microscope) of viral loads. But they suffer from visual artifacts and non-reproducibility, limiting their utilization.

In a recent breakthrough, an international team of researchers, led by Professor Young Min Song from the School of Electrical Engineering and Computer Science at Gwangju Institute of Science and Technology in Korea, has leveraged artificial intelligence (AI) to overcome this problem. Their work was published in Nano Today.

Rapid and on-site diagnostic technologies for identifying and quantifying viruses are essential for planning treatment strategies for infected patients and preventing further spread of the infection. The COVID-19 pandemic has highlighted the need for accurate yet decentralized diagnostics that do not involve the complex and time-consuming processes required by conventional laboratory-based tests.

Quantum mechanics is full of weird phenomena, but perhaps none as weird as the role measurement plays in the theory. Since a measurement tends to destroy the “quantumness” of a system, it seems to be the mysterious link between the quantum and classical world. And in a large system of quantum bits of information, known as “qubits,” the effect of measurements can induce dramatically new behavior, even driving the emergence of entirely new phases of quantum information.

This happens when two competing effects come to a head: interactions and measurement. In a quantum system, when the qubits interact with one another, their information becomes shared nonlocally in an “entangled state.” But if you measure the system, the entanglement is destroyed. The battle between measurement and interactions leads to two phases: one where interactions dominate and entanglement is widespread, and one where measurements dominate and entanglement is suppressed.

As reported in the journal Nature, researchers at Google Quantum AI and Stanford University have observed the crossover between these two regimes—known as a “measurement-induced phase transition”—in a system of up to 70 qubits. This is by far the largest system in which measurement-induced effects have been explored.
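To make the mechanism concrete, here is a minimal numerical sketch (Python with NumPy) of a small monitored random circuit. It is a toy illustration, not the Google/Stanford experiment: the qubit count, circuit depth, brick-wall pattern of Haar-random two-qubit gates, and measurement rates are all arbitrary choices made for this sketch. Sweeping the per-qubit measurement probability shows the qualitative behavior described above: with rare measurements the half-chain entanglement entropy stays large, while frequent measurements suppress it.

```python
# Toy sketch of a monitored random circuit (illustrative only, not the
# Google/Stanford setup): random two-qubit gates entangle neighboring qubits,
# and each qubit is projectively measured with probability p after each layer.
import numpy as np

rng = np.random.default_rng(0)
N = 8          # number of qubits (a toy size, far below the 70 in the paper)
DEPTH = 30     # number of circuit layers

def haar_unitary(dim):
    """Sample a Haar-random unitary via QR decomposition with phase fix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_two_qubit(state, u, i):
    """Apply a 4x4 unitary u to adjacent qubits (i, i+1)."""
    psi = np.moveaxis(state.reshape([2] * N), [i, i + 1], [0, 1]).reshape(4, -1)
    psi = u @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (N - 2)), [0, 1], [i, i + 1])
    return psi.reshape(-1)

def measure_qubit(state, i):
    """Projectively measure qubit i in the Z basis and collapse the state."""
    psi = np.moveaxis(state.reshape([2] * N), i, 0).reshape(2, -1)
    p0 = np.sum(np.abs(psi[0]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    psi[1 - outcome] = 0.0
    psi /= np.linalg.norm(psi)
    return np.moveaxis(psi.reshape([2] * N), 0, i).reshape(-1)

def half_chain_entropy(state):
    """Von Neumann entanglement entropy (in bits) of the left half-chain."""
    s = np.linalg.svd(state.reshape(2 ** (N // 2), -1), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def run(p_measure):
    state = np.zeros(2 ** N, dtype=complex)
    state[0] = 1.0
    for layer in range(DEPTH):
        for i in range(layer % 2, N - 1, 2):       # brick-wall gate pattern
            state = apply_two_qubit(state, haar_unitary(4), i)
        for i in range(N):                          # sparse vs. dense monitoring
            if rng.random() < p_measure:
                state = measure_qubit(state, i)
    return half_chain_entropy(state)

for p in (0.05, 0.5):
    print(f"measurement rate {p}: half-chain entropy ~ {run(p):.2f} bits")
```

A brute-force statevector simulation like this one is limited to a handful of qubits, since the state grows as 2^N, which is part of what makes a 70-qubit observation of the transition notable.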

We know remarkably little about how AI systems work, so how will we know if AI becomes conscious?

Many people in AI will be familiar with the story of the Mechanical Turk. It was a chess-playing machine built in 1770, and it was so good its opponents were tricked into believing it was supernaturally powerful. In reality, the machine had space for a human to hide inside it and control it. The hoax went on for 84 years. That’s three generations!

History is rich with examples of people trying to breathe life into inanimate objects, and of people selling hacks and tricks as “magic.” But this very human desire to believe in consciousness in machines has never matched up with reality.

Smartphone sales have had their worst quarterly performance in over a decade, a fact that raises two big questions. Have the latest models finally bored the market with mere incremental improvements? And if they have, what will the next form factor (and function) be? Today a deep tech startup called Xpanceo is announcing $40 million in funding from a single investor, Opportunity Ventures in Hong Kong, to pursue its take on one of the possible answers to that question: computing devices in the form of smart contact lenses.

The company wants to make tech simpler, and it believes the way to do that is to make it seamless and more connected to how we operate every day. “All current computers will be obsolete [because] they’re not interchangeable,” said Roman Axelrod, who co-founded the startup with material scientist and physicist Valentyn S. Volkov. “We are enslaved by gadgets.”

With a focus on new materials, moving away from silicon-based processing and toward new approaches to optoelectronics, Xpanceo’s modest ambition, Axelrod said in an interview, is to “merge all the gadgets into one, to provide humanity with a gadget with an infinite screen. What we aim for is to create the next generation of computing.”

Xpanceo was founded in 2021 and is based out of Dubai, and before now it has been bootstrapped. Its team of more than 50 scientists and engineers has so far been working mainly on different prototypes of lenses and all of the hard work that goes into that. The move away from silicon and toward optoelectronics, for example, has driven a need for ever-smaller materials that can emit and read light, Volkov said. The company has likened the development of 2D materials like graphene to what it is pursuing with new materials for contact lenses.

“We have kind of developed our own niche [in 2D materials] and now we use this knowledge as a backbone for our contact lens prototypes,” Volkov said in an interview.

Adobe will premiere the first-ever TV commercial powered by its Firefly generative AI during high-profile sports broadcasts on Monday night. The commercial for Adobe Photoshop highlights creative capabilities enabled by the company’s AI technology.

Set to air during MLB playoffs and Monday Night Football, two of the most-watched live events on television, the new Adobe spot will showcase Photoshop’s Firefly-powered Generative Fill feature. Generative Fill uses AI to transform images based on text prompts.

With Adobe’s new commercial, generative AI will enter the mainstream spotlight, reaching audiences beyond just tech circles. While early adopters have embraced AI tools, a recent study found 44% of U.S. workers have yet to use generative AI, indicating its capabilities remain unknown to many.

The high-profile ad also lets Adobe showcase its AI leadership against rivals like OpenAI’s DALL-E in the increasingly competitive space of generative design. With AI capabilities now embedded in many tools, the commercial provides a chance for Adobe to demonstrate its edge and differentiate Photoshop for creative professionals.

Since Firefly launched in March, users have created over 3 billion AI-generated images with it, establishing it as the most popular commercial model globally. Adobe is betting that primetime viewers are now ready to embrace the creative potential of its AI.

Jailbroken large language models (LLMs) and generative AI chatbots — the kind any hacker can access on the open Web — are capable of providing in-depth, accurate instructions for carrying out large-scale acts of destruction, including bio-weapons attacks.

An alarming new study from RAND, the US nonprofit think tank, offers a canary in the coal mine for how bad actors might weaponize this technology in the (possibly near) future.

In an experiment, experts asked an uncensored LLM to plot out theoretical biological weapons attacks against large populations. The model was detailed in its response and more than forthcoming in its advice on how to cause the most damage possible and how to acquire relevant chemicals without raising suspicion.

“For AI to be motivated towards a goal, it must know what it wants.”

There are more possible board configurations in a game of Go than atoms in the known universe, but it’s still a finite number. In the real world, there are infinite possibilities for what might happen next, and uncertainty is rampant. How realistic, then, is AGI?
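As a quick back-of-the-envelope check of that scale comparison (the numbers below come from standard estimates, not from the paper itself), a few lines of Python reproduce it:

```python
# Illustrative check of the Go-vs-atoms comparison (not from the article).
# Each of the 361 points on a 19x19 board is empty, black, or white, giving
# at most 3^361 raw configurations. The count of *legal* positions is smaller,
# roughly 2.1e170 by Tromp's 2016 enumeration, but either figure dwarfs the
# ~10^80 atoms commonly estimated for the observable universe.
from math import log10

raw_configurations = 3 ** 361
print(f"3^361 is about 10^{log10(raw_configurations):.0f}")   # about 10^172
print("atoms in the observable universe: about 10^80")
```

Either way the number is unimaginably large yet finite, which is exactly the contrast the paper draws with the open-ended real world.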

A recent research paper published in Frontiers in Ecology and Evolution explores obstacles to AGI. Biological systems with degrees of general intelligence — organisms ranging from humble microbes to the humans reading this — are capable of improvising to meet their goals. What prevents AI from improvising?