
Here’s my new Opinion article for Newsweek on brainwave technology and AI. Check it out!


Historically, our greatest strength has been our biological form, tested and evolved over millions of years. Instead of spending resources searching for ways to connect technology directly to our minds, we could find ways to use technology to protect our biological thoughts and proclivities. That might mean Faraday cages around our brains that no superintelligent AI's signals could crack, as well as encryption whose codes change perpetually and at random.

Another way to protect against AI is for humans to become like bugs, a concept recently explored in the Netflix series 3 Body Problem. Companies are already working on scanning the brain, down to its atoms, in real time. Eventually, the hope is that we'll be able to upload our consciousnesses into computers. There's open debate over whether an upload is the real you. But for the purposes of protecting ourselves against AI, another important question is how many uploads of you there would be. If AI were inundated with trillions upon trillions of uploaded human minds, it's possible that, like bugs, AI would never win a battle to get rid of all of us, even if it wanted to. There would simply be too many of us in the cloud, even if there was just one of us in the flesh.

Another way to outsmart AI might be to use brainwave technology to interconnect human minds. Some scientists call this the hive mind, and it could be possible in the future to link millions of minds in sync without the use of AI. AI might be able to corrupt the method of human hive-mind communication, but it's still another way we could attempt to remain as intelligent as AI. After all, if you could harness a billion minds together, who knows how smart we could be?

It's now not uncommon for tech roles to receive hundreds or even thousands of applicants. Round after round of layoffs since late 2022 has sent a mass of skilled tech workers job hunting, and the wide adoption of generative AI has also upended the recruitment process, allowing people to bulk-apply to roles. All of those eager for work are hitting a wall: overwhelmed recruiters and hiring managers.

WIRED spoke with seven recruiters and hiring managers across tech and other industries, who expressed trepidation about the new tech: much is still unknown about how and why AI makes the choices it does, and it has a history of making biased decisions. Before embracing it, they want to understand the reasoning behind its choices and to have more room for nuance; not all qualified applicants are going to fit a role perfectly, one recruiter tells WIRED.

Recruiters say they are met with droves of résumés sent through tools like LinkedIn's Easy Apply feature, which lets people apply for jobs quickly within the site's platform. Then there are third-party tools that write résumés or cover letters, and generative AI built into tools on major platforms like LinkedIn and Indeed, some aimed at job seekers, some at recruiters. These come alongside a growing number of tools that automate the recruiting process, leaving some workers wondering whether a person or a bot is looking at their résumé.

An AI that can predict the weather 3,000 times faster than a supercomputer, and a program that turns a text prompt into a virtual movie set: these are just two of the applications for AI powered by Nvidia's technology.


Jensen Huang leads Nvidia, a tech company with a skyrocketing stock and the most advanced technology for artificial intelligence.

Understanding and reasoning about program execution is a critical skill for developers, often applied during tasks like debugging and code repair. Traditionally, developers simulate code execution mentally or through debugging tools to identify and fix errors. Despite their sophistication, large language models (LLMs) trained on code have struggled to grasp the deeper, semantic aspects of program execution beyond the superficial textual representation of code. This limitation often affects their performance in complex software engineering tasks, such as program repair, where understanding the execution flow of a program is essential.
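To make the notion of an execution trace concrete, here is a minimal sketch, assuming Python, of how a line-by-line trace can be recorded with the standard-library hook sys.settrace. The names trace_execution and buggy_median are hypothetical, chosen only for illustration:

import sys

def trace_execution(func, *args):
    """Run func and record (line number, local variables) at each executed line."""
    trace = []

    def tracer(frame, event, arg):
        # Only record lines executed inside func itself.
        if event == "line" and frame.f_code is func.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always remove the hook
    return result, trace

def buggy_median(xs):
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid]  # wrong for even-length lists: ignores xs[mid - 1]

result, trace = trace_execution(buggy_median, [4, 1, 3, 2])
for lineno, local_vars in trace:
    print(lineno, local_vars)

Running this prints each executed line number alongside the local state at that moment, which is exactly the kind of runtime information a debugger exposes and a model trained only on source text never sees.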

Existing research in AI-driven software development includes several frameworks and models focused on enhancing code-execution reasoning. Notable examples include CrossBeam, which leverages execution states in sequence-to-sequence models, and specialized neural architectures such as instruction-pointer attention graph neural networks. Other approaches, such as the differentiable Forth interpreter and Scratchpad, integrate execution traces directly into model training to improve program synthesis and debugging. These methods pave the way for advanced reasoning about code, attending both to the process and to the dynamic states of execution within programming environments.

Researchers from Google DeepMind, Yale University, and the University of Illinois have proposed NExT, which introduces a novel approach by teaching LLMs to interpret and utilize execution traces, enabling more nuanced reasoning about program behavior during runtime. This method stands apart due to its incorporation of detailed runtime data directly into model training, fostering a deeper semantic understanding of code. By embedding execution traces as inline comments, NExT allows models to access crucial contexts that traditional training methods often overlook, making the generated rationales for code fixes more accurate and grounded in actual code execution.
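To give a flavor of the idea (the exact annotation format here is an assumption; the paper defines its own representation), a NExT-style input might look like a buggy program with observed runtime state inlined as comments, which the model can condition on when writing a rationale and a fix:

def average(xs):                 # called with xs=[3, 4]
    total = 0                    # trace: total=0
    for x in xs:
        total += x               # trace: x=3, total=3 -> x=4, total=7
    return total / len(xs) - 1   # trace: returns 2.5, but 3.5 was expected

Grounded in the trace, a model can see that total is correct at 7 and that the error is introduced on the return line, rather than having to guess from the source text alone.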

ETH Zurich researchers have developed a locomotion controller that enables wheeled-legged robots to autonomously navigate a variety of urban environments.

The robot was equipped with sophisticated navigational abilities thanks to a combination of machine learning algorithms. It was tested in the cities of Seville, Spain, and Zurich, Switzerland.

With little human assistance, the team's ANYmal wheeled-legged robot carried out kilometer-scale autonomous operations in urban settings.