The chameleon, a lizard known for its color-changing skin, is the inspiration behind a new electromagnetic material that could someday make vehicles and aircraft “invisible” to radar.

As reported today in the journal Science Advances, a team of UC Berkeley engineers has developed a tunable metamaterial microwave absorber that can switch between absorbing, transmitting or reflecting microwaves on demand by mimicking the chameleon’s color-changing mechanism.

“A key discovery was the ability to achieve both broadband absorption and high transmission in a single structure, offering adaptability in dynamic environments,” said Grace Gu, principal investigator of the study and assistant professor of mechanical engineering. “This flexibility has wide-ranging applications, from advanced communication systems to energy harvesting.”

Anthropic CEO Dario Amodei said Thursday (Jan. 23) that accelerated advances in artificial intelligence (AI), particularly in biology, can lead to a doubling of human lifespans in as little as five to 10 years “if we really get this AI stuff right.”

During a panel at the World Economic Forum in Davos, Amodei called this the “grand vision.” He explained that if AI today can shrink a century’s worth of work in biology to five to 10 years, and if one believes it would take 100 years to double the average length of human life, then “a doubling of the human lifespan is not at all crazy, and if AI is able to accelerate that we may be able to get that in five to 10 years.”

Amodei also said that Anthropic is working on a “virtual collaborator,” an AI agent capable of doing higher-level tasks in the workplace such as opening Google Docs, using the Slack messaging platform, and interacting with workers. A manager will only need to check in with this AI agent “once in a while,” similar to what management does with human employees.

A study led by scientists at Rutgers University-New Brunswick has shown that specialized cells involved in how the body responds to insulin are activated in the brain after exercise, suggesting that physical activity may directly improve brain function.

The combination problem may, in fact, be a reason to favor a version of panpsychism in which consciousness is fundamental in the form of a continuous, pervasive field, analogous to spacetime. Just as spacetime and gravity have an interactive relationship, consciousness can be thought of as a fundamental “field” that interacts with, and is integral to, matter. We typically don’t think of spacetime as bits and pieces that build on each other (it’s simply everywhere), and I don’t think we should be tempted to think of consciousness, if it is indeed a pervasive field, as divisible into building blocks either. Rather, it makes more sense to talk about a field that contains a range of content—the content depending on the other forces or fields it’s interacting with. In the same way that gravity is a two-way street—matter warps spacetime and the shape of spacetime determines how matter moves—a consciousness field would imbue matter with another property, giving rise to the range of content experienced. Under this view, content is divisible, but consciousness isn’t. Therefore, consciousness is also not interacting with itself, as it would be in the act of “combining.” Considering consciousness to be fundamental allows for matter to have a specific internal character everywhere, in all of its various forms.

If consciousness is fundamental, then the questions that prompt the combination problem are potentially the same as all the other questions we might ask about spacetime in which we don’t anticipate this problem. All matter would entail consciousness, and complex systems, such as human brains, would give rise to certain types of content in those locations in spacetime. Even if each individual atom has its own experience, consciousness itself is not necessarily isolated. The matter might be isolated, and therefore the content associated with the consciousness at that location is isolated. But consciousness itself would not be said to be isolated. Again, we can think of consciousness as analogous to spacetime: How it’s affected by matter depends on the matter in question (its mass, in the case of spacetime). Similarly, a consciousness field might be “shaped” by matter in terms of experiential quality or content. And this line of thinking yields interesting questions.

In today’s AI news, Mukesh Ambani’s Reliance Industries is set to build the world’s largest data center in Jamnagar, Gujarat, according to a *Bloomberg News* report. The facility would dwarf the current largest data center, Microsoft’s 600-megawatt site in Virginia. The project could cost between $20 billion and $30 billion.

ByteDance, the Beijing-based company behind China’s most popular consumer-facing AI app, introduced its closed-source multimodal model Doubao 1.5 Pro, emphasizing a “resource-efficient” training approach that it said does not sacrifice performance.

And, OpenAI’s CEO Sam Altman announced that the free tier of ChatGPT will now use the o3-mini model, marking a significant shift in how the popular AI chatbot serves its user base. In the same tweet announcing the change, Altman revealed that paid subscribers to ChatGPT Plus and Pro plans will enjoy “tons of o3-mini usage,” giving people an incentive to move to a paid account with the company.

Then, researchers at Sakana AI, an AI research lab focusing on nature-inspired algorithms, have developed a self-adaptive language model that can learn new tasks without the need for fine-tuning. Called Transformer², the model uses mathematical tricks to align its weights with user requests during inference.

In videos, Demis Hassabis, CEO of Google DeepMind, joins the Big Technology Podcast with Alex Kantrowitz to discuss the cutting edge of AI and where the research is heading. In this conversation, they cover the path to artificial general intelligence, how long it will take to get there, how to build world models, and much more.

Then, join IBM’s Meredith Mante as she takes you on a deep dive into Lag Llama, an open-source foundation model, and shows you how to harness its power for time series forecasting. Learn how to load and preprocess data, train a model, and evaluate its performance, gaining a deeper understanding of how to leverage Lag Llama for accurate predictions.