
Researchers engineer a material that can perform different tasks depending on temperature

Researchers report that they have developed a new composite material designed to change its behavior depending on temperature in order to perform specific tasks. Materials like these are poised to be part of the next generation of autonomous robotics that will interact with the environment.

The new study, conducted by University of Illinois Urbana-Champaign civil and environmental engineering professor Shelly Zhang and graduate student Weichen Li, in collaboration with professor Tian Chen and graduate student Yue Wang from the University of Houston, uses computer algorithms, two distinct polymers, and 3D printing to reverse engineer a material that expands and contracts in response to temperature change, with or without human intervention.
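
A first-order intuition for why combining two polymers gives a tunable thermal response can be had from a textbook rule-of-mixtures estimate. Below is a minimal Python sketch of that estimate; the stiffness and expansion values are illustrative assumptions, and this is emphatically not the optimization method the researchers used.

```python
# Toy forward model: effective linear coefficient of thermal expansion
# (CTE) of two polymers bonded in parallel (iso-strain), weighted by
# stiffness and volume fraction. Illustrative values only; not the
# study's algorithm or its measured material properties.

def effective_cte_parallel(f_soft, cte_soft, cte_stiff, e_soft, e_stiff):
    """Voigt-type rule of mixtures for the CTE of a parallel composite."""
    f_stiff = 1.0 - f_soft
    num = f_soft * e_soft * cte_soft + f_stiff * e_stiff * cte_stiff
    den = f_soft * e_soft + f_stiff * e_stiff
    return num / den

# Assumed illustrative properties (CTE in 1/K, modulus in GPa):
CTE_SOFT, E_SOFT = 200e-6, 0.01   # rubbery polymer: compliant, expands a lot
CTE_STIFF, E_STIFF = 60e-6, 2.0   # glassy polymer: stiff, expands little

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    cte = effective_cte_parallel(f, CTE_SOFT, CTE_STIFF, E_SOFT, E_STIFF)
    strain = cte * 40.0  # strain produced by a 40 K temperature rise
    print(f"soft fraction {f:.2f}: CTE = {cte*1e6:7.1f} ppm/K, "
          f"strain at +40 K = {strain*100:.3f}%")
```

Because the stiff polymer dominates the load path, sweeping the soft fraction moves the composite's response across a wide range, which hints at why composition and printed geometry can be optimized toward a target behavior.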

The study findings are reported in the journal Science Advances.

OpenAI CEO Sam Altman Says His Company Is Now Building GPT-5

At an MIT event in March, OpenAI cofounder and CEO Sam Altman said his team wasn’t yet training its next AI, GPT-5. “We are not and won’t for some time,” he told the audience.

This week, however, new details about GPT-5’s status emerged.

In an interview, Altman told the Financial Times the company is now working to develop GPT-5. Though the article did not specify whether the model is in training—it likely isn’t—Altman did say it would need more data. The data would come from public online sources—which is how such algorithms, called large language models, have previously been trained—and proprietary private datasets.

With AI chatbots, will Elon Musk and the ultra-rich replace the masses?

Elon Musk is hyping the imminent release of his ChatGPT competitor Grok, yet another example of how his entire personality is itself a biological LLM made by ingesting all of Reddit and 4chan. Grok already seems patterned in many ways on the worst of Elon’s indulgences, with the sense of humor of a desperately unfunny and regressive internet troll, and biases informed by a man whose horrible, dangerous biases are fully invisible to himself.

There are all kinds of reasons to be wary of Grok, including the standard reasons to be wary of any current LLM-based AI technology, like hallucinations and inaccuracies. Layer on Elon Musk’s recent track record of disastrous social insensitivity and a generally harmful approach to world-shaping issues, and we’re already looking at even more reason for concern. But the real risk probably isn’t so easy to grok yet, simply because we still have little understanding of how much impact widespread use of LLMs across our daily and online lives will have.

One key area where they’re already having an impact, and are bound to have much more of one, is user-generated content. We’ve seen companies already deploying first-party integrations that embrace some of these uses, like Artifact with its AI-generated thumbnails for shared posts, and Meta adding chatbots to basically everything. Musk is debuting Grok on X as a feature initially reserved for Premium+ subscribers, with a rollout supposedly beginning this week.

Contrary to reports, OpenAI probably isn’t building humanity-threatening AI

Has OpenAI invented an AI technology with the potential to “threaten humanity”? From some of the recent headlines, you might be inclined to think so.

Reuters and The Information first reported last week that several OpenAI staff members had, in a letter to the AI startup’s board of directors, flagged the “prowess” and “potential danger” of an internal research project known as “Q*.” This AI project, according to the reporting, could solve certain math problems — albeit only at grade-school level — but had in the researchers’ opinion a chance of building toward an elusive technical breakthrough.

There’s now debate as to whether OpenAI’s board ever received such a letter; The Verge cites a source suggesting that it didn’t. But framing aside, Q* might not actually be as monumental or threatening as it sounds. It might not even be new.

Tech Leaders Collaborate On Generative AI For Accelerated Chip Design

Artificial intelligence has been steadily infused into various parts of the Synopsys EDA tool suite over the last few years. What started in 2021 with DSO.ai, a tool created to accelerate, enhance, and reduce the costs associated with the place-and-route stage of semiconductor design (sometimes called PnR or floor planning), has been expanded across the company’s complete toolchain, the Synopsys.ai suite, which covers virtually the entire chip development workflow. Despite its numerous successes in recent years, however, Synopsys isn’t done innovating. Last week the company announced a strategic collaboration with Microsoft to deliver what it is calling the Synopsys.ai Copilot, a contextually aware generative AI (GenAI) tool that assists human design teams via conversational intelligence using natural language.

If you missed the previous announcement, the Synopsys.ai Copilot is powered by OpenAI technology running on Microsoft’s Azure on-demand, high-performance cloud infrastructure. Synopsys.ai Copilot is meant to alleviate much of the grunt work required of engineers during RTL generation and verification, similar to the way ChatGPT’s conversational AI capabilities have brought productivity improvements to numerous other industries. Chip designers can effectively ask the Synopsys.ai Copilot questions in plain English to gain insights into results, produce documentation, or look up any of a myriad of other details. Today Synopsys revealed that AMD, Intel, and Microsoft are already working with the Synopsys.ai GenAI capabilities on various designs.

The generative AI of the Synopsys.ai Copilot enables a number of new capabilities. Its collaborative tools can offer engineers guidance on everything from design tools to EDA workflows, and it can provide quick analysis of results. The Synopsys.ai Copilot can also expedite development of RTL (register-transfer level abstraction) code, formal verification assertions, UVM testbenches, and layout designs. Synopsys.ai Copilot will also enable end-to-end workflow creation using natural language across the Synopsys.ai suite.
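
Synopsys has not published a programmatic interface for the Copilot, so the following Python sketch is purely a hypothetical mock of the interaction pattern described above: a plain-English request answered with generated RTL and a matching assertion. Every name in it (ask_copilot, the template table, the Verilog itself) is an illustrative assumption, not Synopsys’s API.

```python
# Hypothetical mock of a conversational RTL assistant. A real tool would
# call a large language model; this sketch just matches canned templates
# to show the request -> RTL + assertion pattern the article describes.

TEMPLATES = {
    "8-bit counter": (
        # Generated RTL (Verilog source, held as a string):
        "module counter8(input clk, input rst_n, output reg [7:0] q);\n"
        "  always @(posedge clk or negedge rst_n)\n"
        "    if (!rst_n) q <= 8'd0;\n"
        "    else        q <= q + 8'd1;\n"
        "endmodule\n",
        # A matching SystemVerilog-style assertion:
        "assert property (@(posedge clk) disable iff (!rst_n) "
        "q == $past(q) + 8'd1);",
    ),
}

def ask_copilot(request: str) -> tuple:
    """Return (rtl, assertion) for a plain-English request."""
    for key, pair in TEMPLATES.items():
        if key in request.lower():
            return pair
    raise ValueError(f"no template for request: {request!r}")

rtl, sva = ask_copilot("Write an 8-bit counter with active-low reset")
print(rtl)
print("// suggested assertion:")
print(sva)
```

A production assistant would generate code rather than retrieve it, but the shape of the exchange is the point: natural language in, reviewable RTL and checks out.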

Men 2X More Likely To Use Generative AI Than Women: Report

OpenAI’s ChatGPT gets 60 times the traffic of Google’s conversational generative AI engine Bard and boasts industry-leading 30-minute session times.

I asked AI why that might be, and one potential reason ChatGPT gave me for the disparity is the historically higher representation of men in the tech sector, the likely source of most early adopters. When I also asked on Twitter/X why the disparity might exist, one person noted that AI tools like Alexa, Siri, and Cortana have historically had female names, playing into cultural stereotypes about women helping men. The modern generative AI tools, of course, have names like ChatGPT or Bard or Perplexity or Midjourney. So perhaps that is changing.

Another perfectly valid reason one woman cited: perhaps they just don’t want to.

“To get the AI to work, you have to think like it and do the prompts like it wants,” says author and artist Catherine Fitzpatrick. “Tiresome.”

Hyperwar Ascendant: The Global Race For Autonomous Military Supremacy

In my 2015 exploration of the concept of Hyperwar with General John R. Allen, we recognized the potential of artificial intelligence to irrevocably change the field of battle. Chief among the examples of autonomous systems were drone swarms, which are both a significant threat and a critical military capability. Today, Hyperwar seems to be the operative paradigm, accepted by militaries the world over as a de facto reality. Indeed, the observe-orient-decide-act (OODA) loop is collapsing. Greater autonomy is being built into all manner of weapon systems and sensors. Work is ongoing to develop systems that further decrease reaction times and increase the mass of autonomous systems employed in conflict. This trend is potently illustrated by the U.S. Replicator initiative and China’s swift advancements in automated manufacturing and missile technologies.

The U.S. Replicator Initiative: A Commitment to Autonomous Warfare?

The Pentagon’s “Replicator” initiative is a strategic move to counter adversaries like China by rapidly producing “attritable autonomous systems” across multiple domains. Deputy Secretary of Defense Kathleen Hicks emphasized the need for platforms that are “small, smart, cheap, and many,” with plans to field thousands of such systems within 18 to 24 months. Under this initiative, the Department of Defense is developing smaller, more intelligent, and more cost-effective platforms, a move that aligns with the creation of a Hyperwar environment.