AI designs new underwater gliders with shapes inspired by marine animals

Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having very different shapes. Their bodies are optimized for hydrodynamic efficiency, allowing them to expend minimal energy when traveling long distances.

Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life—the go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic. Plus, testing new builds requires lots of real-world trial-and-error.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin-Madison propose that AI could help us explore uncharted glider designs more conveniently. The research is published on the arXiv preprint server.

Playing games with robots makes people see them as more humanlike

The more we interact with robots, the more human we perceive them to become—according to new research from the University of East Anglia, published in the Journal of Experimental Psychology: Human Perception and Performance.

It may sound like a scene from Blade Runner, but psychologists have been investigating exactly what makes interactions feel more human.

The paper reveals that playing games with robots to “break the ice” can help bring out their human side.

Quantum machine learning improves semiconductor manufacturing for first time

Semiconductor processing is notoriously challenging. It is one of the most intricate feats of modern engineering, owing to the extreme precision required and the hundreds of steps, such as etching and layering, needed to make even a single chip.

AI and biophysics unite to forecast high-risk viral variants before outbreaks

When the first reports of a new COVID-19 variant emerge, scientists worldwide scramble to answer a critical question: Will this new strain be more contagious or more severe than its predecessors? By the time answers arrive, it’s frequently too late to inform immediate public policy decisions or adjust vaccine strategies, costing public health officials valuable time, effort, and resources.

In a pair of recent publications in Proceedings of the National Academy of Sciences, a research team in the Department of Chemistry and Chemical Biology combined biophysics with artificial intelligence to identify high-risk viral variants in record time—offering a transformative approach for handling pandemics. Their goal: to get ahead of a virus by forecasting its evolutionary leaps before it threatens public health.

“As a society, we are often very unprepared for the emergence of new viruses and pandemics, so our lab has been working on ways to be more proactive,” said senior author Eugene Shakhnovich, Roy G. Gordon Professor of Chemistry. “We used fundamental principles of physics and chemistry to develop a multiscale model to predict the course of evolution of a particular variant and to predict which variants will become dominant in populations.”

OpenAI co-founder Sutskever sets up new AI company devoted to ‘safe superintelligence’

(AP) — Ilya Sutskever, one of the founders of OpenAI who was involved in a failed effort to push out CEO Sam Altman, said he’s starting a safety-focused artificial intelligence company.

Sutskever, a respected AI researcher who left the ChatGPT maker last month, said in a social media post Wednesday that he’s created Safe Superintelligence Inc. with two co-founders. The company’s only goal and focus is safely developing “superintelligence” — a reference to AI systems that are smarter than humans.

The company vowed not to be distracted by “management overhead or product cycles,” and under its business model, work on safety and security would be “insulated from short-term commercial pressures,” Sutskever and his co-founders Daniel Gross and Daniel Levy said in a prepared statement.

AI helps discover optimal new material for removing radioactive iodine contamination

Managing radioactive waste is one of the core challenges in the use of nuclear energy. In particular, radioactive iodine poses serious environmental and health risks due to its long half-life (15.7 million years in the case of I-129), high mobility, and toxicity to living organisms.

A Korean research team has successfully used artificial intelligence to discover a new material that can remove iodine for nuclear environmental remediation. The team plans to push forward with commercialization through various industry–academia collaborations, from iodine-adsorbing powders to contaminated water treatment filters.

Professor Ho Jin Ryu’s research team from the Department of Nuclear and Quantum Engineering, in collaboration with Dr. Juhwan Noh of the Digital Chemistry Research Center at the Korea Research Institute of Chemical Technology, developed a technique using AI to discover new materials that effectively remove contaminants. Their research is published in the Journal of Hazardous Materials.

Senate Votes to Allow State A.I. Laws, a Blow to Tech Companies

There are no federal laws regulating A.I., but states have enacted dozens of laws that strengthen consumer privacy, ban A.I.-generated child sexual abuse material and outlaw deepfake videos of political candidates. All but a handful of states have some laws regulating artificial intelligence in place. It is an area of deep interest: all 50 have introduced bills tied to the issue in the past year.

The provision, introduced in the Senate by Senator Ted Cruz, Republican of Texas, sparked intense criticism from state attorneys general, child safety groups and consumer advocates, who warned the amendment would give A.I. companies a clear runway to develop unproven and potentially dangerous technologies.

Could Google’s Veo 3 be the start of playable world models?

Demis Hassabis, CEO of Google’s AI research organization DeepMind, appeared to suggest Tuesday evening that Veo 3, Google’s latest video-generating model, could potentially be used for video games.

In response to a post on X beseeching Google to “Let me play a video game of my veo 3 videos already,” and asking, “playable world models wen?” Hassabis responded, “now wouldn’t that be something.”

On Wednesday morning, Logan Kilpatrick, lead product for Google’s AI Studio and Gemini API, chimed in with a reply: “🤐🤐🤐🤐”