
The OpenDAC project is a collaborative research project between Fundamental AI Research (FAIR) at Meta and Georgia Tech, aimed at significantly reducing the cost of Direct Air Capture (DAC).

Direct Air Capture (DAC) involves directly capturing carbon dioxide from the atmosphere and has been widely recognized as a crucial tool in combating climate change. Despite its potential, the broad implementation of DAC has been impeded by high capture costs. Central to overcoming this hurdle is the discovery of novel sorbents — materials that pull carbon dioxide from the air. Discovering new sorbents holds the key to reducing capture costs and scaling DAC to meaningfully impact global carbon emissions.

The DAC field is growing rapidly, with many companies entering the market. To engage the broader research community as well as the budding DAC industry, we have released the OpenDAC 2023 (ODAC23) dataset for training ML models. ODAC23 contains nearly 40M DFT calculations from 170K DFT relaxations involving metal-organic frameworks (MOFs) with carbon dioxide and water adsorbates. We have also released baseline ML models trained on this dataset.

Large language models (LLMs), deep learning-based models trained to generate, summarize, translate and process written text, have gained significant attention since the release of OpenAI’s conversational platform ChatGPT. While ChatGPT and similar platforms are now used for a wide range of applications, they can be vulnerable to a specific type of cyberattack that produces biased, unreliable or even offensive responses.

Researchers at Hong Kong University of Science and Technology, University of Science and Technology of China, Tsinghua University and Microsoft Research Asia recently carried out a study investigating the potential impact of these attacks and techniques that could protect models against them. Their paper, published in Nature Machine Intelligence, introduces a new psychology-inspired technique that could help to protect ChatGPT and similar LLM-based conversational platforms from cyberattacks.

“ChatGPT is a societally impactful artificial intelligence tool with millions of users and integration into products such as Bing,” Yueqi Xie, Jingwei Yi and their colleagues write in their paper. “However, the emergence of attacks notably threatens its responsible and secure use. Jailbreak attacks use adversarial prompts to bypass ChatGPT’s ethics safeguards and engender harmful responses.”

A Northwestern University team has demonstrated a remarkable new way to generate electricity, with a paperback-sized device that nestles in soil and harvests power created as microbes break down dirt – for as long as there’s carbon in the soil.

Microbial fuel cells, as they’re called, have been around for more than 100 years. They work a little like a battery, with an anode, cathode and electrolyte – but rather than drawing electricity from chemical sources, they work with bacteria that naturally donate electrons to nearby conductors as they chow down on soil.

The issue thus far has been keeping them supplied with water and oxygen while buried in the dirt. “Although MFCs have existed as a concept for more than a century, their unreliable performance and low output power have stymied efforts to make practical use of them, especially in low-moisture conditions,” said Northwestern alumnus and project lead Bill Yen.

Backpropagation is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years.

Beyond its use in deep learning, backpropagation is a powerful computational tool in many other areas, ranging from weather forecasting to analyzing numerical stability – it just goes by different names. In fact, the algorithm has been reinvented at least dozens of times in different fields (see Griewank (2010)). The general, application-independent name is “reverse-mode differentiation.”

Fundamentally, it’s a technique for calculating derivatives quickly. And it’s an essential trick to have in your bag, not only in deep learning, but in a wide variety of numerical computing situations.
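To make the idea concrete, here is a minimal sketch of reverse-mode differentiation in Python. The toy `Var` class and its methods are illustrative assumptions, not any particular library's API: each operation records its local derivatives, and a single reverse sweep over the expression graph yields the derivative of the output with respect to every input at once.

```python
# Minimal reverse-mode differentiation sketch (the general form of
# backpropagation). `Var` is a hypothetical toy class for illustration.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_var, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph, then propagate gradients from
        # the output back to every input in one reverse sweep.
        order, seen = [], set()

        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)

        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += node.grad * local

# Example: f(x, y) = (x + y) * x, so df/dx = 2x + y and df/dy = x.
x, y = Var(3.0), Var(2.0)
f = (x + y) * x
f.backward()
print(x.grad, y.grad)  # 8.0 3.0
```

The key point is that one backward pass computes all the partial derivatives; a naive forward-mode approach would need a separate pass per input, which is what makes reverse mode so much cheaper for functions with many inputs and one output, such as a neural network's loss.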

Can the intrinsic physics of multicomponent systems show neural-network-like computation? A new study shows how molecules draw on the rules of physics to perform computations similar to those of neural networks:

Examination of nucleation during self-assembly of multicomponent structures illustrates how ubiquitous molecular phenomena inherently classify high-dimensional patterns of concentrations in a manner similar to neural network computation.

North Korea claimed to have launched a new solid-fuel, intermediate-range missile with a hypersonic warhead, aiming to test its reliability and maneuverability. The missile, designed to strike U.S. military bases in Guam and Japan, flew approximately 620 miles before landing between the Korean Peninsula and Japan.

They’re two great tastes that taste great together. Or rather, they’re two technologies that, put together in collaborative ways, are becoming much more powerful!

Marvin Minsky famously said that the brain is not one computer but several hundred computers working in tandem. If that’s true, ChatGPT’s cognitive power just got a boost: a new Wolfram Alpha plug-in allows the two systems to send and receive natural language input, so ChatGPT can draw on a system of symbolic computation that was pioneered long before we could simply ask a computer to write an essay.

We heard early this year that teams were working on this merger, and it has drawn considerable interest from the AI community. Now it has come to fruition.