
Breaking Barriers in Surface Chemistry: The autoSKZCAM Framework for Ionic Materials

Understanding and predicting chemical reactions on surfaces lies at the heart of modern materials science. From heterogeneous catalysis to energy storage and greenhouse gas sequestration, surface chemistry defines the efficiency and viability of advanced technologies. Yet, computationally modeling these processes with both accuracy and efficiency has been a grand challenge.

A recent study published in Nature Chemistry introduces a breakthrough: the autoSKZCAM framework, an automated and open-source method that applies correlated wavefunction theory (cWFT) to surfaces of ionic materials at costs comparable to density functional theory (DFT). This achievement not only bridges the accuracy gap but also enables routine, large-scale studies of surface processes with chemical accuracy.
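To make the target quantity concrete, the central number such frameworks converge is an adsorption (or reaction) energy on the surface. The sketch below does not use the autoSKZCAM API; it simply assembles an adsorption energy from three single-point calculations with ASE, and the Cu slab, CO molecule, and toy EMT calculator are placeholders standing in for the ionic substrates and cWFT/DFT backends discussed in the paper.

```python
# Minimal sketch (not the autoSKZCAM API): the quantity that frameworks like
# autoSKZCAM aim to converge with chemical accuracy is
#   E_ads = E(surface + molecule) - E(surface) - E(molecule)
# ASE's toy EMT calculator stands in here for a real cWFT or DFT backend.
from ase.build import fcc100, add_adsorbate, molecule
from ase.calculators.emt import EMT

def energy(atoms):
    atoms.calc = EMT()   # placeholder for a real electronic-structure method
    return atoms.get_potential_energy()

surface = fcc100("Cu", size=(2, 2, 3), vacuum=10.0)   # stand-in slab (the paper targets ionic surfaces)
co = molecule("CO")

combined = surface.copy()
add_adsorbate(combined, co, height=2.0, position="ontop")

e_ads = energy(combined) - energy(surface) - energy(co)
print(f"Adsorption energy (toy model): {e_ads:.3f} eV")
```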

Shape-changing soft material for soft robotics, smart textiles and more

Harvard researchers developed liquid crystal elastomers that can switch between multiple shapes — chevrons, flat layers, and coils — in response to heat.

By aligning molecules in different directions, the material can be programmed to morph into domes, saddles, or fin-like motions inspired by stingrays and jellyfish.

The shape-shifting material could advance applications in soft robotics, biomedical devices, and smart textiles.

Liquid crystal elastomers are a class of soft materials that can change shape in response to stimuli such as light or heat — making them promising for applications in soft robotics, wearable and biomedical devices, smart textiles and more. But designing compositionally uniform elastomers that can change into different shapes in response to just one stimulus has been challenging and has limited the application of these potentially powerful materials.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a way to program liquid crystal elastomers with the ability to deform in opposite directions just by heating — opening up a range of applications.

The research was published in Science.


A Wearable Robot That Learns

Having lived with an ALS diagnosis since 2018, Kate Nycz can tell you firsthand what it’s like to slowly lose motor function for basic tasks. “My arm can get to maybe 90 degrees, but then it fatigues and falls,” the 39-year-old said. “To eat or do a repetitive motion with my right hand, which was my dominant hand, is difficult. I’ve mainly become left-handed.”

People like Nycz who live with a neurodegenerative disease like ALS or who have had a stroke often suffer from impaired movement of the shoulder, arm or hands, preventing them from carrying out daily tasks like brushing their teeth, combing their hair or eating.

For the last several years, Harvard bioengineers have been developing a soft, wearable robot that not only provides movement assistance for such individuals but could even augment therapies to help them regain mobility.

But no two people move exactly the same way. Physical motions are highly individualized, especially for the mobility-impaired, making it difficult to design a device that works for many different people.

It turns out advances in machine learning can add that personal touch. Researchers at the John A. Paulson School of Engineering and Applied Sciences (SEAS), together with physician-scientists at Massachusetts General Hospital and Harvard Medical School, have upgraded their wearable robot to respond to an individual user’s exact movements, giving the device more personalized assistance and offering users better, more controlled support for daily tasks.


Disordered-guiding photonic chip enabled high-dimensional light field detection

Intensity, polarization, and spectrum of light, as distinct dimensional characteristics, provide a comprehensive understanding of light-matter interaction and are crucial across nearly all domains of optical science and technology [1,2,3,4]. For instance, polarization information [5] is critical for determining material composition and surface texture, whereas spectral analysis is instrumental in medical diagnosis and wavelength-division optical communication [6]. As modern technology rapidly advances, the demand for comprehensive detection of high-dimensional light fields continues to grow [7,8].

Conventional detection devices typically measure either the spectrum or the polarization of input light, sacrificing the valuable information from other dimensions. A common solution is to incorporate multiple discrete diffraction elements and optical filters to separately distinguish light with different polarizations and wavelengths [9,10,11,12]. However, this leads to bulky and time-consuming systems. Recently, several integrated high-dimensional detectors based on optical metasurfaces [13] have been proposed; the typical representative relies on mapping different dimensional information onto distinct locations, using position and intensity distributions for light field detection [14,15]. However, as the number of detection parameters increases, the signal crosstalk between different kinds of information at different spatial locations becomes pronounced [16,17,18]. Another type of detector, based on computational reconstruction, maps the light field into a series of outputs, encoding the entire high-dimensional information rather than isolating individual dimensions [19,20,21,22]. Nevertheless, these systems are generally restricted to a few values with low resolution in each dimension, such as limited polarization and wavelength channels, owing to the limited internal degrees of freedom of the encoding devices [23,24]. Additionally, most of them rely on commercial cameras, inevitably requiring numerous detector arrays [25]. Consequently, achieving fully high-dimensional characterization of arbitrary complex light fields with a compact and efficient system remains challenging.
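To make the computational-reconstruction idea concrete, a minimal sketch follows: the unknown light field is related to a small set of detector readouts through a calibrated response matrix, and a regularized least-squares inversion recovers an estimate of the field. The channel counts, random response matrix, and synthetic spectrum below are purely illustrative assumptions, not values from any cited work.

```python
# Illustrative sketch of computational reconstruction (not any specific device):
# each detector channel i reads  y_i = sum_j A_ij * s_j,  where s is the unknown
# spectrum and A is a calibrated response matrix. With fewer channels than
# spectral bins, a regularized least-squares inversion estimates s.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_channels = 200, 32          # spectral bins vs. detector channels (assumed numbers)

A = rng.random((n_channels, n_bins))  # stand-in for a measured/calibrated response matrix
true_spectrum = np.exp(-0.5 * ((np.arange(n_bins) - 80) / 6.0) ** 2)  # synthetic input
readout = A @ true_spectrum + 1e-3 * rng.standard_normal(n_channels)  # noisy measurement

# Tikhonov-regularized inversion:  s_hat = argmin ||A s - y||^2 + lam * ||s||^2
lam = 1e-2
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bins), A.T @ readout)

print("relative reconstruction error:",
      np.linalg.norm(s_hat - true_spectrum) / np.linalg.norm(true_spectrum))
```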

In this work, we propose and demonstrate an on-chip high-dimensional detection system capable of characterizing a broadband spectrum along with arbitrarily varying full-Stokes polarization in a single-shot measurement. The high-dimensional input is encoded into multi-channel intensities by the uniquely designed disordered-guiding chip and decoded by a multilayer perceptron (MLP) neural network (Fig. 1). The core disordered region introduces complex interference between two separate orthogonal polarization components and multiple scattering to enhance the dispersion effect, enabling rich polarization and spectral responses. Meanwhile, the surrounding guiding region, based on inverse design, directs the input light to the on-chip photodetectors (PDs), improving the transmittance and detection efficiency. With the assistance of the neural network for decoding, we achieve reconstruction of full-Stokes polarization and a broadband spectrum from a single measurement. The system achieves a spectral sensitivity of 400 pm, with an average spectral error of 0.083 and a polarization error of 1.2°. Furthermore, we demonstrate a high-dimensional imaging system, exhibiting superior imaging and recognition capabilities compared to conventional single-dimensional detectors. This demonstration holds promising potential for future imaging and sensing applications.
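As an illustration of the decoding stage only, the sketch below defines an MLP that regresses the four Stokes parameters and a discretized spectrum from a vector of photodetector intensities. The layer widths, channel count, and spectral resolution are assumptions chosen for demonstration; they are not the network or chip parameters used in the paper, where the model would be trained on calibrated measurement data.

```python
# Hedged sketch of the decoding stage: an MLP regressing full-Stokes parameters
# and a spectrum from multi-channel PD intensities. Layer sizes and channel
# counts are assumptions, not the paper's values.
import torch
import torch.nn as nn

N_CHANNELS = 16        # on-chip photodetector readouts (assumed)
N_SPECTRAL_BINS = 100  # reconstructed spectral points (assumed)

class HighDimDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(N_CHANNELS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.stokes_head = nn.Linear(256, 4)                # S0..S3
        self.spectrum_head = nn.Linear(256, N_SPECTRAL_BINS)

    def forward(self, intensities):
        h = self.backbone(intensities)
        return self.stokes_head(h), self.spectrum_head(h)

model = HighDimDecoder()
readout = torch.rand(8, N_CHANNELS)          # a batch of single-shot measurements
stokes, spectrum = model(readout)
print(stokes.shape, spectrum.shape)          # torch.Size([8, 4]) torch.Size([8, 100])
```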

Machine Learning Interatomic Potentials in Computational Materials

Machine learning interatomic potentials (MLIPs) have become an essential tool for enabling long-timescale simulations of materials and molecules with unprecedented accuracy. The aim of this collection is to showcase cutting-edge developments in MLIP architectures, data generation techniques, and innovative sampling methods that push the boundaries of accuracy, efficiency, and applicability in atomic-scale simulations.
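In practice, an MLIP is usually consumed as a drop-in replacement for an ab initio calculator inside a molecular dynamics loop. The sketch below shows that usage pattern with ASE; the classical Lennard-Jones calculator is only a stand-in for a trained MLIP model, and the argon system and MD settings are illustrative assumptions.

```python
# Usage-pattern sketch: an MLIP is typically exposed as an ASE-style calculator
# and dropped into an MD loop in place of DFT. The Lennard-Jones calculator
# below is a placeholder for a trained MLIP model.
from ase.build import bulk
from ase.calculators.lj import LennardJones
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.verlet import VelocityVerlet
from ase import units

atoms = bulk("Ar", "fcc", a=5.26, cubic=True).repeat((3, 3, 3))
atoms.calc = LennardJones(sigma=3.4, epsilon=0.0104)   # swap in an MLIP calculator here

MaxwellBoltzmannDistribution(atoms, temperature_K=80)  # initialize velocities
dyn = VelocityVerlet(atoms, timestep=2.0 * units.fs)

for step in range(5):
    dyn.run(10)  # advance 10 MD steps per iteration
    print(f"step {10 * (step + 1)}: E_pot = {atoms.get_potential_energy():.3f} eV")
```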
