
Citation tool offers a new approach to trustworthy AI-generated content

Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But when it comes to trusting the content such models generate, how can we really know whether a particular statement is factual, a hallucination, or just a plain misunderstanding?

In many cases, AI systems gather external information to use as context when answering a particular query. For example, to answer a question about a medical condition, the system might reference recent research papers on the topic. Even with this relevant context, models can make mistakes with what feels like unwarranted confidence. When a model errs, how can we trace the erroneous statement back to the specific piece of context it relied on, or determine whether the context was lacking altogether?

To help tackle this obstacle, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers created ContextCite, a tool that can identify the parts of external context used to generate any particular statement, improving trust by helping users easily verify the statement.


The ContextCite tool from MIT CSAIL can find the parts of external context that a language model used to generate a statement. Users can easily verify the model’s response, making the tool useful in fields like health care, law, and education.
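The core idea of attributing a generated statement to pieces of the context can be illustrated with a minimal leave-one-out sketch. Note this is a simplification of my own, not the actual ContextCite method (which, per the source, identifies the relevant parts of context more cleverly than brute-force ablation); the `overlap_score` function below is a toy stand-in for a real model's log-probability of the statement given the context.

```python
import re

def overlap_score(statement, sentences):
    """Toy stand-in for a model's log-probability of the statement
    given the context: counts words shared with the context."""
    words = set(re.findall(r"[a-z]+", statement.lower()))
    ctx = set(w for s in sentences for w in re.findall(r"[a-z]+", s.lower()))
    return len(words & ctx)

def attribute(statement, context, score=overlap_score):
    """Leave-one-out attribution: how much does removing each
    context sentence reduce the score of the statement?"""
    full = score(statement, context)
    return [full - score(statement, context[:i] + context[i + 1:])
            for i in range(len(context))]

context = [
    "Aspirin can reduce the risk of heart attack in some patients.",
    "The study enrolled 500 participants over two years.",
    "Participants reported few side effects overall.",
]
statement = "Aspirin can reduce heart attack risk."
scores = attribute(statement, context)
print(scores)  # the first context sentence receives all of the credit
```

A user verifying the statement would then read only the highest-scoring sentence rather than the whole context, which is the workflow the caption above describes.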

A new way to create realistic 3D shapes using generative AI

Creating realistic 3D models for applications like virtual reality, filmmaking, and engineering design can be a cumbersome process requiring lots of manual trial and error.

While generative artificial intelligence models for images can streamline artistic processes by enabling creators to produce lifelike 2D images from text prompts, these models are not designed to generate 3D shapes. To bridge the gap, a recently developed technique called Score Distillation leverages 2D image generation models to create 3D shapes, but its output often ends up blurry or cartoonish.

MIT researchers explored the relationships and differences between the algorithms used to generate 2D images and 3D shapes, identifying the root cause of lower-quality 3D models. From there, they crafted a simple fix to Score Distillation, which enables the generation of sharp, high-quality 3D shapes that are closer in quality to the best model-generated 2D images.
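The Score Distillation update at the heart of this line of work can be sketched in a toy setting. The sketch below is my own illustration, not the MIT fix described above: the "3D shape" is just a 2-D parameter vector, the "rendering" is the identity, and the denoiser is exact for a point-mass image distribution at `mu`. What it shows is the characteristic update rule, where the parameter is pushed by `w(t) * (eps_hat - eps)`, the gap between the predicted and the injected noise.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, 0.5])      # mode of the (toy) 2D image distribution
theta = np.array([2.0, -1.0])  # stands in for differentiable 3D shape params

def denoise(x_t, alpha, sigma):
    # Exact noise prediction for a point-mass data distribution at mu:
    # x_t = alpha * x0 + sigma * eps  =>  eps_hat = (x_t - alpha * mu) / sigma
    return (x_t - alpha * mu) / sigma

for _ in range(500):
    t = rng.uniform(0.02, 0.98)
    alpha, sigma = np.sqrt(1 - t), np.sqrt(t)
    eps = rng.standard_normal(2)
    x_t = alpha * theta + sigma * eps    # noised "rendering" of theta
    eps_hat = denoise(x_t, alpha, sigma)
    grad = sigma**2 * (eps_hat - eps)    # score-distillation gradient, w(t) = sigma^2
    theta -= 0.05 * grad

print(theta)  # converges toward mu
```

With an exact denoiser the injected noise cancels and the parameter converges cleanly; with a real, imperfect 2D diffusion model the averaged updates blur detail, which is consistent with the blurry or cartoonish outputs the article attributes to plain Score Distillation.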


A new AI method enables the generation of sharp, high-quality 3D shapes that are closer to the quality of the best 2D image models. Previous approaches typically generated blurry or cartoonish 3D shapes.

Training all-mechanical neural networks for task learning through in situ backpropagation

Another well-known method for physical learning is Equilibrium Propagation (EP), which follows a procedure similar to coupled learning and allows an arbitrary differentiable loss function to be defined [32]. This method has been demonstrated in various physical systems: numerically in nonlinear resistor networks [33] and coupled phase oscillators [34], and experimentally on Ising machines [35].

So far, MNNs based on physical learning have been developed on platforms of origami structures [28,36] and disordered networks [29,37] to demonstrate machine learning through simulations. Experimental proposals involve using directed springs with variable stiffness [38] or manually adjusting the rest lengths of springs [31].

Here, we present a highly efficient training protocol for MNNs through a mechanical analogue of in situ backpropagation, derived from the adjoint variable method, in which the exact gradient can in theory be obtained from local information alone. Using 3D-printed MNNs, we demonstrate that the gradient of the loss function can be obtained experimentally, with high accuracy, solely from the bond elongations of the MNN in only two steps using local rules. Leveraging the obtained gradient, we then showcase successful training of a mechanical network in simulations for behavior learning and various machine learning tasks, achieving high accuracy in both regression and Iris flower classification. The trained MNNs are validated both numerically and experimentally. In addition, we illustrate the retrainability of MNNs after task switching and damage, a feature that may inspire further inquiry into more robust and resilient MNN designs.
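The adjoint-variable idea behind this "gradient from bond elongations" claim can be checked numerically on a minimal system. The sketch below is my own toy (a 1-D chain of linear springs, not the authors' 3D-printed networks): one forward equilibrium solve and one adjoint solve give the gradient of the loss with respect to each stiffness as minus the product of that spring's physical elongation and its adjoint elongation, which is exactly a local, per-bond quantity.

```python
import numpy as np

n = 5                               # free nodes; node 0 of the chain is fixed
rng = np.random.default_rng(1)
k = rng.uniform(1.0, 2.0, n)        # learnable spring stiffnesses
f = np.zeros(n); f[-1] = 1.0        # unit load on the chain's free end
target = 2.0                        # desired displacement of the last node

# Incidence vectors: spring i connects node i-1 to node i, so its
# elongation is B[i] @ u for displacements u of the free nodes.
B = np.eye(n)
for i in range(1, n):
    B[i, i - 1] = -1.0

def grad_via_adjoint(k):
    K = sum(k[i] * np.outer(B[i], B[i]) for i in range(n))  # stiffness matrix
    u = np.linalg.solve(K, f)              # forward pass: equilibrium state
    dLdu = np.zeros(n); dLdu[-1] = 2 * (u[-1] - target)     # L = (u_n - target)^2
    lam = np.linalg.solve(K, dLdu)         # adjoint pass (K is symmetric)
    # Per spring: gradient = -(adjoint elongation) * (physical elongation)
    return u, np.array([-(B[i] @ lam) * (B[i] @ u) for i in range(n)])

u, g = grad_via_adjoint(k)
print(g)  # agrees with a finite-difference check
```

The two linear solves play the role of the paper's two experimental steps; in the physical network the elongations `B[i] @ u` and `B[i] @ lam` would be measured directly rather than computed.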

AI Supercharging Crop Breeding to Protect Farmers from Climate Change

Avalo, a crop development company based in North Carolina, is using machine learning models to accelerate the creation of new and resilient crop varieties.

The traditional way to select for favorable traits in crops is to identify individual plants that exhibit the trait – such as drought resistance – and use those plants to pollinate others, before planting those seeds in fields to see how they perform. But that process requires growing a plant through its entire life cycle to see the result, which can take many years.

Avalo uses an algorithm to identify the genetic basis of complex traits like drought or pest resistance in hundreds of crop varieties. Plants are cross-pollinated in the conventional way, but the algorithm can predict the performance of a seed without the need to grow it, speeding up the process by as much as 70%, according to Avalo chief technology officer Mariano Alvarez.
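The article doesn't describe Avalo's models, but "predict the performance of a seed without growing it" is the classic genomic-selection setup, for which ridge regression on SNP genotypes is a standard baseline. The sketch below is that baseline on simulated data, not Avalo's method: genotypes are minor-allele counts (0/1/2), the trait depends on a handful of causal markers, and the fitted model ranks held-out "seeds" by predicted performance.

```python
import numpy as np

rng = np.random.default_rng(7)
n_train, n_test, n_snps = 300, 100, 200

# Simulated genotypes (0/1/2 minor-allele counts) and a sparse trait:
# 10 causal SNPs plus environmental noise.
X = rng.integers(0, 3, size=(n_train + n_test, n_snps)).astype(float)
true_effects = np.zeros(n_snps)
true_effects[rng.choice(n_snps, 10, replace=False)] = rng.normal(0, 1, 10)
y = X @ true_effects + rng.normal(0, 0.5, n_train + n_test)

X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

# Ridge regression: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_snps), X_tr.T @ y_tr)

pred = X_te @ w
r = np.corrcoef(pred, y_te)[0, 1]
print(f"predictive correlation on held-out seeds: {r:.2f}")
```

In practice the phenotypes used for training still come from field trials, but once fitted, a model like this lets breeders genotype a seed and skip the growing seasons for most candidates, which is where the reported speed-up comes from.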
