
The HiP framework, developed at MIT CSAIL, creates detailed plans for robots by combining the expertise of three different foundation models, helping them execute multistep tasks in households, factories, and on construction sites.


“All we want to do is take existing pre-trained models and have them successfully interface with each other,” says Anurag Ajay, a PhD student in the MIT Department of Electrical Engineering and Computer Science (EECS) and a CSAIL affiliate. “Instead of pushing for one model to do everything, we combine multiple ones that leverage different modalities of internet data. When used in tandem, they help with robotic decision-making and can potentially aid with tasks in homes, factories, and construction sites.”

These models also need some form of “eyes” to understand the environment they’re operating in and correctly execute each sub-goal. The team used a large video diffusion model, which gathers geometric and physical information about the world from internet footage, to augment the initial planning completed by the LLM. The video model then generates an observation trajectory plan, refining the LLM’s outline to incorporate this physical knowledge.

This process, known as iterative refinement, allows HiP to reason about its ideas, taking in feedback at each stage to generate a more practical outline. The flow of feedback is similar to writing an article: an author sends a draft to an editor, and once those revisions are incorporated, the publisher reviews the piece for any last changes and finalizes it.
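The refinement loop described above can be sketched in a few lines. This is an illustrative toy, not HiP's actual implementation: every function name here (`llm_propose_subgoals`, `video_model_refine`, `iterative_refinement`) is a hypothetical stand-in, and the "models" are string-manipulating stubs that only mimic the flow of feedback between an LLM planner and a video model.

```python
def llm_propose_subgoals(task):
    """Stand-in for the LLM planner: break a task into sub-goals."""
    return [f"{task}: step {i}" for i in range(1, 4)]

def video_model_refine(subgoals, feedback):
    """Stand-in for the video diffusion model: fold physical
    feedback from observations into each sub-goal."""
    return [f"{goal} [{feedback}]" for goal in subgoals]

def iterative_refinement(task, rounds=2):
    # The LLM drafts an initial outline of sub-goals...
    plan = llm_propose_subgoals(task)
    for r in range(rounds):
        # ...then each pass of feedback makes the outline more
        # practical, like an editor revising an author's draft.
        plan = video_model_refine(plan, feedback=f"physics pass {r + 1}")
    return plan

plan = iterative_refinement("stack blocks")
print(plan[0])  # first sub-goal, carrying both rounds of refinement
```

The point of the sketch is only the shape of the loop: the plan is never produced in one shot, but repeatedly revised as each model contributes what its modality knows.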

A team of computer scientists led by the University of Massachusetts Amherst recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct.

This new method, called Baldur, leverages the artificial intelligence power of large language models (LLMs), and when combined with the state-of-the-art tool Thor, yields unprecedented efficacy of nearly 66%. The team was recently awarded a Distinguished Paper award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

“We have unfortunately come to expect that our software is buggy, despite the fact that it is everywhere and we all use it every day,” says Yuriy Brun, professor in the Manning College of Information and Computer Sciences at UMass Amherst and the paper’s senior author.

Organoid intelligence involves growing mini-brains from human stem cells, a technology with potential benefits for medical research and treatment.

However, there are significant ethical concerns about the possibility of creating conscious entities and the potential for misuse. Organoid intelligence could offer valuable insights into neurological diseases, but we must establish a framework for the creation and treatment of these organoids to ensure their ethical use. As we continue to develop this technology, we must proceed with caution, given the dire consequences misuse could bring.


Artificial intelligence can accelerate the process of finding and testing new materials, and now researchers have used that ability to develop a battery that is less dependent on the costly mineral lithium.

Lithium-ion batteries power many devices that we use every day as well as electric vehicles. They would also be a necessary part of a green electric grid, as batteries are required to store renewable energy from wind turbines and solar panels. But lithium is expensive and mining it damages the environment. Finding a replacement for this crucial metal could be costly and time-consuming, requiring researchers to develop and test millions of candidates over the course of years. Using AI, Nathan Baker at Microsoft and his colleagues accomplished the task in months. They designed and built a battery that uses up to 70 per cent less lithium than some competing designs.

Rishi Sunak needs to decide whether he wants to back the UK’s creative industries or gamble everything on an artificial intelligence boom, the chief executive of Getty Images has said.

Craig Peters, who has led the image library since 2019, spoke out amid growing anger from the creative and media sector at the harvesting of their material for “training data” for AI companies. His company is suing a number of AI image generators in the UK and US for copyright infringement.