Robotics goes mainstream. From humanoids to autonomous vehicles (AVs), Physical AI is poised to reshape labor and create massive new investment opportunities.
Given the recent explosion of large language models (LLMs) that can make convincingly human-like statements, it is no surprise that attention has increasingly turned to getting these models to explain how they reach their decisions. But how can we be sure that what they’re saying is the truth?
In a new paper, researchers from Microsoft and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) propose a novel method for measuring LLM explanations with respect to their “faithfulness”—that is, how accurately an explanation represents the reasoning process behind the model’s answer.
As lead author and Ph.D. student Katie Matton explains, faithfulness is no minor concern: if an LLM produces explanations that are plausible but unfaithful, users might develop false confidence in its responses and fail to recognize when recommendations are misaligned with their own values, like avoiding bias in hiring.
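To make the idea of faithfulness concrete, here is a minimal, hypothetical sketch of a counterfactual consistency check: if an explanation claims a particular input feature drove the decision, changing that feature should change the decision. This is only an illustration of the concept, not the measurement method proposed in the paper; `query_model`, the toy decision rule, and the hiring-style fields are all invented for the example.

```python
# Toy counterfactual check of explanation faithfulness (illustration only;
# NOT the measurement method from the Microsoft/MIT CSAIL paper).
# Intuition: if an explanation cites a feature as decisive, changing that
# feature should change the decision.

def query_model(candidate: dict) -> tuple[str, str]:
    """Stand-in for an LLM call (hypothetical).

    Returns (decision, cited_feature), where cited_feature is the field the
    model's explanation claims was decisive. This toy model decides on the
    degree but (unfaithfully) always cites years of experience.
    """
    decision = "hire" if candidate["degree"] == "PhD" else "reject"
    return decision, "experience_years"

def explanation_is_suspect(candidate: dict, alternatives: dict) -> bool:
    """Flag the explanation if perturbing the cited feature changes nothing."""
    decision, cited = query_model(candidate)
    perturbed = dict(candidate)
    perturbed[cited] = alternatives[cited]      # swap in an alternative value
    new_decision, _ = query_model(perturbed)
    return new_decision == decision             # unchanged decision -> suspect

candidate = {"experience_years": 3, "degree": "BSc", "gender": "female"}
alternatives = {"experience_years": 10, "degree": "PhD", "gender": "male"}
print(explanation_is_suspect(candidate, alternatives))  # True: cited feature had no effect
```

In this toy setup the explanation is flagged because the decision actually hinges on the degree rather than the cited experience, which is exactly the kind of plausible-but-unfaithful behavior the article warns about.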
From a data platform perspective, teams responsible for maintaining efficient, scalable infrastructure can benefit from M1’s support for structured function calling and its compatibility with automated pipelines. Its open-source nature allows teams to tailor performance to their stack without vendor lock-in.
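As a concrete example of the structured function calling mentioned above, the sketch below sends a tool-calling request to a self-hosted deployment, assuming the model is exposed through an OpenAI-compatible endpoint (for example, via vLLM). The base URL, model identifier, and `run_pipeline_job` tool are placeholder assumptions, not documented values.

```python
# Illustrative sketch of structured function calling against a self-hosted
# MiniMax-M1 deployment. Assumes an OpenAI-compatible endpoint (e.g., served
# via vLLM); the URL, model name, and tool schema below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "run_pipeline_job",          # hypothetical pipeline hook
        "description": "Trigger a named data-pipeline job with parameters.",
        "parameters": {
            "type": "object",
            "properties": {
                "job_name": {"type": "string"},
                "partition_date": {"type": "string", "format": "date"},
            },
            "required": ["job_name"],
        },
    },
}]

response = client.chat.completions.create(
    model="MiniMax-M1",                      # placeholder model identifier
    messages=[{"role": "user",
               "content": "Backfill the sales pipeline for 2024-01-01."}],
    tools=tools,
)

# If the model chooses to call the tool, the structured arguments arrive as
# JSON that downstream automation can validate and execute.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```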
Security leads may also find value in evaluating M1’s potential for secure, on-premises deployment of a high-capability model that doesn’t rely on transmitting sensitive data to third-party endpoints.
Taken together, MiniMax-M1 presents a flexible option for organizations looking to experiment with or scale up advanced AI capabilities while managing costs, staying within operational limits, and avoiding proprietary constraints.
Nature Communications paper:
Paper link: https://www.nature.com/articles/s41467-024-55157-2
PDF link: https://rdcu.be/d7B8C
This paper presents the Aerial Elephant Trunk (AET), a highly dexterous and compliant aerial continuum manipulator. We present the mechanical design, a shape-estimation method, a feedback controller, and a whole-body motion planning module, so that the UAV and the continuum manipulator can carry out tasks as a single system (a generic kinematic sketch follows the task list below).
AET can perform various challenging aerial manipulation tasks, including but not limited to:
1) grasping objects of various sizes and shapes;
2) traversing constrained pipelines of various shapes;
3) aerial writing/painting;
4) performing manipulation in various complex environments.
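For readers unfamiliar with continuum arms, the snippet below shows the standard piecewise-constant-curvature (PCC) model commonly used to describe the shape of a single bending segment. It is a generic textbook model, not the shape-estimation method developed for AET, and the segment length and curvature values are arbitrary.

```python
# Minimal shape sketch for a single continuum-manipulator segment using the
# standard piecewise-constant-curvature (PCC) model. This is a generic
# textbook model for continuum arms, NOT the shape-estimation method
# described in the AET paper; the numeric parameters are illustrative.
import numpy as np

def pcc_tip_pose(length: float, kappa: float, phi: float) -> np.ndarray:
    """Homogeneous transform from segment base to tip.

    length : arc length of the segment (m)
    kappa  : curvature (1/m); kappa -> 0 means a straight segment
    phi    : bending-plane angle about the base z-axis (rad)
    """
    if abs(kappa) < 1e-9:                      # straight-segment limit
        T = np.eye(4)
        T[2, 3] = length
    else:
        theta = kappa * length                 # total bending angle
        r = 1.0 / kappa                        # bending radius
        # Circular arc in the x-z plane (bending about the y-axis).
        T_arc = np.array([
            [np.cos(theta),  0.0, np.sin(theta), r * (1 - np.cos(theta))],
            [0.0,            1.0, 0.0,           0.0],
            [-np.sin(theta), 0.0, np.cos(theta), r * np.sin(theta)],
            [0.0,            0.0, 0.0,           1.0],
        ])
        # Rotate the bending plane by phi about the base z-axis.
        c, s = np.cos(phi), np.sin(phi)
        R_phi = np.array([[c, -s, 0, 0], [s, c, 0, 0],
                          [0, 0, 1, 0], [0, 0, 0, 1]])
        T = R_phi @ T_arc @ R_phi.T
    return T

# Example: a 0.4 m segment bent with curvature 2.5 1/m in the x-z plane.
print(pcc_tip_pose(0.4, 2.5, 0.0)[:3, 3])      # tip position in the base frame
```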
Although AI-based restoration methods can indeed bring new life to damaged paintings, the end result is typically a digital copy of the original painting. By contrast, a new MIT technique applies reversible repairs to the physical painting itself, in the form of a removable mask.
The process was developed by mechanical engineering graduate student Alex Kachkine, who restores paintings via traditional hand-painting techniques as a hobby.
He realized that many galleries have paintings that never get displayed because they require restoration that would take too long, and thus cost too much, to perform by hand. Using his method, however, restoration times could be reduced from weeks, months, or even years down to a matter of hours.
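For context on what the conventional digital route looks like, the sketch below inpaints the damaged regions of a scanned painting with OpenCV and extracts the proposed fill colors. This is a generic illustration of the "digital copy" approach the article contrasts with MIT's removable physical mask; it is not Kachkine's pipeline, and the file names are placeholders.

```python
# Generic digital-restoration toy: inpaint damaged regions of a scanned
# painting. Illustrates the conventional "digital copy" approach, NOT the
# MIT mask-generation pipeline; input file names are placeholders.
import cv2

scan = cv2.imread("painting_scan.png")                              # placeholder scan
damage_mask = cv2.imread("damage_mask.png", cv2.IMREAD_GRAYSCALE)   # white = paint loss

# Fill the masked losses from the surrounding paint (Telea inpainting).
digital_restoration = cv2.inpaint(scan, damage_mask, 3, cv2.INPAINT_TELEA)

# Keep only the colors proposed for the losses; a physical, removable
# overlay would need to reproduce exactly these regions.
loss_fill_colors = cv2.bitwise_and(digital_restoration, digital_restoration,
                                   mask=damage_mask)
cv2.imwrite("digital_restoration.png", digital_restoration)
cv2.imwrite("loss_fill_colors.png", loss_fill_colors)
```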
Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.
This “position bias” means that if a lawyer is using an LLM-powered virtual assistant to retrieve a certain phrase in a 30-page affidavit, the LLM is more likely to find the right text if it is on the initial or final pages.
MIT researchers have discovered the mechanism behind this phenomenon.
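Position bias is easy to probe empirically. The sketch below plants a single target sentence at different relative depths in a synthetic long document and measures how often a model retrieves it; lower accuracy in the middle is the signature of the effect. It is a generic probe, not the MIT researchers' analysis of the mechanism, and `ask_llm` is a caller-supplied hook for whatever LLM client is in use.

```python
# Generic probe for position bias (illustration only; not the MIT analysis
# of the underlying mechanism). Plant one target sentence at varying depths
# in a long synthetic document and record retrieval accuracy per depth.

FILLER = "This paragraph contains routine background material. " * 5
TARGET = "The indemnification clause appears in Section 14.2."
QUESTION = "Which section contains the indemnification clause? Answer briefly."

def build_document(depth: float, n_paragraphs: int = 200) -> str:
    """Place TARGET at a relative depth in [0, 1] within filler paragraphs."""
    paragraphs = [FILLER] * n_paragraphs
    paragraphs.insert(int(depth * n_paragraphs), TARGET)
    return "\n\n".join(paragraphs)

def accuracy_by_depth(ask_llm, depths=(0.0, 0.25, 0.5, 0.75, 1.0), trials=10):
    """ask_llm: callable taking a prompt string and returning the model's answer."""
    results = {}
    for depth in depths:
        hits = 0
        for _ in range(trials):
            prompt = build_document(depth) + "\n\n" + QUESTION
            hits += "14.2" in ask_llm(prompt)
        results[depth] = hits / trials
    return results  # a dip near depth 0.5 is the "lost in the middle" pattern
```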