
“We want the robot to ask for enough help such that we reach the level of success that the user wants. But meanwhile, we want to minimize the overall amount of help that the robot needs,” said Allen Ren.


A recent study presented at the 7th Annual Conference on Robot Learning examines a new method for teaching robots to ask for further instructions when carrying out tasks, with the goal of improving robotic safety and efficiency. Conducted by a team of engineers from Google and Princeton University, the study could help researchers design and build better-functioning robots that mirror human traits such as humility. Engineers have recently begun using large language models, or LLMs (the technology underlying ChatGPT), to make robots more human-like, but this approach can come with drawbacks as well.

“Blindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don’t know,” said Dr. Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton University and a co-author of the study.

For the study, the researchers used this LLM method with robotic arms in laboratories in New York City and Mountain View, California. In the experiments, the robots were asked to perform a series of tasks, such as placing bowls in a microwave or rearranging items on a counter. The LLM algorithm assigned probabilities to the candidate actions based on the instructions and prompted the robot to ask for help when a certain probability threshold was crossed. For example, a human would ask the robot to place one of two bowls in the microwave without saying which one. The ambiguity would trigger the LLM algorithm, causing the robot to ask for clarification.
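The threshold logic described above can be illustrated with a minimal sketch. This is not the study's actual code; the function name, the threshold value, and the rule for keeping "plausible" options are all assumptions made here for illustration. The idea is simply that the robot acts only when one option is confident enough, and otherwise asks the human to disambiguate.

```python
# Hypothetical sketch of threshold-based help-asking (not the study's code).
# An LLM scores candidate actions; if no single action is confident enough,
# the robot asks for clarification instead of acting.

def decide_or_ask(option_probs, threshold=0.8):
    """Return ('act', option) if one option clearly dominates,
    else ('ask', plausible_options) so a human can disambiguate."""
    best = max(option_probs, key=option_probs.get)
    if option_probs[best] >= threshold:
        return ("act", best)
    # Otherwise keep every option that remains plausible, most likely first.
    plausible = [o for o, p in sorted(option_probs.items(),
                                      key=lambda kv: -kv[1])
                 if p >= 1 - threshold]
    return ("ask", plausible)

# Example: two bowls, and the instruction didn't say which one.
probs = {"place metal bowl in microwave": 0.48,
         "place plastic bowl in microwave": 0.46,
         "do nothing": 0.06}
action, detail = decide_or_ask(probs)
# Here neither bowl reaches the threshold, so the robot asks which was meant.
```

In this toy version the robot asks whenever its top-ranked action falls below the confidence threshold, which mirrors the behavior described in the study at a high level.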

LONDON, Nov 30 (Reuters) — The president of tech giant Microsoft (MSFT.O) said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.

OpenAI cofounder Sam Altman earlier this month was removed as CEO by the company’s board of directors, but was swiftly reinstated after a weekend of outcry from employees and shareholders.

Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.

In OpenAI's tumultuous week of November 21, a series of uncontrolled outcomes, each with its own significance, few would have predicted the eventual result: Sam Altman reinstated as CEO of OpenAI, with a new board in tow, all in five days.

While the official grounds for Sam Altman’s alleged lack of transparency with the board, and the distrust that ultimately led to his ousting, remain unclear, what was apparent was Microsoft’s complete backing of Altman and the ensuing lack of support for the original board and its decision. Everyone is now left to ask why a board that had control of the company was unable to effectively oust an executive despite its members’ legitimate safety concerns. And why was a structure put in place to mitigate the risk of unilateral control over artificial general intelligence usurped by an investor, the very entity the structure was designed to guard against?

The explosive growth of generative AI over the last year has been truly phenomenal. Kick-started by the public release of ChatGPT (was it really only a year ago?), it’s now everywhere. Keen to ride the wave, every app from Office to eBay has been adding generative capabilities, and growing numbers of us are finding uses for it in our everyday and professional lives.

Given its nature, it’s not surprising that content creators, in particular, have found it a powerful addition to their toolset. Marketing agencies, advertising creatives, news organizations and social media influencers have been among the most enthusiastic early adopters.

While generative AI brings great opportunities for improving efficiency and automating the manual, repetitive elements of creative work, it also throws up significant challenges. Issues around copyright, spam content, hallucination, bias, and the formulaic nature of algorithmic creation all need to be considered by professionals planning to adopt it into their workflows.