
Large Language Models (LLMs) have shown strong capabilities across natural language tasks such as text summarization, question answering, and code generation, emerging as a powerful solution to many real-world problems. One area where these models still struggle, though, is goal-directed conversation, where the model must accomplish a goal over the course of a dialogue, for example, acting as an effective travel agent that produces tailored travel plans. In practice, they tend to give verbose, non-personalized responses.

Models trained with supervised fine-tuning or single-step reinforcement learning (RL) commonly struggle with such tasks because they are not optimized for the overall outcome of a conversation that unfolds over multiple turns. They also handle uncertainty in such conversations poorly. In this paper, researchers from UC Berkeley explore a new method for adapting LLMs with RL for goal-directed dialogue. Their contributions include an optimized zero-shot algorithm and a novel system called the imagination engine (IE), which generates task-relevant and diverse questions to train downstream agents.
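To make the imagination engine idea concrete, the sketch below shows how such a component might prompt an LLM to synthesize persona-conditioned training dialogues in which the agent asks clarifying questions. The personas, the prompt wording, and the `generate` placeholder are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an "imagination engine": prompt an LLM to imagine
# diverse, task-relevant dialogues that a downstream agent can be trained on.
# `generate` stands in for any text-generation backend and is an assumption.

from typing import Callable, List

def generate(prompt: str) -> str:
    """Placeholder LLM call; swap in a real text-generation backend."""
    raise NotImplementedError

def imagine_dialogues(task: str, personas: List[str], llm: Callable[[str], str]) -> List[str]:
    """Ask the LLM to synthesize one full agent/user dialogue per persona."""
    dialogues = []
    for persona in personas:
        prompt = (
            f"Task: {task}\n"
            f"User persona: {persona}\n"
            "Write a realistic multi-turn conversation in which the agent asks "
            "clarifying questions and the user reveals preferences gradually.\n"
            "Format each turn as 'Agent:' or 'User:'."
        )
        dialogues.append(llm(prompt))
    return dialogues

# Example usage (assumed personas for a travel-agent task):
# synthetic = imagine_dialogues(
#     "Plan a personalized trip",
#     ["budget backpacker", "family with young kids", "luxury traveler"],
#     generate,
# )
```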

The IE uses an LLM to generate possible conversational scenarios, but the IE cannot produce effective agents by itself. To make an agent effective at achieving the desired outcome, multi-step reinforcement learning is necessary to determine the optimal strategy. The researchers make one modification to this approach: instead of using any on-policy samples, they use offline value-based RL to learn a policy from the synthetic data itself.
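As a rough illustration of the offline value-based step, the toy Q-learning sketch below fits action values purely from a fixed set of (state, action, reward, next state) transitions, the kind that could be extracted from imagined dialogues. The state and action encodings are toy assumptions, not the paper's actual algorithm.

```python
# Toy offline Q-learning over transitions harvested from synthetic dialogues.
# States and actions are abstract IDs here; a real system would encode the
# dialogue history with the LLM. This sketches only the offline value-based idea.

from collections import defaultdict

def offline_q_learning(transitions, gamma=0.95, lr=0.1, epochs=50):
    """transitions: list of (state, action, reward, next_state, done) tuples."""
    q = defaultdict(float)                      # Q[(state, action)] -> value
    actions = {a for _, a, _, _, _ in transitions}
    for _ in range(epochs):                     # sweep the fixed dataset repeatedly
        for s, a, r, s_next, done in transitions:
            target = r
            if not done:
                target += gamma * max(q[(s_next, b)] for b in actions)
            q[(s, a)] += lr * (target - q[(s, a)])  # TD update toward the target
    return q

# Example: two imagined dialogue outcomes; asking a clarifying question first
# ("ask") leads to a better final outcome than recommending immediately.
data = [
    ("start", "ask", 0.0, "informed", False),
    ("informed", "recommend", 1.0, "end", True),
    ("start", "recommend", 0.2, "end", True),
]
q_values = offline_q_learning(data)
best = max(["ask", "recommend"], key=lambda a: q_values[("start", a)])
print(best)  # expected: "ask"
```

In this toy dataset, gathering information before recommending earns a higher return than recommending immediately, so offline Q-learning learns to prefer the clarifying question even though no new conversations were collected.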

With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans.

These multimaterial 3D printing systems utilize thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used.

Researchers from MIT, the MIT spinout Inkbit, and ETH Zurich have developed a new 3D inkjet printing system that works with a much wider range of materials. Their printer utilizes computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time to ensure no areas have too much or too little material.
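The closed-loop idea can be sketched in a few lines: compare the scanned height map against the target layer height and scale each nozzle position's next droplet accordingly. The array shapes, gain, and clamping below are illustrative assumptions, not the controller the researchers actually built.

```python
import numpy as np

def plan_next_layer(scanned_height, target_height, nominal_drop, gain=0.8):
    """Adjust per-nozzle droplet volume from a height-error map.

    scanned_height, target_height: 2D arrays (mm), one value per nozzle position.
    nominal_drop: droplet volume that deposits one nominal layer.
    Returns per-position droplet volumes, clamped to the printable range.
    """
    error = target_height - scanned_height          # positive where material is low
    correction = gain * (error / np.maximum(target_height, 1e-6))
    drops = nominal_drop * (1.0 + correction)       # more resin where low, less where high
    return np.clip(drops, 0.0, 2.0 * nominal_drop)  # a nozzle cannot deposit negative volume

# Example: a surface that sagged slightly in the middle gets extra resin there.
target = np.full((4, 4), 0.10)                        # 100-micron target layer
scanned = target - 0.02 * np.pad(np.ones((2, 2)), 1)  # 20-micron dip in the center
print(plan_next_layer(scanned, target, nominal_drop=1.0))
```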

Greg Brockman, OpenAI co-founder, is also joining Microsoft to lead a new advanced AI research team.

Microsoft is hiring former OpenAI CEO Sam Altman and co-founder Greg Brockman.


Altman was fired from OpenAI on Friday, after the board said it “no longer has confidence in his ability to continue leading OpenAI.” After a weekend of negotiations to potentially bring Altman back to OpenAI, Microsoft CEO Satya Nadella announced that both Sam Altman and Greg Brockman will be joining to lead Microsoft’s new advanced AI research team.

“We’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team,” says Nadella. “We look forward to moving quickly to provide them with the resources needed for their success.”

On Wednesday, a collaborative whiteboard app maker called “tldraw” made waves online by releasing a prototype of a feature called “Make it Real” that lets users draw a picture of a piece of software and bring it to life using AI. The feature uses OpenAI’s GPT-4V API to visually interpret a vector drawing and turn it into functioning Tailwind CSS and JavaScript web code that can replicate user interfaces or even produce simple implementations of games like Breakout.
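Conceptually, the pipeline resembles the sketch below: send a screenshot of the drawing to a vision-capable chat model and ask for a single self-contained HTML file built with Tailwind and vanilla JavaScript. The prompt text and model name are assumptions for illustration; tldraw's actual prompt and post-processing are more involved.

```python
import base64
from openai import OpenAI  # OpenAI Python SDK v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def drawing_to_html(png_path: str) -> str:
    """Send a screenshot of the canvas to a vision model and get back HTML."""
    with open(png_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # vision-capable model at the time; subject to change
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Turn this UI mockup into a single self-contained HTML file "
                         "using Tailwind CSS (via CDN) and vanilla JavaScript."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        max_tokens=3000,
    )
    return response.choices[0].message.content

# html = drawing_to_html("mockup.png")
```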

Users can experiment with a live demo of Make It Real online. However, running it requires providing an API key from OpenAI, which is a security risk. If others intercept your API key, they could use it to rack up a very large bill in your name (OpenAI charges by the amount of data moving into and out of its API). Those technically inclined can run the code locally, but it will still require OpenAI API access.