Researchers have achieved an advance in neutron shielding, a critical aspect of radiation protection. The breakthrough is poised to transform the neutron shielding industry by offering a cost-effective solution applicable to a wide range of material surfaces.

A research team led by Professor Soon-Yong Kwon of the Graduate School of Semiconductor Materials and Devices Engineering and the Department of Materials Science and Engineering at UNIST has developed a neutron shielding film capable of blocking neutron radiation. The shield can be produced over large areas and is both lightweight and flexible.

The team’s paper is published in the journal Nature Communications.

Large Language Models (LLMs) have shown strong capabilities across natural language tasks such as text summarization, question answering, and code generation, emerging as a powerful solution to many real-world problems. One area where these models still struggle, however, is goal-directed conversation, where they must accomplish a goal over the course of a dialogue, for example acting as an effective travel agent that provides tailored travel plans. In practice, they tend to give verbose, non-personalized responses.

Models trained with supervised fine-tuning or single-step reinforcement learning (RL) commonly struggle with such tasks because they are not optimized for the overall outcome of a conversation spanning multiple interactions; they also handle the uncertainty inherent in such conversations poorly. In this paper, researchers from UC Berkeley explore a new method for adapting LLMs to goal-directed dialogue with RL. Their contributions include an optimized zero-shot algorithm and a novel system called the imagination engine (IE), which generates task-relevant and diverse dialogues to train downstream agents.
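The paper's actual imagination-engine prompts and models are not reproduced here; the following is a minimal Python sketch of the general idea, using a stubbed `llm_generate` call and hypothetical travel-agent personas to show how an LLM might be prompted to imagine diverse, task-relevant conversations for downstream training.

```python
# Minimal sketch of the imagination-engine idea: prompt an LLM to
# imagine diverse, task-relevant conversations that can later be used
# to train a downstream agent. `llm_generate` and the personas below
# are hypothetical placeholders, not the paper's actual prompts.

import random

def llm_generate(prompt: str) -> str:
    """Stub for a call to any text-generation model. Returns canned
    text here so the sketch runs end to end; swap in a real client."""
    return f"[imagined dialogue for: {prompt[:40]}...]"

# Varying the simulated user keeps the synthetic data diverse.
PERSONAS = [
    "a traveler who is unsure of their budget",
    "a traveler who knows the destination but not the dates",
    "a traveler with conflicting preferences",
]

def imagine_dialogue(task: str) -> str:
    """Ask the LLM to imagine one complete goal-directed conversation."""
    persona = random.choice(PERSONAS)
    prompt = (
        f"Task: {task}\n"
        f"Write a complete conversation between an agent and {persona}. "
        "The agent should ask clarifying questions to resolve uncertainty "
        "before making a recommendation."
    )
    return llm_generate(prompt)

# Build a synthetic dataset of imagined conversations.
synthetic_dialogues = [imagine_dialogue("act as a travel agent") for _ in range(100)]
```

The design point is diversity: by varying the simulated users and scenarios, the synthetic data covers the kinds of uncertainty a real agent must learn to resolve.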

The IE uses an LLM to generate plausible scenarios, but imagined data alone cannot produce an effective agent. To make the agent effective at achieving the desired outcome, multi-step reinforcement learning is needed to determine the optimal strategy. The researchers make one modification to the standard approach: instead of collecting on-policy samples, they use offline value-based RL to learn a policy directly from the synthetic data.
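The paper's exact algorithm is not reproduced here; the sketch below illustrates the general offline, value-based recipe with a toy Q-learning-style update over a fixed set of dialogue transitions. Treating dialogue histories as literal dictionary keys is purely illustrative; a real implementation would represent states with a language model.

```python
# Toy sketch of offline, value-based RL on synthetic dialogue data.
# This is NOT the paper's algorithm; it is a generic Q-learning-style
# update over a fixed dataset, with dialogue histories used directly
# as dictionary keys purely for illustration.

from collections import defaultdict

GAMMA = 0.95  # discount factor (assumed for illustration)
ALPHA = 0.1   # learning rate (assumed for illustration)

# Each transition: (state, action, reward, next_state, done), where a
# state is the dialogue history so far and an action is the agent's
# next utterance. Two hand-written transitions stand in for the
# LLM-imagined dataset.
dataset = [
    ("user: I want a trip.",
     "agent: Where would you like to go?",
     0.0,
     "user: I want a trip. | user: Somewhere warm.",
     False),
    ("user: I want a trip. | user: Somewhere warm.",
     "agent: How about a beach resort in July?",
     1.0,
     "",
     True),
]

Q = defaultdict(float)  # Q-values keyed by (state, action)

def offline_q_update(transitions, epochs=50):
    """Sweep the fixed dataset repeatedly; no new rollouts are collected."""
    for _ in range(epochs):
        for state, action, reward, next_state, done in transitions:
            # Bootstrap only from actions actually seen in the dataset,
            # a common guard against overestimating unseen actions.
            next_actions = [a for (s, a, *_rest) in transitions if s == next_state]
            target = reward
            if not done and next_actions:
                target += GAMMA * max(Q[(next_state, a)] for a in next_actions)
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])

offline_q_update(dataset)
print(Q[("user: I want a trip.", "agent: Where would you like to go?")])
```

Because the update bootstraps only from state-action pairs present in the dataset, no fresh on-policy rollouts are required, which is the point of the modification described above.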

Is reality indistinguishable from information? Is consciousness a self-aware, self-modifying information field? Does information have intrinsic meaning? How does meaningfulness arise? How do sentient and non-sentient entities differ in the way they perceive and process information?…

Full script here: http://crackingthenutshell.org.

Ken, a 36-year-old Uber and Lyft driver in Houston, drives about four to five hours per day — in addition to his full-time analyst job — to supplement his income. Last year, he earned a combined $25,000 driving for Uber and Lyft from about 2,000 trips, according to screenshots of earnings documents viewed by Business Insider.

While he accepts most rides, he said he prioritizes trips that pay at least $0.80 to $1.00 per mile, excluding vehicle expenses; a ride’s base pay and distance are displayed in the app. He also tries to avoid trips that take him too far out of Houston, because he worries he won’t be able to find rides for the drive back and will be stuck with what he calls “empty miles.”

“I have seen a 50-mile trip that only $20 was offered,” Ken previously told Business Insider. “I wouldn’t be doing that.” That works out to $0.40 per mile, half his stated minimum. He asked that his last name not be included for fear of professional repercussions.