Brooklyn-based studio Modu has employed a series of techniques that lower ambient air temperature in order to help cool the interior and exterior of this Houston building.

Modu inserted pocket gardens, vertical fins, trellises and fluted concrete walls along the length of the exterior in order to create “outdoor comfort” and reduce Houston heat.

The Promenade building is a 15,000-square-foot (1,400-square-metre) centre that will host health and wellness clients across several offices.

Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.

There is the technological question of whether computers can be intelligent, and also the scientific question of how it is that humans and other animals are intelligent. Answering either question requires an agreement about what the word ‘intelligence’ means. Here, I will both follow common usage and avoid making it a matter of definition that only adult humans could possibly be intelligent by assuming that to be intelligent is to have the ability to solve complex and cognitively demanding problems. If we understand intelligence this way, the question of whether computers can be intelligent has already been answered. With apologies to Dreyfus and Lanier, it has been clear for years that the answer is an emphatic ‘yes’. The recent advances made by ChatGPT and other large language models (LLMs) are the cherry on top of decades of technological innovation.

Researchers in the expansive domain of materials science confront a formidable challenge: efficiently distilling essential insights from densely packed scientific texts. The task requires navigating complex content and generating coherent question-answer pairs that capture the essence of the material.

Current methodologies within this domain often lean on general-purpose language models for information extraction. However, these approaches struggle with text refinement and the accurate incorporation of equations. In response, a team of MIT researchers introduced MechGPT, a novel model grounded in a pretrained language model. This approach employs a two-step process, utilizing a general-purpose language model to formulate insightful question-answer pairs. Beyond mere extraction, MechGPT enhances the clarity of key facts.
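The two-step distillation described above can be sketched as follows. This is a hedged illustration, not MechGPT's actual pipeline: the `ask_llm` function is a hypothetical stand-in for a call to a general-purpose language model, and the prompts are invented for the example.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; in practice this would hit a real model API."""
    return f"[model reply to: {prompt[:40]}...]"

def distill_qa(chunk: str) -> dict:
    """Turn one chunk of scientific text into a question-answer pair."""
    # Step 1: ask the model to pose an insightful question the chunk answers.
    question = ask_llm(
        "Read the following text and write one insightful question "
        f"it answers:\n\n{chunk}"
    )
    # Step 2: ask the model to answer that question using only the chunk,
    # restating the key facts clearly rather than quoting verbatim.
    answer = ask_llm(
        f"Using only this text:\n\n{chunk}\n\n"
        f"Answer the question, clarifying the key facts: {question}"
    )
    return {"question": question, "answer": answer}

pair = distill_qa("Spider silk exhibits high toughness due to ...")
print(sorted(pair.keys()))
```

Running the distiller over every chunk of a corpus yields the question-answer pairs used for fine-tuning.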

The journey of MechGPT commences with a meticulous training process implemented in PyTorch within the Hugging Face ecosystem. Based on the Llama 2 transformer architecture, the model features 40 transformer layers and leverages rotary positional embeddings to facilitate extended context lengths. Employing a paged 32-bit AdamW optimizer, the training process attains a loss of approximately 0.05. The researchers introduce Low-Rank Adaptation (LoRA) during fine-tuning to augment the model's capabilities. This involves integrating additional trainable layers while freezing the original pretrained model, preventing the model from erasing its initial knowledge base. The result is heightened memory efficiency and accelerated training throughput.
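The LoRA mechanism described above can be sketched in a few lines of NumPy. The dimensions and names here are illustrative, not MechGPT's actual configuration: the pretrained weight `W` stays frozen, and only the small low-rank factors `A` and `B` would be trained, which is where the memory savings come from.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 8, 16    # illustrative sizes

W = rng.normal(size=(d_in, d_out))          # pretrained weight: frozen
A = rng.normal(size=(d_in, rank)) * 0.01    # trainable low-rank factor
B = np.zeros((rank, d_out))                 # zero-initialised, so the
                                            # adapter starts as a no-op

def lora_forward(x):
    """Forward pass: frozen path plus scaled low-rank update."""
    return x @ W + (x @ A @ B) * (alpha / rank)

x = rng.normal(size=(4, d_in))

# Before any training, B == 0, so the adapter changes nothing:
assert np.allclose(lora_forward(x), x @ W)

# Training updates only A and B; W never changes, and the trainable
# parameter count is far smaller than the frozen one.
trainable, frozen = A.size + B.size, W.size
print(trainable, frozen)  # 1024 vs 4096
```

Because the adapter is additive, the pretrained knowledge encoded in `W` is preserved exactly, matching the freezing behaviour described in the article.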

An advancement in neutron shielding, a critical aspect of radiation protection, has been achieved. This breakthrough is poised to revolutionize the neutron shielding industry by offering a cost-effective solution applicable to a wide range of material surfaces.

A research team, led by Professor Soon-Yong Kwon in the Graduate School of Semiconductor Materials and Devices Engineering and the Department of Materials Science and Engineering at UNIST, has successfully developed a neutron shielding film capable of blocking the neutrons present in radiation. This innovative shield is not only available in large areas but also lightweight and flexible.

The team’s paper is published in the journal Nature Communications.

The simple story line that 'Gell-Mann and Zweig invented quarks in 1964 and the quark model was generally accepted after 1968 when deep inelastic electron scattering experiments at SLAC showed that they are real' contains elements of the truth, but is not true. This paper describes the origins and development of the quark model until it became generally accepted in the mid-1970s, as witnessed by a spectator and sometime participant who joined the field as a graduate student in October 1964. It aims to ensure that the role of Petermann is not overlooked and that Zweig and Bjorken get the recognition they deserve, and to clarify the role of Serber.

This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next.


Moritz Hocher.

Traditional systems use nozzles to deposit tiny drops of resin, which are smoothed over with a scraper or roller and then cured with UV light. However, this smoothing limits the materials that can be used, since slow-curing resins can be squished or smeared.