Tens of thousands of living brain cells have been used to build a simple computer that can recognise patterns of light and electricity. It could eventually be used in robotics.
A team of physicists and materials scientists at the University of Colorado has developed a way to better insulate the double-paned glass used in windows by adding a transparent aerogel. In their paper published in the journal Nature Energy, the group describes how the aerogel is made and how much of a boost in energy efficiency can be expected from windows that use it. The same issue of Nature Energy also includes a Research Briefing outlining the team's work.
Since most homeowners prefer windows that let them see outside, some heat loss is inevitable. Over the past several decades, heat loss through windows has been reduced by adding a second pane of glass; the two panes are typically separated by a gap of insulating air. Still, such windows do not provide nearly the same degree of insulation as insulated walls. In this new work, the Colorado team has found a way to improve the insulating properties of double-paned glass.
To make the aerogel (a gel whose pores are filled with air), the research team soaked cellulose nanofibers extracted from wood in water. The nanofibers were then removed from the water and dunked in an ethanol solution. Once saturated, they were heated in a pressurized oven, which forced the ethanol in the pores to be replaced with air. Finally, the resulting transparent nanofiber material was coated with a water-repellent layer to prevent condensation once installed between panes of glass.
Researchers have found that an activated form of the hunger hormone ghrelin can help people with heart failure by increasing the heart’s pump capacity.
8 years of cost reduction in 5 weeks: how Stanford's Alpaca model changes everything, including the economics of OpenAI and GPT-4. The breakthrough, built on self-instruct, has big implications for Apple's secret large language model, Baidu's ErnieBot, Amazon's efforts, and even governmental projects like the newly announced BritGPT.
I will go through how Stanford put the model together, why it cost so little, and demonstrate it in action against ChatGPT and GPT-4. And what are the implications of short-circuiting human annotation like this? Drawing on a tweet by Eliezer Yudkowsky, I delve into the workings of the model and the questions it raises.
Web Demo: https://alpaca-ai0.ngrok.io/
Alpaca: https://crfm.stanford.edu/2023/03/13/alpaca.html
Ark Forecast: https://research.ark-invest.com/hubfs/1_Download_Files_ARK-I…_Final.pdf
Eliezer Tweet: https://twitter.com/ESYudkowsky/status/1635577836525469697
Dear silly Internet saying “but knowledge distillation is already known”: the key idea here is that the fine-tuning above the base model is comparatively much *easier* and *cheaper* to extract and re-imbue, if you have a new base comparable to the earlier base model.
(Spelling…
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) March 14, 2023
Self-Instruct: https://arxiv.org/pdf/2212.10560.pdf
InstructGPT: https://openai.com/research/instruction-following
OpenAI Terms: https://openai.com/policies/terms-of-use
MMLU Test: https://arxiv.org/pdf/2009.03300.pdf
Apple LLM: https://www.nytimes.com/2023/03/15/technology/siri-alexa-goo…gence.html
GPT 4 API: https://openai.com/pricing
Llama Models: https://arxiv.org/pdf/2302.13971.pdf
BritGPT: https://www.theguardian.com/technology/2023/mar/15/uk-to-inv…wn-britgpt
Amazon: https://www.businessinsider.com/amazons-ceo-andy-jassy-on-ch…?r=US&IR=T
AlexaTM: https://arxiv.org/pdf/2208.01448.pdf
Baidu Ernie: https://www.nytimes.com/2023/03/16/world/asia/china-baidu-chatgpt-ernie.html
PaLM API: https://developers.googleblog.com/2023/03/announcing-palm-ap…suite.html
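A key step in the self-instruct pipeline referenced above is filtering model-generated instructions so that only sufficiently novel ones are added to the training pool, using ROUGE-L overlap against existing instructions. Here is a minimal, self-contained sketch of that filter in pure Python; the function names and the 0.7 threshold are illustrative, not Stanford's exact implementation:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (classic DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l(candidate, reference):
    """ROUGE-L F1 score between two instruction strings, on whitespace tokens."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

def keep_novel(new_instruction, pool, threshold=0.7):
    """Accept a generated instruction only if it is dissimilar to every pooled one."""
    return all(rouge_l(new_instruction, old) < threshold for old in pool)
```

In use, a near-duplicate like "Translate this sentence into French." would be rejected against a pool containing "Translate the sentence into French.", while an unrelated instruction passes; this cheap overlap check is what keeps the synthetic dataset diverse without any human annotation.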
In a Phase II trial led by researchers from The University of Texas MD Anderson Cancer Center, adding ipilimumab to a neoadjuvant (pre-surgical) combination of nivolumab plus platinum-based chemotherapy resulted in a major pathologic response (MPR) in half of all treated patients with early-stage, resectable non-small cell lung cancer (NSCLC).
New findings from the NEOSTAR trial, published today in Nature Medicine, provide further support for neoadjuvant immunotherapy-based treatment as an approach to reduce viable tumor at surgery and to improve outcomes in NSCLC. The combination was also associated with an increase in immune cell infiltration and a favorable gut microbiome composition.
The current study reports on the latest two arms of the NEOSTAR trial, which evaluated neoadjuvant nivolumab plus chemotherapy (double combination) and neoadjuvant ipilimumab plus nivolumab and chemotherapy (triple combination). Both treatment arms met their prespecified primary endpoint boundary of six or more patients achieving MPR, defined as 10% or less residual viable tumor (RVT) in the resected specimen at surgery; MPR is a candidate surrogate endpoint for improved survival based on prior studies.
To effectively tackle everyday tasks, robots should be able to detect the properties and characteristics of objects in their surroundings so that they can grasp and manipulate them accordingly. Humans naturally achieve this using their sense of touch, and roboticists have thus been trying to give robots similar tactile sensing capabilities.
A team of researchers at the University of Hong Kong recently developed a new soft tactile sensor that could allow robots to detect different properties of the objects they are grasping. The sensor, presented in a paper pre-published on arXiv, combines two layers of woven optical fibers with a self-calibration algorithm.
“Although there exist many soft and conformable tactile sensors on robotic applications able to decouple the normal force and shear forces, the impact of the size of object in contact on the force calibration model has been commonly ignored,” Wentao Chen, Youcan Yan, and their colleagues wrote in their paper.
Here’s a demo of The Infinite Arcade.
If people like it, I’ll publish the site later today or tomorrow!
A new paper published in the Journal of Medical Internet Research describes how generative models such as DALL-E 2, a novel deep learning model for text-to-image generation, could represent a promising future tool for image generation, augmentation, and manipulation in health care. Do generative models have sufficient medical domain knowledge to provide accurate and useful results? Dr. Lisa C Adams and colleagues explore this topic in their latest viewpoint titled “What Does DALL-E 2 Know About Radiology?”
First introduced by OpenAI in April 2022, DALL-E 2 is an artificial intelligence (AI) tool that has gained popularity for generating novel photorealistic images and artwork from textual input. Its generative capabilities are powerful because it has been trained on billions of existing text-image pairs from the internet.
To understand whether these capabilities can be transferred to the medical domain to create or augment data, researchers from Germany and the United States examined DALL-E 2’s radiological knowledge in creating and manipulating X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images.
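Probing a text-to-image model's domain knowledge in this way amounts to systematically crossing imaging modalities with anatomical regions and submitting each resulting prompt for generation. A minimal sketch of how such a prompt grid could be assembled is below; the modality and region lists and prompt wording are illustrative assumptions, not the authors' exact protocol:

```python
# Hypothetical prompt grid for probing a text-to-image model's radiology knowledge.
# The modalities and anatomical regions here are illustrative, not the study's list.
MODALITIES = ["X-ray", "CT scan", "MRI", "ultrasound"]
REGIONS = ["chest", "skull", "hand", "abdomen", "knee"]

def build_prompts(modalities, regions):
    """Cross every imaging modality with every anatomical region into text prompts."""
    return [f"{m} of the {r}" for m in modalities for r in regions]

prompts = build_prompts(MODALITIES, REGIONS)
```

Each prompt in the grid would then be sent to the model's image-generation interface, and the outputs scored by radiologists for anatomical plausibility; this kind of exhaustive crossing makes gaps in the model's medical knowledge easy to localize by modality or body region.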