
The chatbot’s reasoning was “at times medically implausible or inconsistent, which can lead to misinformation or incorrect diagnosis, with significant implications,” the report noted.

The scientists also acknowledged shortcomings in the research. The sample size was small, with only 30 cases examined. In addition, only relatively simple cases were considered, with each patient presenting a single primary complaint.

It was not clear how well the chatbot would fare with more complex cases. “The efficacy of ChatGPT in providing multiple distinct diagnoses for patients with complex or rare diseases remains unverified,” the report added.

Today’s blog is from guest contributors Alaric Wilson, Senior ISV Partner Development Manager, and Michael Gillett, Partner Technology Strategy Manager.

In the era of AI, every app has the potential to be intelligent. Independent Software Vendors (ISVs) are facing increasing pressure from customers to deliver innovative solutions that meet their demands with a more dynamic user experience. To stay competitive, ISVs are turning to cutting-edge technologies like generative AI to unlock new possibilities for their software development process. Azure OpenAI Service, powered by OpenAI’s advanced language models, is revolutionizing how ISVs innovate, providing them with unprecedented capabilities to create intelligent, adaptive, and highly customized applications.

In today’s blog, we’re sharing recent resources and examples to help ISV partners learn more about opportunities to leverage generative AI with Azure OpenAI Service and fuel their customers’ innovation efforts.

Mapping molecular structure to odor perception is a key challenge in olfaction. Here, we use graph neural networks (GNN) to generate a Principal Odor Map (POM) that preserves perceptual relationships and enables odor quality prediction for novel odorants. The model is as reliable as a human in describing odor quality: on a prospective validation set of 400 novel odorants, the model-generated odor profile more closely matched the trained panel mean (n=15) than did the median panelist. Applying simple, interpretable, theoretically-rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
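The abstract describes mapping a molecular graph to a fixed-size embedding that a Principal Odor Map can place in perceptual space. As a rough illustration of the underlying GNN idea (this is a generic message-passing sketch, not the paper’s model; all names and the toy molecule are hypothetical):

```python
# Hypothetical message-passing sketch: each atom carries a feature
# vector, neighbors' features are averaged and mixed in over several
# rounds, and the node states are mean-pooled into one fixed-size
# graph embedding regardless of molecule size.

def message_passing_embedding(features, adjacency, rounds=2, alpha=0.5):
    """features: dict node -> list[float]; adjacency: dict node -> neighbor list."""
    h = {n: list(v) for n, v in features.items()}
    dim = len(next(iter(h.values())))
    for _ in range(rounds):
        new_h = {}
        for n in h:
            nbrs = adjacency.get(n, [])
            if nbrs:
                msg = [sum(h[m][i] for m in nbrs) / len(nbrs) for i in range(dim)]
            else:
                msg = [0.0] * dim
            # blend each node's state with the averaged neighbor message
            new_h[n] = [(1 - alpha) * h[n][i] + alpha * msg[i] for i in range(dim)]
        h = new_h
    # mean-pool node states into a single graph-level embedding
    return [sum(h[n][i] for n in h) / len(h) for i in range(dim)]

# Toy "molecule": a three-atom chain with 2-dimensional atom features.
emb = message_passing_embedding(
    features={"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]},
    adjacency={"a": ["b"], "b": ["a", "c"], "c": ["b"]},
)
print(len(emb))  # fixed-size output, independent of atom count
```

In the paper’s setting, an embedding like this would be trained so that distances in the resulting space track perceptual similarity between odors.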

One-Sentence Summary: An odor map achieves human-level odor description performance and generalizes to diverse odor-prediction tasks.

The authors have declared no competing interest.

The idea that genetic modification can improve humanity isn’t new, but it has taken some interesting turns within the scientific community over the past few years. One of the most notable comes from the mind of He Jiankui, a Chinese scientist whose gene editing of human babies led to infamy and a prison sentence. Now, He, known as JK to friends, thinks that gene-edited humans could be the future of our species.


In a new report, the federal department charged with analyzing how efficiently US taxpayer dollars are spent, the Government Accountability Office, says NASA lacks transparency on the true costs of its Space Launch System rocket program.

Published on Thursday, the new report (PDF) examines the billions of dollars NASA has spent on the development of the massive rocket, which made a successful debut launch in late 2022 with the Artemis I mission. Notably, as part of the reporting process, NASA officials admitted the rocket was too expensive to support the agency’s lunar exploration efforts under the Artemis program.

“Senior NASA officials told GAO that at current cost levels, the SLS program is unaffordable,” the new report states.

Artificial intelligence (AI) large language models (LLMs) such as OpenAI’s GPT-3, GPT-3.5, and GPT-4 encode a wealth of information about how we live, communicate, and behave, and researchers are constantly finding new ways to put this knowledge to use.

A recent study conducted by Stanford University researchers has demonstrated that, with the right design, LLMs can be harnessed to simulate human behavior in a dynamic and convincingly realistic manner.

The study, titled “Generative Agents: Interactive Simulacra of Human Behavior,” explores the potential of generative models in creating an AI agent architecture that remembers its interactions, reflects on the information it receives, and plans long- and short-term goals based on an ever-expanding memory stream. These AI agents are capable of simulating the behavior of humans in their daily lives, from mundane tasks to complex decision-making processes.
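The memory-stream idea above can be sketched in a few lines: an agent appends timestamped observations, retrieves the most relevant ones by a weighted score, and periodically condenses them into higher-level “reflections.” This is a toy illustration, not the paper’s implementation; the scoring function and the `reflect` summarizer (which stands in for an LLM call) are simplified assumptions.

```python
import time

class MemoryStream:
    def __init__(self):
        self.records = []  # (timestamp, importance, text)

    def observe(self, text, importance=1.0):
        self.records.append((time.time(), importance, text))

    def retrieve(self, k=3, now=None):
        # Score each memory by recency plus importance (the paper also
        # weighs relevance to a query; omitted here for brevity).
        now = now or time.time()
        def score(rec):
            ts, imp, _ = rec
            recency = 1.0 / (1.0 + (now - ts))  # newer -> higher
            return recency + imp
        return [t for _, _, t in sorted(self.records, key=score, reverse=True)[:k]]

    def reflect(self):
        # Stand-in for an LLM call that summarizes recent memories
        # into a higher-level observation, stored back in the stream.
        recent = self.retrieve(k=5)
        summary = "reflection: " + "; ".join(recent)
        self.observe(summary, importance=2.0)
        return summary

agent = MemoryStream()
agent.observe("made coffee")
agent.observe("talked to neighbor about the election", importance=3.0)
print(agent.retrieve(k=1))  # the high-importance memory surfaces first
```

Planning in the paper builds on the same loop: the agent retrieves memories, reflects, and then drafts a day-level plan that is decomposed into finer-grained actions.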

Are large language models sentient? If they are, how would we know?

As a new generation of AI models has rendered the decades-old measure of a machine’s ability to exhibit human-like behavior, the Turing test, obsolete, the question of whether AI is ushering in a generation of self-conscious machines is stirring lively discussion.

Former Google software engineer Blake Lemoine suggested the large language model LaMDA was sentient.