
Mapping molecular structure to odor perception is a key challenge in olfaction. Here, we use graph neural networks (GNN) to generate a Principal Odor Map (POM) that preserves perceptual relationships and enables odor quality prediction for novel odorants. The model is as reliable as a human in describing odor quality: on a prospective validation set of 400 novel odorants, the model-generated odor profile more closely matched the trained panel mean (n=15) than did the median panelist. Applying simple, interpretable, theoretically-rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
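The headline evaluation (the model's odor profile landing closer to the panel mean than the median panelist does) can be sketched numerically. The sketch below is an illustrative assumption, not the paper's actual protocol: the ratings are random stand-ins, the label count and Euclidean distance are arbitrary choices, and panelists are compared leave-one-out so no rater is scored against a mean that includes themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 15 panelists rate one odorant on 5 odor labels (0-5 scale).
n_panelists, n_labels = 15, 5
ratings = rng.uniform(0, 5, size=(n_panelists, n_labels))
panel_mean = ratings.mean(axis=0)

# Hypothetical model-generated odor profile for the same odorant.
model_profile = panel_mean + rng.normal(0, 0.3, size=n_labels)

def distance(a, b):
    """Euclidean distance between two odor profiles."""
    return float(np.linalg.norm(a - b))

# Each panelist's distance to the mean of the *other* panelists
# (leave-one-out, so a rater is not compared against themselves).
panelist_dists = []
for i in range(n_panelists):
    others_mean = np.delete(ratings, i, axis=0).mean(axis=0)
    panelist_dists.append(distance(ratings[i], others_mean))

median_panelist_dist = float(np.median(panelist_dists))
model_dist = distance(model_profile, panel_mean)

print(f"model distance to panel mean:   {model_dist:.3f}")
print(f"median panelist distance (LOO): {median_panelist_dist:.3f}")
print("model beats median panelist:", model_dist < median_panelist_dist)
```

Repeating this comparison per odorant over the 400-molecule validation set, and counting how often the model wins, gives the kind of head-to-head statistic the abstract reports.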

One-Sentence Summary: An odor map achieves human-level odor description performance and generalizes to diverse odor-prediction tasks.

The authors have declared no competing interest.

Artificial intelligence (AI) large language models (LLMs) like OpenAI’s GPT-3, GPT-3.5, and GPT-4 encode a wealth of information about how we live, communicate, and behave, and researchers are constantly finding new ways to put this knowledge to use.

A recent study conducted by Stanford University researchers has demonstrated that, with the right design, LLMs can be harnessed to simulate human behavior in a dynamic and convincingly realistic manner.

The study, titled “Generative Agents: Interactive Simulacra of Human Behavior,” explores the potential of generative models in creating an AI agent architecture that remembers its interactions, reflects on the information it receives, and plans long- and short-term goals based on an ever-expanding memory stream. These AI agents are capable of simulating the behavior of a human in their daily lives, from mundane tasks to complex decision-making processes.
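The memory-stream idea described above can be sketched as a small data structure: an agent records timestamped observations, retrieves them by a blend of recency and importance, and periodically “reflects” by condensing its top memories. Everything here is a toy stand-in; the class and method names are hypothetical, and the actual paper additionally scores relevance to a query via embeddings and uses an LLM for reflection and planning.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float  # 0-1, how notable the event is
    timestamp: float = field(default_factory=time.time)

class MemoryStream:
    """Toy memory stream: retrieval blends recency and importance."""

    def __init__(self, decay: float = 0.99):
        self.memories: list[Memory] = []
        self.decay = decay  # per-second recency decay factor

    def record(self, text: str, importance: float) -> None:
        self.memories.append(Memory(text, importance))

    def retrieve(self, now: float, k: int = 3) -> list[Memory]:
        # Higher score = more recent and/or more important.
        def score(m: Memory) -> float:
            recency = self.decay ** (now - m.timestamp)
            return recency + m.importance
        return sorted(self.memories, key=score, reverse=True)[:k]

    def reflect(self, now: float) -> str:
        # Stand-in for LLM-based reflection: summarize the top memories.
        top = self.retrieve(now, k=3)
        return "Reflection on: " + "; ".join(m.text for m in top)

stream = MemoryStream()
stream.record("made coffee", importance=0.1)
stream.record("met a new neighbor", importance=0.8)
stream.record("planned a party", importance=0.9)
print(stream.reflect(time.time()))
```

The key design point the sketch preserves is that memory is append-only and retrieval is scored, so the agent’s behavior can change as its memory stream grows without any memory ever being overwritten.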

Are large language models sentient? If they are, how would we know?

As a new generation of AI models has rendered the Turing test, the decades-old measure of a machine’s ability to exhibit human-like behavior, obsolete, the question of whether AI is ushering in a generation of self-conscious machines is stirring lively discussion.

Former Google software engineer Blake Lemoine suggested the large language model LaMDA was sentient.

Mind mastery refers to intentionally developing self-awareness and discipline to take control of your thought patterns, emotional responses, and behaviors. Rather than operating on autopilot or being swept away by negativity, you respond consciously in alignment with your values and goals. Benefits of mind mastery include reduced stress, progress toward your ambitions, more fulfilling relationships, and greater overall life satisfaction.

Mastering your mind requires commitment, but small, consistent steps to steward your thoughts and manage your emotions will compound to impact your mental health and empower your life profoundly. Here are key techniques:

Practice observing your thoughts like clouds passing by without reacting or judging. Creating this mental space between stimulus and response allows you to gain perspective. Ask what evidence supports or contradicts anxious thoughts.

Advances in artificial intelligence have prompted extensive public concern about its capacity to contribute to the spread of misinformation, bias, and cybersecurity breaches, and about its potential existential threat to humanity. But, if anything, AI can aid human beings in making decisions aimed at improving social equality, safety, and productivity, and in mitigating some existential threats.

During a conference this week, Tesla CEO Elon Musk reiterated claims that the Full Self-Driving (FSD) beta is nearing higher levels of autonomy. The statements echo recent details learned from previews of a new Musk biography, highlighting the FSD system’s many developments in the last several months alone.

Musk was featured in an interview during the All-In Podcast’s 2023 Summit held on Wednesday, during which he discussed topics like Starlink, X, China, artificial intelligence, and more. Among the topics covered was a brief outro on the FSD beta, which he says is “very close” to becoming safer than a human driver without being monitored.

“Yeah, I think it’s getting very close to being in a situation where, even if there’s no human oversight or intervention, that the probability of a safe journey is higher with FSD and no supervision — like even if you’re asleep in the car — than if a person is driving. We’re very close to that,” Musk said on a video call into the summit.

Researchers unravel the mysteries of smell using machine learning. Their AI model has achieved human-level skill in describing how certain chemicals will smell, closing a critical gap in the scientific understanding of olfaction.

Beyond advancing our comprehension of smell, this technology could lead to breakthroughs in the fragrance and flavor industries, and even help create new functional scents like mosquito repellents. The study validates a first-of-its-kind data-driven map of human olfaction, which correlates chemical structure to odor perception.



Fifteen companies, ranging from Photoshop creator Adobe to ChatGPT maker OpenAI, have now taken the voluntary commitments.

Eight tech companies, including Salesforce and Nvidia, are signing on to the White House’s voluntary artificial intelligence pledge, joining a roster of prominent firms that have agreed to mitigate the risks of AI, as Washington policymakers continue to debate new regulation of the emerging technology.

Fifteen of the most influential companies in the United States have now taken the commitments, which include a promise to develop technology to identify AI-generated images and a vow to share data about safety with the government and academics.

New participants include IBM, …


The commitments will apply to the next system released by each of the companies, the administration official said.

Dana Rao, Adobe’s executive vice president and general counsel, called the commitments an “important step” in collaboration between the government and industry.