
Bacteria naturally present in the human intestine (known as the gut microbiota) can transform cholesterol-derived bile acids into powerful metabolites that strengthen anti-cancer immunity by blocking androgen signaling, according to a preclinical study led by Weill Cornell Medicine investigators. The study was published on April 15 in Cell.

“I was very surprised by our findings. As far as I know, no one has previously discovered molecules like these bile acids that can interact with the androgen receptor in this way,” said co-senior author Dr. Chun-Jun Guo, an associate professor of immunology in medicine in the Division of Gastroenterology and Hepatology and a scientist at the Jill Roberts Institute for Research in Inflammatory Bowel Disease at Weill Cornell Medicine.

Dr. David Artis, director of the Jill Roberts Institute and the Friedman Center for Nutrition and Inflammation and the Michael Kors Professor in Immunology, and Dr. Nicholas Collins, assistant professor of immunology in medicine, both at Weill Cornell Medicine, are co-senior authors of the study. Drs. Wen-Bing Jin, formerly a postdoctoral associate, and Leyi Xiao, a current postdoctoral associate in Dr. Guo’s lab, are the co-first authors of the study.

The prefrontal cortex is critical for working memory, which operates over a timescale of seconds. In this Review, Miller and Constantinidis examine how the prefrontal cortex also facilitates the integration of memory systems across other timescales. In this framework of prefrontal learning, short-term memory and long-term memory interact to serve goal-directed behaviour.

In the post on the Chinese room, while concluding that Searle’s overall thesis isn’t demonstrated, I noted that he might have had a point if he had restricted himself to a more limited assertion: that the Turing test doesn’t guarantee a system actually understands its subject matter. Although the probability of humans being fooled plummets as the test goes on, it never completely reaches zero. The test depends on human minds to assess whether there is more there than a thin facade. But what exactly is being assessed?

I just finished reading Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. Mitchell recounts how, in recent years, deep learning networks have broken a lot of new ground. Such networks have demonstrated an uncanny ability to recognize items in photographs, including faces, have learned to play old Atari games at superhuman levels, and have even made progress in driving cars, among many other things.

But do these systems have any understanding of the actual subject matter they’re dealing with? Or do they have what Daniel Dennett calls “competence without comprehension”?