
With 17.3 million adult Americans affected, depression is one of the most prevalent mental disorders in the country. A gloomy or depressed mood that lasts for two weeks or more is considered major depression.

Depression is distinct from ordinary mood swings and brief emotional reactions to problems in daily life. It can develop into a serious medical condition, particularly when it is recurrent and of moderate to severe intensity. The afflicted individual may suffer greatly and function poorly at work, at school, and within the family. At its worst, depression can lead to suicide.

Since their introduction in the late 1980s to prevent heart attacks and strokes, statins have been hailed as wonder drugs and prescribed to tens of millions of people. However, some research has suggested that the medications may have other benefits as well, particularly for mental health. A recent study investigates the impact of statins on emotional bias, a risk factor for depression. The study appears in Biological Psychiatry, published by Elsevier.

So how can LaMDA provide responses that a human user might perceive as conscious thought or introspection? Ironically, this is due to the corpus of training data used to train LaMDA and the associativity between potential human questions and possible machine responses. It all boils down to probabilities. The question is how those probabilities evolve such that a rational human interrogator can be confused about the functionality of the machine.
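As a toy illustration of that probabilistic picture (and not a description of LaMDA’s actual architecture or training data), a language model can be thought of as sampling each next word in proportion to probabilities learned from its corpus. The hand-written distributions below are stand-ins for what a real model learns over a vast vocabulary:

```python
import random

# Hand-written toy distributions standing in for learned next-word
# probabilities; a real model learns these over a huge vocabulary.
next_word_probs = {
    ("i", "feel"): {"happy": 0.4, "lonely": 0.3, "curious": 0.3},
    ("feel", "lonely"): {"sometimes": 0.6, "when": 0.4},
}

def sample_next(context, probs):
    """Pick the next word in proportion to its probability given the context."""
    dist = probs[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print(sample_next(("i", "feel"), next_word_probs))
```

Nothing in such a pipeline requires introspection or consciousness; a response that reads as self-aware is simply the statistically likely continuation of a self-aware-sounding prompt.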

This brings us to the need for improved “explainability” in AI. Complex artificial neural networks, the basis for a variety of useful AI systems, can compute functions that are beyond the capabilities of a human being. In many cases, a neural network incorporates learning functions that enable adaptation to tasks outside the initial application for which it was developed. However, the reasons why a neural network produces a specific output in response to a given input are often unclear, even indiscernible, leading to criticism of human dependence on machines whose intrinsic logic is not properly understood. The size and scope of training data also introduce bias into these complex AI systems, yielding unexpected, erroneous, or confusing outputs on real-world input data. This has come to be known as the “black box” problem, in which neither a human user nor the AI developer can determine why the system behaves as it does.
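One widely used family of explainability techniques, sketched generically below rather than tied to any system discussed here, measures how sensitive a model’s output is to each input feature. The tiny, randomly initialized network is a stand-in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with random weights, standing in for a trained model.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)      # hidden-layer activations
    return (W2 @ h + b2)[0], h    # scalar output plus activations

def input_saliency(x):
    """Gradient of the output w.r.t. each input: a simple 'explanation'
    of which input features the output is most sensitive to."""
    _, h = forward(x)
    return ((W2 * (1.0 - h ** 2)) @ W1)[0]   # chain rule through tanh

x = np.array([0.5, -1.2, 0.3])
print("output:", forward(x)[0])
print("saliency per input:", input_saliency(x))
```

Saliency scores like these are only a partial answer; they indicate which inputs mattered, not why the network weighs them as it does, which is why explainability remains an active research problem.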

The case of LaMDA’s perceived consciousness appears no different from the case of Tay’s learned racism. Without sufficient scrutiny and understanding of how AI systems are trained, and without sufficient knowledge of why AI systems generate their outputs from the provided input data, it is possible for even an expert user to be uncertain as to why a machine responds as it does. Unless the need for an explanation of AI behavior is embedded throughout the design, development, testing, and deployment of the systems we will depend upon tomorrow, we will continue to be deceived by our inventions, like the blind interrogator in Turing’s game of deception.

Los Alamos National Lab

In early June 1972, the world’s most intense proton beam was delivered through nearly a mile of vacuum tanks at the Los Alamos Neutron Science Center, or LANSCE. As the facility has evolved over five decades, that proton beam is now delivered to five state-of-the-art experimental areas, including the Isotope Production Facility.

The Isotope Production Facility excels in the basic science and applied engineering needed to produce and purify useful isotopes so that they can then be made at scale for the marketplace. In the fight against cancer, recent and current clinical trials are yielding promising results with the short-lived isotope actinium-225, which delivers high-energy radiation to a tumor without greatly affecting the surrounding tissue.

Incorporating established physics into neural network algorithms helps them to uncover new insights into material properties

According to researchers at Duke University, incorporating known physics into machine learning algorithms can help the enigmatic black boxes attain new levels of transparency and insight into the characteristics of materials.

In one of the first efforts of its kind, researchers used a sophisticated machine learning algorithm to identify the characteristics of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields.
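The general recipe behind such physics-informed approaches is to add a penalty for violating known physical laws to the ordinary data-fitting loss, so the network is steered toward physically plausible predictions. The PyTorch sketch below is a simplified, hypothetical illustration; the network size, the placeholder data, and the chosen constraint (transmission bounded between 0 and 1 for a passive structure) are assumptions for exposition, not the Duke team’s actual model:

```python
import torch
import torch.nn as nn

# Hypothetical mapping from 4 geometry parameters to a 100-point spectrum.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 100))

def physics_informed_loss(pred, target):
    data_loss = nn.functional.mse_loss(pred, target)      # fit the data
    # Physics penalty: a passive metamaterial cannot transmit more energy
    # than it receives, so predicted transmission should stay in [0, 1].
    physics_penalty = (pred.clamp(max=0.0).pow(2).mean()
                       + (pred - 1.0).clamp(min=0.0).pow(2).mean())
    return data_loss + 10.0 * physics_penalty

geometry = torch.randn(8, 4)   # a batch of 8 made-up designs
measured = torch.rand(8, 100)  # placeholder "measured" spectra
loss = physics_informed_loss(model(geometry), measured)
loss.backward()                # gradients now reflect the physics term too
print(loss.item())
```

Because the physics term is written in quantities with clear physical meaning, it also gives researchers a handle for interpreting what the network has learned.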

Elon Musk is finally revealing some specifics of his Twitter content moderation policy. Assuming he completes the $44 billion buyout he initiated in April, the tech billionaire and Tesla CEO appears open to a “hands-on” approach, something many didn’t expect, according to an initial report from The Verge.

The comments came during an all-hands meeting Musk held with Twitter’s staff on Thursday, in reply to an employee-submitted question about his intentions for content moderation; Musk said he thinks users should be allowed to “say pretty outrageous things within the law.”

Elon Musk views Twitter as a platform for ‘self-expression’

According to the report, this reflects a distinction between freedom of speech and freedom of reach initially popularized by Renée DiResta, a researcher who studies disinformation. During the meeting, Musk also said he wants Twitter to impose a stricter standard against bots and spam, adding that “it needs to be much more expensive to have a troll army.”