
AI For Defense Nuclear Nonproliferation — Angela Sheffield, Senior Program Manager, National Nuclear Security Administration, U.S. Department of Energy.


Angela Sheffield is a graduate student and Space Industry fellow at the National Defense University’s Eisenhower School. She is on detail from the National Nuclear Security Administration (NNSA), where she serves as the Senior Program Manager for AI for Defense Nuclear Nonproliferation Research and Development.

The National Nuclear Security Administration (https://www.energy.gov/nnsa/national-nuclear-security-administration) is a United States federal agency within the U.S. Department of Energy. Together with its Office of Defense Nuclear Nonproliferation, it is responsible for safeguarding national security through the military application of nuclear science.

Marketing and the need for data rules

Legislators and decision-makers worldwide have also been active in regulating data, although in many places it is almost impossible to keep pace with the rate of change. Legitimate exploitation of data requires rules and regulations, because growth always increases the potential for misuse. The task for technology companies is to build data pipelines that ensure the trustworthiness and security of AI and analytics.

Data is the new currency for businesses, and its overwhelming growth rate can be intimidating. The key challenge is to harness data in a way that benefits both marketers and the consumers who produce it, and to manage “big data” in an ethical, consumer-friendly way. Fortunately, there are many good services for analyzing data, effective regulations to protect consumers’ rights, and a never-ending supply of information at hand for making better products and services. The key for businesses is to embrace these technologies so they can avoid sinking in their own data.

**Who will be liable** for harmful speech generated by large language models? As advanced AIs such as OpenAI’s GPT-3 are being cheered for impressive breakthroughs in natural language processing and generation, and all sorts of (productive) applications for the tech are envisaged, from slicker copywriting to more capable customer service chatbots, the risks of such powerful text-generating tools inadvertently automating abuse and spreading smears can’t be ignored. Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.

Indeed, OpenAI is concerned enough about the risks of its models going “totally off the rails,” as its documentation puts it at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), to offer a free content filter that “aims to detect generated text that could be sensitive or unsafe coming from the API,” and to recommend that users don’t return any generated text that the filter deems “unsafe.” (To be clear, its documentation defines “unsafe” to mean “the text contains profane language, prejudiced or hateful language, something that could be NSFW or text that portrays certain groups/people in a harmful manner.”)
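The pattern OpenAI’s documentation recommends is simple: screen each model output and only return text the filter does not deem “unsafe.” As a rough sketch of that gating pattern, here is a minimal Python example; it assumes the `openai` package is installed and an API key is configured, and it uses OpenAI’s moderation endpoint (the successor to the free content filter described above) as a stand-in, so the thresholding logic is illustrative rather than OpenAI’s prescribed pipeline.

```python
# Minimal sketch: gate generated text behind a safety check before returning it.
# Assumptions: `openai` Python package installed, OPENAI_API_KEY set in the
# environment, and the moderation endpoint used in place of the original
# GPT-3 content filter.
from openai import OpenAI

client = OpenAI()

def is_safe_to_return(generated_text: str) -> bool:
    """Return True only if the moderation check does not flag the text."""
    result = client.moderations.create(input=generated_text).results[0]
    # `flagged` is True when the text matches any unsafe category
    # (hate, harassment, self-harm, sexual content, violence, ...).
    return not result.flagged

completion = "...text produced by the language model..."
if is_safe_to_return(completion):
    print(completion)
else:
    print("[output withheld by content filter]")
```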

But, given the novel nature of the technology, there are no clear legal requirements that content filters must be applied. So OpenAI is acting either out of concern to avoid its models causing generative harm to people, or out of reputational concern, because if the technology becomes associated with instant toxicity, that could derail its development.

Virusight Diagnostic, an Israeli company that combines artificial intelligence software with spectral technology, announced the results of a study which found that its Pathogens Diagnostic device detects COVID-19 with 96.3 percent accuracy compared with the standard RT-PCR test.

The study was conducted by researchers from the Department of Science and Technology at the University of Sannio in Benevento, Italy, together with partner company TechnoGenetics S.p.A.


The Virusight solution was tested on 550 saliva samples and found to be safe and effective.

Recent technological advances, such as the development of increasingly sophisticated machine learning algorithms and robots, have sparked much debate about artificial intelligence (AI) and artificial consciousness. While many of the tools created to date have achieved remarkable results, there have been many discussions about what differentiates them from humans.

More specifically, computer scientists and neuroscientists have been pondering the difference between “intelligence” and “consciousness,” wondering whether machines will ever be able to attain the latter. Amar Singh, Assistant Professor at Banaras Hindu University, recently published a paper in a special issue of the Springer journal AI & Society that explores these concepts by drawing parallels with the fantasy film “Being John Malkovich.”

“Being John Malkovich” is a 1999 film directed by Spike Jonze and featuring John Cusack, Cameron Diaz, and other famous Hollywood stars. The film tells the story of a puppeteer who discovers a portal through which he can access the mind of the movie star John Malkovich, while also altering his being.