
Another good use for AI: fighting disinformation.


About 60% of adults in the US who get their news through social media have, largely unknowingly, shared false information, according to a poll by the Pew Research Center. The ease with which disinformation is spread and the severity of the consequences it brings — from election hacking to character assassination — make it an issue of grave concern for us all.

One of the best ways to combat the spread of fake news on the internet is to understand where the false information was started and how it was disseminated. And that’s exactly what Camille Francois, the chief innovation officer at Graphika, is doing. She’s dedicated to bringing to light disinformation campaigns before they take hold.

Francois and her team are employing machine learning to map out online communities and better understand how information flows through networks. It’s a bold and necessary crusade as troll farms, deep fakes, and false information bombard the typical internet user every single day.
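One simple way to picture this kind of community mapping: treat accounts as nodes in a graph, with edges for shares or retweets, then group connected accounts together. The sketch below is a toy illustration of that idea, not Graphika's actual method — the account names are made up, and real systems use far richer signals than plain connectivity.

```python
# Toy sketch of "mapping online communities": group accounts that are
# linked by share/retweet edges into connected components using BFS.
from collections import deque

def communities(edges):
    """Group accounts into connected components of the share graph."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        queue, group = deque([node]), set()
        while queue:
            cur = queue.popleft()
            if cur in seen:
                continue
            seen.add(cur)
            group.add(cur)
            queue.extend(graph[cur] - seen)  # enqueue unvisited neighbors
        groups.append(group)
    return groups

# Hypothetical share events: who shared whose post.
shares = [("alice", "bob"), ("bob", "carol"), ("dave", "erin")]
print(communities(shares))
```

In practice, researchers layer clustering, content analysis, and temporal patterns on top of a graph like this to spot coordinated campaigns before they spread.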

Francois says this work is "two parts technology, one part sociology. The techniques are always evolving, and we have to stay one step ahead." We sit down with Francois for an in-depth discussion on how the tech works and what it means for the dissemination of information across the internet.

Last week was an important moment in the debate about AI, with President Biden issuing an executive order and the UK’s AI Safety Summit convening world leaders.

Much of the buzz around these events made it sound like AI presents us with a binary choice: unbridled optimism or existential fear. But there was also a third path available: a nuanced, practical perspective that examines the real risks and benefits of AI.

There have been people promoting this third perspective for years — although the GPT-fueled headlines of the past 12 months have often looked past them. They are foundations, think tanks, researchers, and activists (including a number of Mozilla fellows and founders), plus the policymakers behind efforts like last year's Blueprint for an AI Bill of Rights.

Scientists at Oak Ridge National Laboratory have used their expertise in quantum biology, artificial intelligence and bioengineering to improve how CRISPR Cas9 genome editing tools work on organisms like microbes that can be modified to produce renewable fuels and chemicals.

CRISPR is a powerful tool for bioengineering, used to modify an organism's genome to improve its performance or to correct mutations. The CRISPR Cas9 tool relies on a single, unique guide RNA that directs the Cas9 enzyme to bind with and cleave the corresponding targeted site in the genome.

Existing models that computationally predict effective guide RNAs for CRISPR tools were built on data from only a few model species, and they perform weakly and inconsistently when applied to microbes.
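Models like these typically start by turning the guide RNA's nucleotide sequence into numeric features a learning algorithm can consume. A common first step is one-hot encoding, sketched below; the example sequence is hypothetical, and this is an illustration of the general featurization idea, not the Oak Ridge team's pipeline.

```python
# Minimal sketch: one-hot encode a guide RNA sequence, a typical first
# step for models that predict CRISPR guide efficiency. Each position
# becomes a 4-element binary vector over A, C, G, T.

NUCLEOTIDES = "ACGT"

def one_hot(seq: str) -> list[list[int]]:
    """Encode a DNA sequence as a (position x nucleotide) binary matrix."""
    return [[1 if base == nt else 0 for nt in NUCLEOTIDES]
            for base in seq.upper()]

guide = "GACGTTACGT"  # hypothetical 10-nt guide (real Cas9 guides are ~20 nt)
matrix = one_hot(guide)
print(len(matrix), len(matrix[0]))  # 10 4
```

A predictor is then trained to map these matrices to measured editing efficiencies; the weakness noted above comes from training such predictors on only a handful of model species.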

It’s a stormy holiday weekend, and you’ve just received the last notification you want in the busiest travel week of the year: the first leg of your flight is significantly delayed.

You might expect this means you’ll be sitting on hold with airline customer service for half an hour. But this time, the process looks a little different: You have a brief text exchange with the airline’s AI chatbot, which quickly assesses your situation and places you in a priority queue. Shortly after, a human agent takes over, confirms the details, and gets you rebooked on an earlier flight so you can make your connection. You’ll be home in time to enjoy mom’s pot roast.

Artificial intelligence already wears multiple hats in the workplace, whether it's writing ad copy, handling customer support requests, or filtering job applications. As the technology's capabilities continue to grow, the notion of corporations managed or owned by AI becomes less far-fetched. The legal framework already exists to allow "zero-member LLCs."

How would an AI-operated LLC be treated under the law, and how would an AI respond to legal consequences as the owner/manager of an LLC? These questions speak to an unprecedented challenge facing lawmakers: the regulation of a nonhuman entity with the same (or better) cognitive capabilities as humans — one that, if left unregulated or poorly regulated, could slip beyond human control.

“Artificial intelligence and interspecific law,” an article by Daniel Gervais of Vanderbilt Law School and John Nay of the Center for Legal Informatics at Stanford University (and a visiting scholar at Vanderbilt), argues for more AI research on the legal compliance of nonhumans with human-level intellectual capacity.

Data assimilation is the combination of the latest observations with a short-range forecast to obtain the best possible estimate of the current state of the Earth system. Machine learning can contribute to it by optimizing the use of satellite observations.

Obtaining the best possible estimate of the current state of the Earth system, known as the analysis, is extremely important for weather forecasting. That’s because the analysis serves as the initial conditions of forecasts.

To use the latest observations, we rely on a vast system of instruments that regularly measure aspects of the atmosphere and other components of the Earth system. Over the last few decades, observations have become increasingly important.

Machine learning (ML) is now mission critical in every industry. Business leaders are urging their technical teams to accelerate ML adoption across the enterprise to fuel innovation and long-term growth. But there is a disconnect between business leaders’ expectations for wide-scale ML deployment and the reality of what engineers and data scientists can actually build and deliver on time and at scale.

In a Forrester study launched today and commissioned by Capital One, the majority of business leaders expressed excitement at deploying ML across the enterprise, but data scientist team members said they didn’t yet have all the necessary tools to develop ML solutions at scale. Business leaders would love to leverage ML as a plug-and-play opportunity: “just input data into a black box and valuable learnings emerge.” The engineers who wrangle company data to build ML models know it’s far more complex than that. Data may be unstructured or poor quality, and there are compliance, regulatory, and security parameters to meet.

The rise in capabilities of artificial intelligence (AI) systems has led to the view that these systems might soon be conscious. However, we might be underestimating the neurobiological mechanisms underlying human consciousness.

Modern AI systems are capable of many amazing behaviors. For instance, when one uses systems like ChatGPT, the responses are (sometimes) quite human-like and intelligent. When we humans interact with ChatGPT, we consciously perceive the text the model generates, just as you are currently consciously perceiving this text.

The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, working on the basis of clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious.