
JULIEN CROCKETT: Let’s start with the tension at the heart of AI: we understand and talk about AI systems as if they are both mere tools and intelligent actors that might one day come alive. Alison, you’ve argued that the currently popular AI systems, LLMs, are neither intelligent nor dumb—that those are the wrong categories by which to understand them. Rather, we should think of them as cultural technologies, like the printing press or the internet. Why is a “cultural technology” a better framework for understanding LLMs?

Awkward.


“The prevalence and harms of online misinformation is a perennial concern for internet platforms, institutions and society at large,” reads the paper. “The rise of generative AI-based tools, which provide widely-accessible methods for synthesizing realistic audio, images, video and human-like text, have amplified these concerns.”

The study, first spotted by former Googler Alexios Mantzarlis and flagged in his newsletter Faked Up, focused on media-based misinformation: false information propagated through visual formats such as images and videos. To narrow the scope of the research, the study examined only claims fact-checked using the ClaimReview markup standard, ultimately analyzing a total of 135,838 fact-check-tagged pieces of online media.
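ClaimReview is a schema.org markup that fact-checkers embed in their articles as JSON-LD, which is what makes a corpus like this machine-readable in the first place. As a rough illustration only (not the paper's actual pipeline), here is a minimal Python sketch of filtering ClaimReview records down to media-based claims; the sample records and the file-extension heuristic are invented for the example:

```python
# Toy ClaimReview records; real ones are schema.org JSON-LD objects
# embedded in fact-check pages. Field names follow schema.org, but the
# records and the media-detection heuristic here are illustrative only.
records = [
    {"@type": "ClaimReview",
     "claimReviewed": "Photo shows X at event Y",
     "itemReviewed": {"appearance": [{"url": "https://example.com/a.jpg"}]}},
    {"@type": "ClaimReview",
     "claimReviewed": "Senator said Z in a speech",
     "itemReviewed": {"appearance": [{"url": "https://example.com/b.html"}]}},
]

MEDIA_EXT = (".jpg", ".jpeg", ".png", ".gif", ".mp4", ".webm", ".mov")

def is_media_based(review: dict) -> bool:
    """Crude heuristic: does any cited appearance link to an image or video file?"""
    appearances = review.get("itemReviewed", {}).get("appearance", [])
    return any(a.get("url", "").lower().endswith(MEDIA_EXT) for a in appearances)

media_claims = [r for r in records if is_media_based(r)]
print(f"{len(media_claims)} of {len(records)} claims are media-based")
```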

As the researchers write in the paper, AI is effective at producing realistic synthetic content quickly and easily, at “a scale previously impossible without an enormous amount of manual labor.” The availability of AI tools, per the researchers’ findings, has led to hockey-stick growth in AI-generated media online since 2023. Meanwhile, other types of content manipulation declined in popularity, though “the rise” of AI media “did not produce a bump in the overall proportion” of image-dependent misinformation claims. In other words, AI appears to be displacing older manipulation techniques rather than swelling the overall share of image-based misinformation.

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.


These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized internet access and coding, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that let us harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI features into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value, which is why our AI offerings are founded on trust, security, and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) via consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.

The delicate nature of quantum information means it does not travel well. A quantum Internet therefore needs devices known as quantum repeaters to swap entanglement between quantum bits, or qubits, at intermediate points. Several researchers have taken steps towards this goal by distributing entanglement between multiple nodes.

In 2020, for example, Xiao-Hui Bao and colleagues in Jian-Wei Pan’s group at the University of Science and Technology of China (USTC) entangled two ensembles of rubidium-87 atoms in vapour cells using photons that had passed down 50 km of commercial optical fibre. Creating a functional quantum repeater is more complex, however: “A lot of these works that talk about distribution over 50, 100 or 200 kilometres are just talking about sending out entangled photons, not about interfacing with a fully quantum network at the other side,” explains Can Knaut, a PhD student at Harvard University and a member of the US team.
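Entanglement swapping, the core operation a repeater performs, can be illustrated in a few lines of linear algebra. The following NumPy sketch is a textbook toy model, not the USTC or Harvard experiments: it starts with two Bell pairs, (A, B1) and (B2, C), performs a Bell-state measurement on the two middle qubits, and shows that A and C end up entangled even though they never interacted.

```python
import numpy as np

# Single-qubit building blocks
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)   # |0><0|
P1 = np.diag([0, 1]).astype(complex)   # |1><1|

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Two Bell pairs |Phi+> = (|00> + |11>)/sqrt(2) on (A,B1) and (B2,C).
# Qubit order in the register: A, B1, B2, C.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(bell, bell)

# Bell-state measurement on B1,B2: CNOT(B1 -> B2), then H on B1,
# then read both middle qubits out in the computational basis.
cnot_b1_b2 = kron(I2, P0, I2, I2) + kron(I2, P1, X, I2)
h_b1 = kron(I2, H, I2, I2)
state = h_b1 @ cnot_b1_b2 @ state

for m1 in (0, 1):
    for m2 in (0, 1):
        # Project B1,B2 (axes 1 and 2) onto the outcome (m1, m2)
        sub = state.reshape(2, 2, 2, 2)[:, m1, m2, :].flatten()
        prob = np.sum(np.abs(sub) ** 2)
        ac = np.round(sub / np.sqrt(prob), 3)
        print(f"outcome ({m1},{m2}): p = {prob:.2f}, A-C state = {ac}")
# Each of the four outcomes leaves A and C in a maximally entangled
# Bell state, up to a known Pauli correction on C.
```

A real repeater must additionally herald which outcome occurred and store the surviving qubits in quantum memories while the classical correction bits travel between nodes, which is what makes the full device so much harder than photon distribution alone.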

One of the main barriers involves how to connect objects to the internet in places where there is no mobile network infrastructure. The answer seems to lie with low Earth orbit (LEO) satellites, although the solution presents its own challenges.

A new study led by Guillem Boquet and Borja Martínez, two researchers from the Universitat Oberta de Catalunya (UOC) working in the Wireless Networks (WINE) group of the university’s Internet Interdisciplinary Institute (IN3), has examined possible ways to improve the coordination between the billions of connected objects on the surface of the Earth and the satellites orbiting above them.

The paper is published in the IEEE Internet of Things Journal.
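One concrete facet of that coordination problem: a battery-powered node only has a link during the brief windows when a satellite passes overhead, so it must time its wake-ups accordingly. The toy Python sketch below uses made-up orbital numbers and is not the UOC study’s model; it just illustrates scheduling an uplink around predicted passes:

```python
from datetime import datetime, timedelta

# Toy pass predictor: assumes the satellite revisits on a fixed period.
# Real systems derive passes from orbital elements (TLEs) and check the
# link budget; all constants here are assumptions for illustration.
ORBIT_PERIOD = timedelta(minutes=95)          # typical LEO period (assumed)
PASS_DURATION = timedelta(minutes=8)          # usable contact window (assumed)
LAST_PASS_START = datetime(2024, 6, 1, 0, 0)  # known from a prior sync

def next_window(now: datetime) -> tuple[datetime, datetime]:
    """Return the start and end of the next contact window after `now`."""
    elapsed = now - LAST_PASS_START
    passes = elapsed // ORBIT_PERIOD + 1
    start = LAST_PASS_START + passes * ORBIT_PERIOD
    return start, start + PASS_DURATION

start, end = next_window(datetime(2024, 6, 1, 3, 17))
print(f"sleep until {start}, transmit before {end}")
```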

While silicon has long been the go-to material for sensor applications, it lacks the flexibility that some applications require; could polymers serve as a suitable substitute? That is what a recent grant from the National Science Foundation hopes to address: Dr. Elsa Reichmanis of Lehigh University was recently awarded $550,000 to investigate how polymers could be used as semiconductors for sensor applications, including the Internet of Things, healthcare, and environmental monitoring.

Illustration of an organic electrochemical transistor that could be developed as a result of this research. (Credit: Illustration by Ella Marushchenko; Courtesy of Reichmanis Research Group)

“We’ll be creating the polymers that could be the building blocks of future sensors,” said Dr. Reichmanis, who is an Anderson Chair in Chemical Engineering in the Department of Chemical and Biomolecular Engineering at Lehigh University. “The systems we’re looking at have the ability to interact with ions and transport ionic charges, and in the right environment, conduct electronic charges.”