
SpaceX continued expanding the Starlink constellation’s direct-to-cell capabilities, sending 13 more direct-to-cell satellites along with seven standard Starlink satellites into orbit aboard a Falcon 9 rocket launched early Saturday from Vandenberg Space Force Base.

The 20 new Starlink satellites, including a baker’s dozen with direct-to-cell capabilities, lifted off at 5:58 a.m. Saturday, though a stubborn marine layer obscured the view for those hoping to watch.

After completing its tasks, the first-stage booster, making its 21st flight, landed on the Of Course I Still Love You droneship positioned in the Pacific Ocean hundreds of miles south of the base. Saturday’s mission marked the 301st successful Falcon landing.

I am truly honored and humbled to share this interview with Elon Musk, recorded on the same day as the epic and historic launch of Starship IFT-4.
#starship #spacex #elonmusk

Hi! I am now FULL TIME Ellie in SPACE!
My channel started as a way to keep people up to date on the world of SpaceX’s Starlink, the satellite internet service. The channel has grown to include the broader Elon Musk universe.


JULIEN CROCKETT: Let’s start with the tension at the heart of AI: we understand and talk about AI systems as if they are both mere tools and intelligent actors that might one day come alive. Alison, you’ve argued that the currently popular AI systems, LLMs, are neither intelligent nor dumb—that those are the wrong categories by which to understand them. Rather, we should think of them as cultural technologies, like the printing press or the internet. Why is a “cultural technology” a better framework for understanding LLMs?

Awkward.


“The prevalence and harms of online misinformation is a perennial concern for internet platforms, institutions and society at large,” reads the paper. “The rise of generative AI-based tools, which provide widely-accessible methods for synthesizing realistic audio, images, video and human-like text, have amplified these concerns.”

The study, first caught by former Googler Alexios Mantzarlis and flagged in the newsletter Faked Up, focused on media-based misinformation, meaning bad information propagated through visual media such as images and videos. To narrow the scope of the research, the authors limited themselves to media covered by fact checks carrying ClaimReview markup, ultimately examining a total of 135,838 fact-check-tagged pieces of online media.
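For context, ClaimReview is the schema.org markup that fact-checking outlets attach to their articles, which is what makes it possible to collect fact-check-tagged media at scale. A minimal sketch of such a tag, written here as a Python dict with made-up values (the claim text, rating, and URL are illustrative, not from the study), might look like this:

```python
import json

# Hypothetical ClaimReview record: field names follow the schema.org
# ClaimReview type; all values here are invented for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "A viral image shows event X",   # the claim being checked
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "alternateName": "False",                     # the fact-checker's verdict
    },
    "datePublished": "2024-01-10",
    "url": "https://example.org/fact-check/viral-image-x",
}

# Serialize as JSON-LD, the form in which the markup is embedded in web pages.
print(json.dumps(claim_review, indent=2))
```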

As the researchers write in the paper, AI is effective for producing realistic synthetic content quickly and easily, at “a scale previously impossible without an enormous amount of manual labor.” The availability of AI tools, per the researchers’ findings, has led to hockey-stick growth in AI-generated media online since 2023. Meanwhile, other types of content manipulation declined in popularity, though “the rise” of AI media “did not produce a bump in the overall proportion” of image-dependent misinformation claims.
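To make that measurement concrete, here is a rough sketch (not the researchers’ code, and with an invented table schema) of how one might compute the yearly share of AI-generated items among fact-check-tagged image and video misinformation:

```python
import pandas as pd

# Hypothetical table: one row per fact-check-tagged media item.
# Column names ("date", "media_type", "manipulation") are assumptions.
claims = pd.DataFrame({
    "date": pd.to_datetime(["2022-06-01", "2023-03-15", "2023-09-20", "2024-01-10"]),
    "media_type": ["image", "image", "video", "image"],
    "manipulation": ["photoshop", "ai_generated", "ai_generated", "context"],
})

# Keep only media-based claims (images and videos).
media = claims[claims["media_type"].isin(["image", "video"])].copy()
media["year"] = media["date"].dt.year

# Share of AI-generated items among all media-based misinformation, per year.
share_ai = (
    media.assign(is_ai=media["manipulation"].eq("ai_generated"))
         .groupby("year")["is_ai"]
         .mean()
)
print(share_ai)
```

A rising `share_ai` over time would correspond to the hockey-stick growth the researchers describe, even while the overall volume of media-based claims stays roughly flat, matching their observation that AI did not bump the total proportion of image-dependent misinformation.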

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.


These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized internet access and coding, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that allow us to harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI features into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value, which is why our AI offerings are built on trust, security and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) through consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.