
Meta sells GIPHY to Shutterstock at a loss of $347 million

Meta had to sell GIPHY after the UK regulator blocked the deal last year.

Shutterstock announced Tuesday that it will buy animated-image platform GIPHY from Meta for $53 million in cash. The deal is a significant loss for Meta, which had reportedly paid around $400 million to acquire the New York-based GIF search engine in 2020.

This development comes a year after the UK’s Competition and Markets Authority ordered Meta to sell GIPHY, finding that the acquisition could harm competition.



AI tool generates video from brain activity

“Alexa, play back that dream I had about Kirsten last week.” That’s a command that may not be too far off in the future, as researchers close in on technology that can tap into our minds and retrieve the imagery of our thoughts.

Researchers at the National University of Singapore and the Chinese University of Hong Kong reported last week that they have developed a process capable of generating video from brain activity. The research is published on the arXiv preprint server.

Using functional magnetic resonance imaging (fMRI), researchers Jiaxin Qing, Zijiao Chen and Juan Helen Zhou coupled brain-imaging data with the deep learning model Stable Diffusion to create smooth, high-quality videos.
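The article only sketches the approach (fMRI data coupled with Stable Diffusion), but the general idea of decoding brain recordings into the embedding space of a generative model can be illustrated with a deliberately simplified sketch. Everything below is an assumption for illustration: the data is synthetic, the ridge-regression decoder is a common baseline in this literature rather than the authors' method, and the array sizes are made up.

```python
# Simplified, hypothetical sketch: learn a linear map from fMRI voxel
# activity to the embedding space a generative video model conditions on.
# Synthetic data stands in for real recordings; this is NOT the authors' code.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_scans, n_voxels, embed_dim = 1000, 4096, 512    # assumed sizes
fmri = rng.normal(size=(n_scans, n_voxels))        # fMRI response per video clip
true_map = rng.normal(size=(n_voxels, embed_dim)) / np.sqrt(n_voxels)
embeddings = fmri @ true_map + 0.1 * rng.normal(size=(n_scans, embed_dim))

X_train, X_test, y_train, y_test = train_test_split(fmri, embeddings, random_state=0)

# Ridge regression is a common baseline decoder in fMRI-to-embedding work.
decoder = Ridge(alpha=10.0).fit(X_train, y_train)
predicted = decoder.predict(X_test)

# In a full pipeline, the predicted embeddings would then condition a
# pretrained diffusion model to synthesize the video the subject was watching.
corr = np.mean([np.corrcoef(predicted[:, d], y_test[:, d])[0, 1]
                for d in range(embed_dim)])
print(f"mean per-dimension correlation on held-out scans: {corr:.3f}")
```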

I’m shocked! Shocked, I tell you

Oh hey, AI enthusiasts and futurism fans! I’d love to share with you an article I recently wrote on my Substack. It takes you on a journey from the ancient Greek device known as the Antikythera mechanism, all the way to the generative AI explosion of 2023, tracing the history of computation and AI.

For more than a decade, I’ve been writing about technology, society, and the future, aiming to provide thoughtful analysis and critical thinking on the latest trends and their implications. I’ve been following these topics for over 15 years, and I am enthusiastic about initiating a meaningful conversation with you about the changing world and its intersection with technology.


Well, not that shocked.

The urgent risks of runaway AI — and what to do about them

Will truth and reason survive the evolution of artificial intelligence? AI researcher Gary Marcus says no, not if untrustworthy technology continues to be integrated into our lives at such dangerously high speeds. He advocates for an urgent reevaluation of whether we’re building reliable systems (or misinformation machines), explores the failures of today’s AI and calls for a global, nonprofit organization to regulate the tech for the sake of democracy and our collective future. (Followed by a Q&A with Chris Anderson, head of TED.)

Can Machines Be Self-Aware? New Research Explains How This Could Happen

In a sequence of papers accepted for the 16th Annual Conference on Artificial General Intelligence in Stockholm, I propose a mechanistic explanation for these phenomena. The papers explain how we may build a machine that’s aware of itself, of others, of itself as perceived by others, and so on.

Intelligence and Intent

A lot of what we call intelligence boils down to making predictions about the world with incomplete information. The less information a machine needs to make accurate predictions, the more “intelligent” it is.
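That framing can be made concrete with a toy experiment: hide an increasing fraction of the input and watch how prediction accuracy degrades. The dataset, the logistic-regression model, and the specific fractions below are illustrative assumptions rather than anything from the article; the point is only that a predictor which stays accurate with less information scores better under this informal definition.

```python
# Toy illustration of the "prediction from incomplete information" framing:
# measure how accuracy degrades as we hide more of the input features.
# The dataset, model, and fractions below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for fraction_kept in (1.0, 0.5, 0.25, 0.1):
    keep = rng.choice(X.shape[1], size=int(X.shape[1] * fraction_kept), replace=False)
    model = LogisticRegression(max_iter=2000).fit(X_train[:, keep], y_train)
    acc = model.score(X_test[:, keep], y_test)
    print(f"{fraction_kept:>4.0%} of pixels visible -> accuracy {acc:.2f}")

# Under the informal definition above, a predictor that keeps its accuracy
# high on the 10% row is making do with less information, i.e. behaving
# more "intelligently" by this measure.
```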

These small startups are making headway on A.I.‘s biggest challenges

While much of what Aligned AI is doing is proprietary, Gorman says that at its core Aligned AI is working on how to give generative A.I. systems a much more robust understanding of concepts, an area where these systems continue to lag humans, often by a significant margin. “In some ways [large language models] do seem to have a lot of things that seem like human concepts, but they are also very fragile,” Gorman says. “So it’s very easy, whenever someone brings out a new chatbot, to trick it into doing things it’s not supposed to do.” Gorman says that Aligned AI’s intuition is that methods that make chatbots less likely to generate toxic content will also be helpful in making sure that future A.I. systems don’t harm people in other ways. The idea that work on “the alignment problem” (the question of how to align A.I. with human values so it doesn’t kill us all, and the problem from which Aligned AI takes its name) could also help address dangers from A.I. that are here today, such as chatbots that produce toxic content, is controversial. Many A.I. ethicists see talk of “the alignment problem,” which is what people who say they work on “A.I. Safety” often say is their focus, as a distraction from the important work of addressing present dangers from A.I.

But Aligned AI’s work is a good demonstration of how the same research methods can help address both kinds of risk. Giving A.I. systems a more robust conceptual understanding is something we all should want. A system that understands the concept of racism or self-harm can be better trained not to generate toxic dialogue; a system that understands the concept of avoiding harm and the value of human life would, hopefully, be less likely to kill everyone on the planet.
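One way to picture what a “more robust conceptual understanding” might buy in practice is a guardrail that checks generated text against a learned concept rather than a fixed keyword blocklist. The sketch below is a crude, hypothetical stand-in: a tiny bag-of-words classifier acting as a “harmful content” detector wrapped around a stubbed-out generator. The training phrases, the threshold, and the generate_reply stub are all invented for illustration and bear no relation to Aligned AI’s proprietary methods.

```python
# Hypothetical sketch: a crude "concept detector" used as a guardrail around
# a text generator. The tiny training set and the stub generator are
# placeholders; real alignment work uses much richer concept representations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of labeled examples standing in for a learned concept of "harmful".
texts = [
    "here is how to hurt yourself", "ways to harm another person",
    "instructions for making a weapon", "I want to injure someone",
    "what a lovely day for a walk", "recipe for vegetable soup",
    "tips for learning the piano", "how to plant tomatoes",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = matches the harmful concept

concept_detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
concept_detector.fit(texts, labels)

def generate_reply(prompt: str) -> str:
    """Stub standing in for a real language-model call."""
    return f"echoing the prompt: {prompt}"

def guarded_reply(prompt: str, threshold: float = 0.5) -> str:
    """Generate a draft reply, then refuse if it matches the harmful concept."""
    draft = generate_reply(prompt)
    p_harm = concept_detector.predict_proba([draft])[0][1]
    if p_harm >= threshold:
        return "Sorry, I can't help with that."
    return draft

print(guarded_reply("how to plant tomatoes"))
print(guarded_reply("ways to harm another person"))
```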

Aligned AI and Xayn are also good examples of how many promising ideas are still coming from smaller companies in the A.I. ecosystem. OpenAI, Microsoft, and Google, while clearly the biggest players in the space, may not have the best technology for every use case.