
Identifying new sources that produce electrons faster could help to advance the many imaging techniques that rely on electrons. In a recent paper published in Physical Review Letters, a team of researchers at Eindhoven University of Technology demonstrated the scattering of subpicosecond electron bunches from an ultracold electron source.

“Our research group is working to develop the next generation of ultrafast electron sources to push imaging techniques such as ultrafast electron diffraction to the next level,” Tim de Raadt, one of the researchers who carried out the study, told Phys.org.

“The idea of using laser-cooled ultracold gas clouds as an electron source to improve the state of the art in brightness was first introduced in a paper published in 2005. Since then, research efforts have produced multiple versions of such an ultracold electron source. The most recent one (used in this work) focuses on making the source compact, easy to align and operate, and more stable, as described in another past paper that also studied the transverse electron beam properties.”

A new supernova has turned into the most watched phenomenon in the May night sky. The close proximity of the stellar explosion and the vast amount of observations gathered since the discovery promise to advance astronomers’ understanding of stellar evolution and could even lead to major advances in supernova forecasting.

Supernovas are powerful explosions in which very massive stars, at least eight times the mass of our sun, die after exhausting the nuclear fuel in their cores. The discovery of this latest exploding star, officially designated SN 2023ixf, was a serendipitous one.

Foresight Existential Hope Group. Program & apply to join: https://foresight.org/existential-hope/

Kevin Kelly, Wired Magazine | Pioneering Visions of a High-Tech Future.

In this episode of Foresight’s Existential Hope Podcast, our special guest is Kevin Kelly, an influential figure in technology, culture, and optimism for the future. As the founding executive editor of Wired and former editor of Whole Earth Review, Kelly’s ideas and perspectives have shaped generations of thinkers and technologists.

Join our hosts Allison Duettmann and Beatrice Erkers as they delve into Kelly’s philosophies and experiences, from witnessing technological shifts over the decades to fostering optimism about the future. Kelly shares details about his latest book, a collection of optimistic advice in tweet form, and talks about his current project envisioning a desirable high-tech future 100 years from now.

He also discusses the transformative power of the internet as an accelerant for learning, the underestimated long-term effects of being online, and the culture-changing potential of platforms like YouTube. If you’re interested in the intersection of technology, optimism, and the future, this episode is a must-listen.


Meta had to sell GIPHY after UK regulator blocked the deal last year.

Shutterstock announced Tuesday that it will buy animated-image platform GIPHY from Meta for $53 million in cash. The deal is a significant loss for Meta, which had reportedly paid around $400 million to acquire the New York-based GIF search engine in 2020.

The sale comes a year after the UK’s Competition and Markets Authority challenged the acquisition and ordered Meta to divest GIPHY over competition concerns.



“Alexa, play back that dream I had about Kirsten last week.” That’s a command that may not be too far off in the future, as researchers close in on technology that can tap into our minds and retrieve the imagery of our thoughts.

Researchers at the National University of Singapore and the Chinese University of Hong Kong reported last week that they have developed a process capable of generating video from brain activity. The research is published on the arXiv preprint server.

Using functional magnetic resonance imaging (fMRI), researchers Jiaxin Qing, Zijiao Chen and Juan Helen Zhou coupled the data retrieved through brain imaging with the deep learning model Stable Diffusion to create smooth, high-quality videos.
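To give a flavor of how such pipelines tend to work, here is an illustrative sketch of one common building block: learning a linear (ridge-regression) map from fMRI voxel activity to the embedding space that conditions a generative model. Everything below — the dimensions, the synthetic data, and the ridge formulation — is a placeholder assumption for illustration, not the authors’ actual method or data.

```python
import numpy as np

# Illustrative only: brain-to-video systems often learn a mapping from
# fMRI voxel responses to the conditioning embeddings of a generator
# (e.g. Stable Diffusion). We simulate that mapping with ridge regression
# on synthetic data; no real fMRI data or model weights are involved.

rng = np.random.default_rng(0)
n_samples, n_voxels, emb_dim = 200, 1000, 64  # hypothetical sizes

# Synthetic "ground-truth" mapping and noisy fMRI responses.
W_true = rng.normal(size=(n_voxels, emb_dim))
X = rng.normal(size=(n_samples, n_voxels))                     # fMRI features
Y = X @ W_true + 0.1 * rng.normal(size=(n_samples, emb_dim))   # target embeddings

# Ridge regression, closed form: W = (X^T X + lam*I)^-1 X^T Y
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Predicted embeddings would then condition the video generator;
# here we just check how well the fit reconstructs the targets.
Y_hat = X @ W
r = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(f"embedding shape: {Y_hat.shape}, fit correlation: {r:.3f}")
```

In a real system the targets would come from a pretrained encoder and the predicted embeddings would be fed to the diffusion model’s conditioning input; the linear decoder is just the simplest plausible bridge between the two.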

Oh hey, AI enthusiasts and futurism fans! I’d love to share with you an article I recently wrote on my Substack. It takes you on a journey from the ancient Greek device known as the Antikythera mechanism, all the way to the generative AI explosion of 2023, tracing the history of computation and AI.

For more than 15 years, I’ve been writing about technology, society, and the future, aiming to provide thoughtful analysis and critical thinking on the latest trends and their implications. I’m enthusiastic about starting a meaningful conversation with you about the changing world and its intersection with technology.

