Photoroom announced Tuesday that it has raised $43 million in Series B funding at a valuation of $500 million. London-based early-stage venture firm Balderton Capital and Aglaé Ventures, an investment firm backed by LVMH CEO Bernard Arnault and his family, led the round, with participation from Y Combinator. The new round brings Photoroom’s total funding to $64 million. With more than 150 million app downloads and a subscription-based business model, the Paris-based startup has crossed $50 million in annual recurring revenue, according to CEO Matthieu Rouif.

Photoroom has also garnered the attention of brands like Netflix, Lionsgate, and Warner Bros., which have used the startup’s API to promote films and shows including Barbie and Black Mirror. In October 2023, Photoroom partnered with the Universal Music Group-owned record label Republic Records to create a custom selfie generator for Taylor Swift’s album 1989, which millions of fans used to create an album cover featuring their own faces.

Photoroom first gained traction in 2020, the same year it was accepted into Y Combinator. During the pandemic, entrepreneurs rushed to produce online catalogs of their products, and without access to photographers and professional photo studios, they turned to photo editing tools like Photoroom. Before generative AI tools became mainstream, the startup’s most popular features were a background remover, a “magic retouch” tool that removed unwanted objects from a photo, and a feature that could blur backgrounds in two seconds. When more advanced AI models became available in 2023, the startup expanded its offerings to include fully AI-generated backgrounds, letting users create background visuals from scratch through text prompts, which is now Photoroom’s most commonly used feature.
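As the brand partnerships above suggest, this functionality is exposed through a developer API. For a rough sense of what such an integration looks like, here is a minimal sketch of an HTTP background-removal call; the endpoint URL, auth header, and form fields are placeholder assumptions for illustration, not Photoroom’s documented API:

```python
# Minimal sketch of calling a background-removal API of this kind.
# The endpoint, auth header, and field names are illustrative placeholders;
# consult the provider's documentation for the real interface.
import requests

API_KEY = "your-api-key"  # hypothetical credential


def remove_background(image_path: str, out_path: str) -> None:
    """Upload a product photo and save the returned cutout image."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.example.com/v1/remove-background",  # assumed endpoint
            headers={"x-api-key": API_KEY},
            files={"image_file": f},
            timeout=30,
        )
    resp.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(resp.content)  # assumes the API returns raw image bytes


remove_background("product.jpg", "product_cutout.png")
```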

The tone and tuning of musical instruments have the power to manipulate our appreciation of harmony, new research shows. The findings challenge centuries of Western music theory and encourage greater experimentation with instruments from different cultures.

According to the Ancient Greek philosopher Pythagoras, ‘consonance’—a pleasant-sounding combination of notes—is produced by special relationships between simple numbers such as 3 and 4. More recently, scholars have tried to find psychological explanations, but these ‘integer ratios’ are still credited with making a chord sound beautiful, and deviation from them is thought to make music ‘dissonant,’ unpleasant sounding.
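To make those integer ratios concrete: a perfect fifth is a 3:2 frequency ratio, so an A at 440 Hz sounds consonant with an E at 660 Hz. Modern equal-tempered instruments deviate slightly from these ratios, and the deviation is easy to quantify in cents (hundredths of a semitone). A small illustrative calculation:

```python
import math

# Just-intonation ratios for common consonant intervals
just_ratios = {
    "octave (2:1)": 2 / 1,
    "perfect fifth (3:2)": 3 / 2,
    "perfect fourth (4:3)": 4 / 3,
    "major third (5:4)": 5 / 4,
}

for name, ratio in just_ratios.items():
    # Interval size in cents is 1200 * log2(ratio); equal temperament
    # rounds every interval to a whole number of 100-cent semitones.
    cents = 1200 * math.log2(ratio)
    tempered = round(cents / 100) * 100
    print(f"{name}: {cents:.1f} cents, equal-tempered {tempered}, "
          f"deviation {cents - tempered:+.1f} cents")
```

The equal-tempered major third, for instance, comes out nearly 14 cents sharp of its 5:4 ratio, yet listeners generally still hear it as consonant; deviations of exactly this kind are what the new findings bear on.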

But researchers from the University of Cambridge, Princeton and the Max Planck Institute for Empirical Aesthetics have now discovered two key ways in which Pythagoras was wrong.

While Pika’s AI-generated videos remain arguably lower quality and less “realistic” than those shown off by OpenAI’s Sora, or even by rival AI video generation startup Runway, the new Lip Sync feature puts Pika ahead of both in offering capabilities disruptive to traditional filmmaking software.

With Lip Sync, Pika is addressing one of the last remaining barriers to AI being useful for creating longer narrative films. Most other leading AI video generators don’t yet offer a similar feature natively.

Instead, in order to add spoken dialogue and matching lip movements to characters inside an AI video, users have had to make do with third-party tools and cumbersome post-production work, which gives the resulting video a “low budget,” Monty Python-esque quality.

Vacuum trains and other low-pressure rail and maglev systems offer the possibility of ultra-fast and ultra-cheap transport on and off of Earth.
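The energy argument for evacuating the tube is straightforward: aerodynamic drag scales linearly with air density, so removing most of the air removes most of the drag. A back-of-envelope sketch, where the vehicle parameters are illustrative assumptions rather than figures from the episode:

```python
# Drag power is P = 0.5 * rho * Cd * A * v^3; it scales linearly with
# air density rho, so an evacuated tube cuts it proportionally.
RHO_SEA_LEVEL = 1.225   # kg/m^3, air density at sea level
RHO_TUBE = 0.001225     # kg/m^3, tube pumped down to 0.1% of an atmosphere
CD = 0.25               # assumed drag coefficient
AREA = 10.0             # m^2, assumed frontal area
SPEED = 300.0           # m/s, roughly airliner cruise speed


def drag_power_mw(rho: float) -> float:
    """Power needed to overcome aerodynamic drag, in megawatts."""
    return 0.5 * rho * CD * AREA * SPEED**3 / 1e6


print(f"Open air:       {drag_power_mw(RHO_SEA_LEVEL):.1f} MW")
print(f"Evacuated tube: {drag_power_mw(RHO_TUBE):.3f} MW")
```

A thousandfold drop in tube pressure yields a thousandfold drop in drag power, which is what makes very high cruise speeds energetically cheap.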

So-called Dyson megasphere.

Birch planets are enormous hypothetical megastructures that would have more living area than every planet in our galaxy combined, and are even larger than Dyson spheres. For more, see Mega Earths (https://youtu.be/ioKidcpkZN0) and the Megastructure Compendium (https://youtu.be/1xt13dn74wc).
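A Birch planet is typically described as a habitable shell built around a supermassive black hole, with the radius chosen so that surface gravity is one g. A back-of-envelope estimate using the approximate mass of the Milky Way’s central black hole (the specific numbers are illustrative assumptions) shows where the enormous living area comes from:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
g = 9.81               # target surface gravity, m/s^2
M_SUN = 1.989e30       # solar mass, kg
M_BH = 4.0e6 * M_SUN   # approximate mass of Sagittarius A*

# Shell radius where gravity equals 1 g: g = G*M/r^2  =>  r = sqrt(G*M/g)
r = math.sqrt(G * M_BH / g)
area = 4 * math.pi * r**2
EARTH_AREA = 5.1e14    # Earth's total surface area, m^2

print(f"Shell radius: {r:.2e} m (about {r / 1.496e11:.0f} AU)")
print(f"Living area:  {area:.2e} m^2, roughly {area / EARTH_AREA:.1e} Earths")
```

That works out to a shell tens of AU across with on the order of a trillion Earths’ worth of surface area, which is the scale behind the claim above.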

Google Pay, the digital payment app used on desktop, in mobile apps, and in stores, was largely phased out by the introduction of Google Wallet in 2022. Google Wallet, a mobile app for Android users, is used five times more than Google Pay, according to the announcement. Since Wallet can also house credit cards for tap-to-pay, as well as digital IDs and public transit passes, it has proven to be the more useful alternative.

It’s somewhat typical for Google to launch products only to shut them down or roll them into other products after a few years due to lack of demand or commercial interest. The Google graveyard includes Jamboard, its cloud gaming service Stadia, and Google Play Music. So this is just one of many Google products to bite the dust. But Google Pay users won’t be left stranded.

If you’re a Google Pay user, you can keep using the U.S. version of the app until June 4. After that, you will no longer be able to send, request, or transfer money through the app, but you can still move funds from your Google Pay balance into your bank account through the Google Pay website.

Summary: Researchers developed an innovative technique to convert complex neuroimaging data into audiovisual formats. By transforming brain activity and blood flow data, recorded from mice during behaviors like running or grooming, into synchronized piano and violin sounds accompanied by video, they offer an intuitive approach to exploring the brain’s intricate workings.

This method not only makes it easier to identify patterns in large datasets but also enhances the understanding of the dynamic relationship between neuronal activity and behavior. The toolkit represents a significant step forward in neuroscientific research, enabling scientists to intuitively screen and interpret vast amounts of brain data.
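The authors’ actual toolkit isn’t reproduced here, but the core idea of sonification, mapping a time-varying signal onto pitch, can be sketched briefly (the synthetic signal and the pitch mapping below are illustrative assumptions, not the study’s method):

```python
import numpy as np


def sonify(signal: np.ndarray, sample_rate: int = 44100,
           note_duration: float = 0.1) -> np.ndarray:
    """Map each sample of an activity trace to a short sine tone."""
    # Normalize the trace to [0, 1], then map onto a two-octave range.
    norm = (signal - signal.min()) / (signal.max() - signal.min() + 1e-12)
    freqs = 220.0 * 2.0 ** (2.0 * norm)  # 220 Hz (A3) up to 880 Hz (A5)
    tones = []
    for f in freqs:
        t = np.arange(int(sample_rate * note_duration)) / sample_rate
        tones.append(np.sin(2 * np.pi * f * t))
    return np.concatenate(tones)


# Example: sonify a synthetic "neural activity" trace; the result can be
# saved with scipy.io.wavfile.write(path, 44100, audio)
activity = np.sin(np.linspace(0, 4 * np.pi, 40)) + 0.1 * np.random.randn(40)
audio = sonify(activity)
```

In the study itself, separate instruments (piano and violin) were assigned to the neuronal and blood-flow signals and synchronized with behavior video; the sketch above covers only the pitch-mapping step.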

Summary: Researchers unlocked how the brain processes melodies, creating a detailed map of auditory cortex activity. Their study reveals that the brain engages in dual tasks when hearing music: tracking pitch with neurons used for speech and predicting future notes with music-specific neurons.

This breakthrough clarifies the longstanding mystery of melody perception, demonstrating that some neural processes for music and speech are shared, while others are uniquely musical. The discovery enhances our understanding of the brain’s complex response to music and opens avenues for exploring music’s emotional and therapeutic impacts.

A new study by researchers at UC San Francisco provides new insight into how the brain processes musical melodies. Through precise mapping of the cerebral cortex, the study found that our brains process music not only by discerning pitch and the direction of pitch changes but also by predicting the sequence of upcoming notes, with each task managed by a distinct set of neurons. The findings have been published in Science Advances.

Previous research had established that our brains possess specialized mechanisms for processing speech sounds, particularly in recognizing pitch changes that convey meaning and emotion. The researchers hypothesized that a similar, perhaps specialized, set of neurons might exist for music, dedicated to predicting the sequence of notes in a melody, akin to how certain neurons predict speech sounds.
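As a loose analogy for that kind of note prediction (not the study’s model), a first-order Markov chain over melody notes captures the idea of expectation built from what came before:

```python
from collections import Counter, defaultdict

# Toy melody encoded as scale degrees; the model learns which note tends
# to follow which, loosely analogous to expecting the next note.
melody = [1, 2, 3, 1, 1, 2, 3, 1, 3, 4, 5, 3, 4, 5]

transitions = defaultdict(Counter)
for prev, nxt in zip(melody, melody[1:]):
    transitions[prev][nxt] += 1


def predict_next(note: int) -> int:
    """Return the most frequently observed successor of `note`."""
    return transitions[note].most_common(1)[0][0]


print(predict_next(2))  # -> 3: in this melody, 2 is always followed by 3
```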

“Music is both uniquely human and universally human. Studying the neuroscience of music can therefore reveal something fundamental about what it means to be human,” said lead author Narayan Sankaran, a postdoctoral fellow in the Kavli Center for Ethics, Science, and the Public at UC Berkeley, who conducted the study while a researcher in the lab of UCSF’s Edward Chang.