WASHINGTON, Feb 17 (Reuters) — U.S. diplomatic communications with China remain open after the shooting down of a Chinese spy balloon this month, but contact between the countries’ militaries “unfortunately” remains shut down, the White House said on Friday.

White House National Security Council spokesman John Kirby also said it was not the “right time” for Secretary of State Antony Blinken to travel to China after he postponed a Feb. 5–6 trip over the balloon episode, but President Joe Biden wanted to speak to Chinese President Xi Jinping when it was “appropriate.”

Kirby told a White House news briefing that U.S. and Chinese diplomats can still communicate despite tensions over the balloon incident.

A team of computer programmers at IT University of Copenhagen has developed a new way to encode and generate Super Mario Bros. levels. Called MarioGPT, the new approach is based on the language model GPT-2. The group outlines their work and the means by which others can use their system in a paper on the arXiv pre-print server.

Mario Brothers is a video game first introduced in 1983. It involves two Italian plumbers emerging from a sewer and attempting to rescue Princess Peach, who has been captured and is being held by Bowser. To rescue her, the brothers must travel (via input from the game player) across a series of obstacles made of pipes and bricks. As they travel, the terrain changes in accordance with the level they have reached in the game. In this new effort, the team in Denmark has recreated one aspect of the game: the number of levels that can be traversed.

The researchers used Generative Pre-trained Transformer 2 (GPT-2), an open-source language model created by a team at OpenAI, to translate user requests into graphical representations of Super Mario Brothers game levels. To do so, they created a small bit of Python code to help the language model understand what needed to be done and then trained it using samples from the original Super Mario Bros. game and one of its sequels, “Super Mario Bros.: The Lost Levels.”
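The paper’s exact encoding and training setup are not reproduced here, but the general recipe, representing level slices as strings of tile characters and fine-tuning GPT-2 on them as an ordinary language-modeling task, can be sketched with the Hugging Face transformers library. The tile alphabet, the toy level slices, and the hyperparameters below are illustrative assumptions rather than the authors’ values.

# Minimal sketch (not the authors' code): fine-tune GPT-2 on text-encoded
# Super Mario Bros. level slices, then sample new level text from a prompt.
import torch
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical training samples: one character per tile ('X' = ground,
# '-' = air, '?' = question block, 'E' = enemy), rows separated by newlines.
level_slices = [
    "--------------\n----?---------\n--------------\nXXXXXXXXXXXXXX",
    "--------------\n--------------\n------E-------\nXXXXXXXXXXXXXX",
]

class LevelDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = [tokenizer(t, truncation=True, max_length=128,
                              padding="max_length", return_tensors="pt")
                    for t in texts]
    def __len__(self):
        return len(self.enc)
    def __getitem__(self, i):
        ids = self.enc[i]["input_ids"].squeeze(0)
        mask = self.enc[i]["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100  # ignore padding in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mario-gpt2-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=2,
                           report_to="none"),
    train_dataset=LevelDataset(level_slices),
)
trainer.train()

# Generate a new level slice by continuing a short prompt.
prompt = tokenizer("--------------\n", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=64, do_sample=True,
                     top_k=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))

Encoding one tile per character keeps level generation a plain next-token prediction problem, which is why an off-the-shelf language model can be trained on it with so little extra code.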

Inside the mall, customers’ avatars will find Carrefour, VOX Cinemas, THAT Concept Store, Ghawali and Samsung Store, “with many more brands and exciting features in the pipeline”.

Announced at the World Government Summit, the mall is in the initial phase of development as the group looks “closely” at customers’ needs and expectations.

Khalifa bin Braik, CEO of Majid Al Futtaim Asset Management, said the Mall of the Metaverse will be a leading retail and entertainment destination — “and surely a huge attraction for customers who crave digital experiences from their most loved brands”.

Summary: Tracking hippocampal neurons in mice as they watched a movie revealed novel ways to improve artificial intelligence and track neurological disorders associated with memory and learning deficits.

Source: UCLA

Even the legendary filmmaker Orson Welles couldn’t have imagined such a plot twist.

Science fiction films love to show off huge leaps in technology. The latest Avatar movie features autonomous, spider-like robots that can build a whole city within weeks. There are spaceships that can carry frozen passengers light-years away from Earth. In James Cameron’s imagination, we can download our memories and then upload them into newly grown bodies. All this wildly advanced tech is controlled through touch-activated, transparent, monochrome and often blue holograms. Just like a thousand other futuristic interfaces in Hollywood.

When we are shown a glimpse of the far future through science fiction films, there are omnipresent voice assistants, otherworldly wearables, and a whole lot of holograms. For whatever reason, these holograms are almost always blue, floating above desks and visible to anyone who might stroll by. This formula for futuristic UI has always baffled me, because as cool as it looks, it doesn’t seem super practical. And yet, Hollywood seems to have an obsession with imagining future worlds washed in blue light.

Perhaps the Hollywood formula is inspired by one of the first holograms to grace the silver screen: Princess Leia telling Obi-Wan Kenobi that he is their only hope. Star Wars served as an inspiration for future sci-fi ventures, so it follows that other stories might emulate the original. The Avatar films have an obvious penchant for the color blue, and so the holograms that introduce us to the world of Pandora and the native Na’vi are, like Leia, made out of blue light.

The current media environment is filled with visual effects and video editing. As video-centric platforms have gained popularity, demand for more user-friendly and effective video editing tools has skyrocketed. However, because video data is temporal, editing in the format remains difficult and time-consuming. Modern machine learning models have shown considerable promise in improving editing, but existing techniques often compromise spatial detail or temporal consistency. The recent emergence of powerful diffusion models trained on huge datasets has sharply increased the quality and popularity of generative techniques for image synthesis: casual users can produce detailed images with text-conditioned models such as DALL-E 2 and Stable Diffusion using only a text prompt as input, and latent diffusion models synthesize images efficiently in a perceptually compressed space. Motivated by this progress in image synthesis, the researchers study generative models suitable for interactive applications in video editing. Current techniques either propagate edits using methods that compute direct correspondences or re-pose existing image models by fine-tuning them on each individual video.
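As a concrete illustration of the text-prompt-only workflow described above, the following minimal sketch uses the Hugging Face diffusers library; the library, the example checkpoint name, and the prompt are assumptions made here for illustration and are not taken from the article.

# Minimal sketch: text-conditioned image synthesis with a latent diffusion model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint (assumption)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # a GPU is assumed here

# A single text prompt is the only input the user has to supply.
image = pipe("a detailed painting of a castle at sunset").images[0]
image.save("castle.png")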

They aim to avoid costly per-video training and correspondence calculations so that inference is fast for every video. They propose a content-aware video diffusion model with controllable structure, trained on a sizable dataset of paired text-image data and uncaptioned videos. They use monocular depth estimates to represent structure and embeddings predicted by pre-trained neural networks to represent content. Their method provides several powerful controls over the creative process. Much like image synthesis models, their model is first trained so that the content of the generated videos, such as their look or style, corresponds to user-provided images or text prompts (Fig. 1).

Figure 1: Guided video synthesis. We introduce a method based on latent video diffusion models that synthesizes videos (top and bottom) directed by text- or image-described content while preserving the original video’s structure (middle).
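To make the two conditioning signals described above more concrete, the sketch below shows one plausible way to compute them: per-frame monocular depth as the structure signal and a pre-trained image encoder’s embedding as the content signal. The choice of MiDaS for depth and CLIP for content is an assumption made here for illustration, and the denoiser mentioned in the final comment is a hypothetical placeholder, not the authors’ model.

# Conceptual sketch of the structure and content conditioning signals.
import torch
from transformers import CLIPVisionModel, CLIPImageProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Monocular depth estimator (MiDaS via torch.hub) -> structure representation.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
midas_tf = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

# Pre-trained image encoder (CLIP vision tower) -> content embedding.
clip = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
clip_proc = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def conditioning_signals(frames_rgb, content_image_rgb):
    # frames_rgb: list of HxWx3 uint8 RGB arrays (the input video's frames).
    # content_image_rgb: HxWx3 uint8 RGB array (user-provided content image).
    # Structure: one depth map per frame, preserving scene layout over time.
    depth_maps = [midas(midas_tf(f).to(device)) for f in frames_rgb]
    # Content: a single embedding describing appearance and style.
    clip_in = clip_proc(images=content_image_rgb, return_tensors="pt").to(device)
    content_emb = clip(**clip_in).pooler_output  # shape (1, hidden_dim)
    return depth_maps, content_emb

# A latent video diffusion model would then denoise video latents conditioned
# on both signals, e.g. a hypothetical
#   pred = denoiser(noisy_latents, timestep, depth_maps, content_emb)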

There’s a lot about ecology in Frank Herbert’s Dune saga, and eco-mysticism as well.


I will never make any money from YouTube, and that is perfectly fine!
I will always get this message: “Your video is ineligible for monetization due to a copyright claim.”
And: “Ad revenue paid to copyright owner”

I am happy that YouTube allows my videos.