
Mark Zuckerberg demos a tool for building virtual worlds using voice commands

Meta, formerly known as Facebook, today showed off a prototype of an AI system that enables people to generate or import things into a virtual world just by using voice commands. The company sees the tool, which is called “Builder Bot,” as an “exploratory concept” that shows AI’s potential for creating new worlds in the metaverse. Meta CEO Mark Zuckerberg showed off the prototype at the Meta AI: Inside the Lab event on Wednesday in a pre-recorded demo video.

In the video, Zuckerberg explains the process of building parts of a virtual world by describing them. He begins with the prompt, “let’s go to a park.” The bot then creates a 3D landscape of a park with green grass and trees. Zuckerberg then says “actually, let’s go to the beach,” after which the bot replaces the current landscape with one of sand and water. He then says he wants to add clouds and notes that everything is AI-generated. Zuckerberg refines the scene by saying he’d rather have altocumulus clouds, a detail meant to demonstrate how specific the voice commands can be.

He then points to a specific area of the water and says “let’s add an island over there,” and the bot creates one. Zuckerberg issues several other voice commands, such as adding trees and a picnic blanket. He also adds the sound of seagulls and whales. At one point, he even adds a hydrofoil, a nod to one of his favorite hobbies, which famously turned into a meme.

Runway Researchers Unveil Gen-1: A New Generative AI Model That Uses Language And Images To Generate New Videos Out of Existing Ones

The current media environment is saturated with visual effects and video editing, and as video-centric platforms have gained popularity, demand for more accessible and effective video editing tools has skyrocketed. However, because video data is temporal, editing in the format remains difficult and time-consuming. Modern machine learning models have shown considerable promise in improving editing, although existing techniques often compromise spatial detail or temporal consistency. The recent emergence of powerful diffusion models trained on huge datasets has caused a sharp increase in the quality and popularity of generative techniques for image synthesis: with text-conditioned models like DALL-E 2 and Stable Diffusion, even novice users can produce detailed images from nothing but a text prompt, and latent diffusion models synthesize images efficiently in a perceptually compressed latent space. Motivated by this progress in image synthesis, the Runway researchers study generative models suited to interactive video editing. Current techniques either propagate edits using methods that compute direct correspondences or repurpose existing image models by fine-tuning on each individual video.

The researchers aim to avoid expensive per-video training and correspondence computations so that inference is fast for every video. They propose a content-aware video diffusion model with controllable structure, trained on a large dataset of paired text-image data and uncaptioned videos. They represent structure with monocular depth estimates and content with embeddings predicted by pre-trained neural networks. This design offers several powerful controls over the creative process. First, as with image synthesis models, they train the model so that the content of the generated videos, such as their appearance or style, matches user-provided images or text prompts (Fig. 1).

Figure 1: Guided video synthesis. We introduce a method based on latent video diffusion models that synthesizes videos (top and bottom) directed by text- or image-described content while preserving the original video’s structure (middle).
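To make the structure/content split concrete, here is a minimal toy sketch of what such a conditioned sampling loop could look like. The `denoiser` stub, the tensor shapes, and the step schedule are all illustrative assumptions, not Runway’s actual implementation; the point is only that every denoising step consumes per-frame depth maps (structure) and a single embedding (content).

```python
import numpy as np

# Toy stand-in for a trained denoising network. A real model would be a
# spatio-temporal UNet; this placeholder just shrinks the latent so the
# loop below runs end to end. (Hypothetical, for illustration only.)
def denoiser(z_t, t, depth_maps, content_emb):
    return 0.1 * z_t  # pretend this is the predicted noise

def sample_video(depth_maps, content_emb, steps=50, seed=0):
    """DDPM-style ancestral sampling conditioned on structure + content.

    depth_maps:  (frames, H, W) per-frame monocular depth (structure)
    content_emb: (D,) embedding of a text prompt or image (content)
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(depth_maps.shape)  # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(z, t, depth_maps, content_emb)
        z = z - eps  # remove the predicted noise
        if t > 0:
            z = z + 0.01 * rng.standard_normal(z.shape)  # stochastic step
    return z  # latent video; a real system would decode it with a VAE

depth = np.zeros((8, 32, 32))  # e.g., output of a monocular depth estimator
style = np.ones(512)           # e.g., a CLIP-style text/image embedding
print(sample_video(depth, style).shape)  # (8, 32, 32)
```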

Snapple® Launches fAIct Generator Powered by Technology from ChatGPT Creator OpenAI

In Celebration of the Recent 20-Year Anniversary of Snapple’s Real Facts®, Snapple is Putting its Fact Writing into Fans’ Hands.

FRISCO, Texas, Feb. 8, 2023 /PRNewswire/ — Snapple®, the iconic beverage brand that delivers fun and flavorful teas and juice drinks, is proud to announce the launch of the Snapple fAIct Generator, an AI-powered tool that makes it easy to create facts about any topic. Celebrating 20 years of Snapple Real Facts®, the facts found under every Snapple bottle cap, the Snapple fAIct Generator puts fact creation in the hands of the brand’s fans. To help share the news of this new tool, Snapple used ChatGPT to write this press release, with some light edits to make it more Snapple-y.
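Snapple hasn’t published the generator’s internals, but a tool like this plausibly reduces to a single prompted call to OpenAI’s completions API. The sketch below is a rough guess under that assumption; the model choice, prompt wording, and `generate_fact` helper are illustrative, not Snapple’s actual code.

```python
import openai  # pre-v1.0 openai-python interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_fact(topic: str) -> str:
    """Ask a GPT-3-era completions model for one Snapple-style fact."""
    prompt = (
        "Write one short, fun, Snapple-style 'Real Fact' about "
        f"{topic}. One sentence, family-friendly."
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 completions model of that era
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,  # higher temperature for more playful output
    )
    return response.choices[0].text.strip()

print(generate_fact("sea otters"))
```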

An AI ‘Engineer’ Has Now Designed 100 Chips

Lurking inside your next gadget may be a chip unlike those of the past. People used to do all the complex silicon design work, but for the first time, AI is helping to build new chips for data centers, smartphones, and IoT devices. AI firm Synopsys has announced that its DSO.ai tool has successfully aided in the design of 100 chips, and it expects that upward trend to continue.

Companies like STMicroelectronics and SK Hynix have turned to Synopsys to accelerate semiconductor design in an increasingly competitive environment. The past few years have seen demand for new chips increase while material costs have rocketed upward. Companies are therefore looking for ways to get more done with less, and that’s what tools like DSO.ai are all about.

The tool can search design spaces, telling its human masters how best to arrange components to optimize power, performance, and area, or PPA as it’s often called. Among those 100 AI-assisted chip designs, companies have seen up to a 25% drop in power requirements and a 3x productivity increase for engineers. SK Hynix says a recent DSO.ai project resulted in a 15% cell area reduction and a 5% die shrink.
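Synopsys has described DSO.ai as reinforcement-learning-driven, and its details are proprietary. As a purely illustrative sketch of the underlying idea, searching a design space against a PPA objective, here is a random-search loop over three made-up flow parameters with an invented cost model; none of the knobs, ranges, or formulas below come from the actual tool.

```python
import random

# Hypothetical knobs a place-and-route flow might expose.
SPACE = {
    "clock_mhz":   (800, 2400),
    "utilization": (0.55, 0.85),  # placement density
    "vt_mix":      (0.0, 1.0),    # share of low-leakage cells
}

def evaluate_ppa(cfg):
    """Toy stand-in for running synthesis and measuring PPA.
    A real flow would invoke EDA tools here; this model is made up."""
    power = cfg["clock_mhz"] * 0.01 + (1 - cfg["vt_mix"]) * 5.0
    perf  = cfg["clock_mhz"] * (1 - 0.2 * cfg["vt_mix"])
    area  = 100.0 / cfg["utilization"]
    # Weighted cost: minimize power and area, reward performance.
    return power + area - 0.01 * perf

def random_search(trials=1000, seed=42):
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.uniform(*bounds) for k, bounds in SPACE.items()}
        cost = evaluate_ppa(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

cfg, cost = random_search()
print(f"best cost {cost:.2f} at {cfg}")
```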

8 Candidate Alien Signals From 5 Stars Found by AI Algorithm with Dr. Cherry Ng and Peter Ma

Did We Find Them? 8 Candidate Alien Signals Found with a New AI Algorithm by SETI.

A deep-learning search for technosignatures from 820 nearby stars:
https://seti.berkeley.edu/ml_gbt/MLSETI_NatAstron_arxiv3.pdf

Automated Pre-Play Analysis of American Football Formations Using Deep Learning

Annotation and analysis of sports video is a time-consuming task that, once automated, will benefit coaches, players, and spectators. American football, as the most-watched sport in the United States, could especially benefit from this automation; manual annotation and analysis of recorded game footage is an inefficient and tedious process. Currently, most college football programs focus on annotating offensive formations to help develop game plans for upcoming opponents. As a first step toward further research in this unique application, we use computer vision and deep learning to analyze an overhead image of a football play immediately before the play begins. This analysis consists of locating individual football players and labeling their positions or roles, as well as identifying the offensive team’s formation.
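The paper’s pipeline first detects and labels players, then classifies the formation; as a simplified illustration of that final step, the toy labeler below infers a formation from detected player coordinates using hand-written rules. The coordinate convention and thresholds are invented for illustration, not the paper’s learned model.

```python
# Toy formation labeler: given offensive players' field coordinates
# (x = depth behind the line of scrimmage in yards, y = lateral
# position in yards from the left sideline), guess the formation.
# Thresholds and categories are simplified assumptions.

def label_formation(players):
    backfield = [p for p in players if p[0] > 2.0]       # deep behind LOS
    wide      = [p for p in players if p[1] < 15 or p[1] > 38]
    n_backs = len(backfield) - 1                          # exclude the QB
    if n_backs >= 2:
        return "I-formation / split backs"
    if n_backs == 1:
        return "singleback"
    return "empty (5 wide)" if len(wide) >= 3 else "shotgun empty"

# Eleven offensive players as (depth_behind_LOS, lateral) pairs.
offense = [(0.5, 20), (0.5, 23), (0.5, 26), (0.5, 29), (0.5, 32),  # O-line
           (0.5, 10), (0.5, 44), (1.0, 40),                        # WR/TE
           (5.0, 26),                                              # QB
           (7.0, 26), (0.5, 5)]                                    # RB, WR
print(label_formation(offense))  # -> "singleback"
```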

A deep reinforcement learning model that allows AI agents to track odor plumes

For a long time, scientists and engineers have drawn inspiration from the amazing abilities of animals and have sought to reverse engineer or reproduce these in robots and artificial intelligence (AI) agents. One of these behaviors is odor plume tracking, which is the ability of some animals, particularly insects, to home in on the source of specific odors of interest (e.g., food or mates), often over long distances.

A new study by researchers at the University of Washington and the University of Nevada, Reno has taken an innovative approach, using artificial neural networks (ANNs) to understand this remarkable ability of flying insects. Their work, recently published in Nature Machine Intelligence, exemplifies how AI is driving groundbreaking new scientific insights.

“We were motivated to study a complex biological behavior, plume-tracking, that flying insects (and other animals) use to find food or mates,” Satpreet H. Singh, the lead author of the study, told Tech Xplore. “Biologists have experimentally studied many aspects of insect plume tracking in great detail, as it is a critical behavior for insect survival and reproduction.”
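The study trains deep RL agents in a simulated plume; as a rough, self-contained illustration of that general setup (not the authors’ actual environment, reward, or network architecture), here is a tabular Q-learning agent that learns to climb a toy odor gradient on a grid. Every constant below is an invented assumption.

```python
import random

SIZE, SOURCE = 12, (0, 0)                       # grid size, odor source
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def odor(pos):
    """Toy plume: concentration falls off with Manhattan distance."""
    return 1.0 / (1 + abs(pos[0] - SOURCE[0]) + abs(pos[1] - SOURCE[1]))

def step(pos, a):
    nxt = (min(max(pos[0] + a[0], 0), SIZE - 1),
           min(max(pos[1] + a[1], 0), SIZE - 1))
    done = nxt == SOURCE
    reward = 10.0 if done else odor(nxt) - 0.05  # seek odor, small time cost
    return nxt, reward, done

Q = {}  # (state, action_index) -> estimated value
rng = random.Random(0)
for episode in range(2000):
    pos = (SIZE - 1, SIZE - 1)                   # start far from the source
    for _ in range(100):
        if rng.random() < 0.1:                   # epsilon-greedy exploration
            ai = rng.randrange(4)
        else:
            ai = max(range(4), key=lambda i: Q.get((pos, i), 0.0))
        nxt, r, done = step(pos, ACTIONS[ai])
        best_next = max(Q.get((nxt, i), 0.0) for i in range(4))
        q = Q.get((pos, ai), 0.0)
        Q[(pos, ai)] = q + 0.1 * (r + 0.95 * best_next - q)  # TD update
        pos = nxt
        if done:
            break

# Greedy rollout after training: the agent should climb the gradient.
pos, path = (SIZE - 1, SIZE - 1), []
for _ in range(40):
    ai = max(range(4), key=lambda i: Q.get((pos, i), 0.0))
    pos, _, done = step(pos, ACTIONS[ai])
    path.append(pos)
    if done:
        break
print("reached source:", pos == SOURCE, "in", len(path), "steps")
```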