
Artificial Intelligence (AI) Researchers from Cornell University Propose a Novel Neural Network Framework to Address the Video Matting Problem

Image and video editing are two of the most popular applications for computer users. With the advent of Machine Learning (ML) and Deep Learning (DL), image and video editing have been progressively studied through several neural network architectures. Until very recently, most DL models for image and video editing were supervised and, more specifically, required the training data to contain pairs of input and output data to be used for learning the details of the desired transformation. Lately, end-to-end learning frameworks have been proposed, which require as input only a single image to learn the mapping to the desired edited output.

Video matting is a specific task belonging to video editing. The term “matting” dates back to the 19th century, when glass plates of matte paint were set in front of a camera during filming to create the illusion of an environment that was not present at the filming location. Nowadays, the composition of multiple digital images follows a similar procedure. A compositing formula blends the intensities of the foreground and background of each image, expressing each pixel as a linear combination of the two components.

Although powerful, this process has some limitations. It requires an unambiguous factorization of the image into foreground and background layers, which are then assumed to be independently editable. In settings like video matting, where the frames form a temporally and spatially dependent sequence, this layer decomposition becomes a complex task.
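The linear combination mentioned above is standard alpha compositing: each output pixel is a matte-weighted blend of the foreground and background layers. A minimal sketch in NumPy (the function name and toy values are illustrative, not from the article):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Alpha-composite a foreground layer over a background layer.

    Each pixel of the result is a linear combination of the two layers,
    weighted by the matte `alpha` (1 = pure foreground, 0 = pure background).
    """
    alpha = alpha[..., np.newaxis]  # broadcast the matte over color channels
    return alpha * foreground + (1.0 - alpha) * background

# Toy 2x2 RGB example with a per-pixel matte.
fg = np.ones((2, 2, 3)) * 0.9       # bright foreground layer
bg = np.zeros((2, 2, 3))            # black background layer
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.0]])      # matte: full, none, half, none
out = composite(fg, bg, alpha)
```

Video matting is hard precisely because the matte `alpha` must be estimated consistently for every frame, not just one image.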

Researchers From Stanford And Microsoft Have Proposed An Artificial Intelligence (AI) Approach That Uses Declarative Statements As Corrective Feedback For Neural Models With Bugs

The methods currently used to correct systematic issues in NLP models are either fragile or time-consuming and prone to shortcuts. Humans, on the other hand, frequently reprimand one another using natural language. This inspired recent research on natural language patches, which are declarative statements that enable developers to deliver corrective feedback at the appropriate level of abstraction by either modifying the model or adding information the model may be missing.

Instead of relying solely on labeled examples, there is a growing body of research on using language to provide instructions, supervision, and even inductive biases to models, such as building neural representations from language descriptions (Andreas et al., 2018; Murty et al., 2020; Mu et al., 2020), or language-based zero-shot learning (Brown et al., 2020; Hanjie et al., 2022; Chen et al., 2021). However, language has yet to be properly utilized for corrective purposes, where a user interacts with an existing model in order to improve it.

The neural language patching model has two heads: a gating head that determines if a patch should be applied and an interpreter head that forecasts results based on the information in the patch. The model is trained in two steps: first on a tagged dataset and then through task-specific fine-tuning. A set of patch templates are used to create patches and synthetic labeled samples during the second fine-tuning step.
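The gating-plus-interpreter design described above can be sketched as a soft-gated blend: the gating head scores whether the patch is relevant to the input, and that score decides how much the interpreter head's patch-conditioned prediction overrides the base prediction. This is a minimal NumPy illustration of that mechanism; the function names, logits, and probabilities are assumptions for the sketch, not the paper's actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def apply_patch(base_probs, patch_probs, gate_logit):
    """Soft-gated combination of the base model's prediction with the
    interpreter head's patch-conditioned prediction.

    `gate_logit` stands in for the gating head's output: large positive
    values mean "this patch applies to the current input".
    """
    g = sigmoid(gate_logit)
    return g * patch_probs + (1.0 - g) * base_probs

base = np.array([0.9, 0.1])    # base model favors class 0
patch = np.array([0.2, 0.8])   # interpreter head: the patch implies class 1
ignored = apply_patch(base, patch, gate_logit=-10.0)  # patch gated off
applied = apply_patch(base, patch, gate_logit=10.0)   # patch gated on
```

With the gate closed the output stays close to the base prediction; with the gate open the patch dominates, which is the behavior the two-stage training (tagged data, then template-generated patches) is meant to teach.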

Boost Your Brain 150% With An AI Chip | From Elon Musk! in 2023

This video is about “Boost Your Brain 150% With An AI Chip From Elon Musk!” in 2023.

Remember the movie Limitless, now you can do it with a computer chip.

How often have you thought of eating all your notes so you can memorize them? Or how many times have you wished you could read others’ minds or take a photo with your eyes? Thanks to Elon Musk, some of these childhood wishes might come to life.

For this, you will only need a small brain chip!

What is this AI chip?

It can be introduced as a brain-computer interface, or BCI. BCIs can create both internal and external brain connections: they are able to read brain activity, convert it into information, and then relay that information back to the brain or to external devices.

A deep learning model that generates nonverbal social behavior for robots

Researchers at the Electronics and Telecommunications Research Institute (ETRI) in Korea have recently developed a deep learning-based model that could help to produce engaging nonverbal social behaviors, such as hugging or shaking someone’s hand, in robots. Their model, presented in a paper pre-published on arXiv, can actively learn new context-appropriate social behaviors by observing interactions among humans.

“Deep learning techniques have produced interesting results in areas such as computer vision,” Woo-Ri Ko, one of the researchers who carried out the study, told TechXplore. “We set out to apply deep learning to robots, specifically by allowing robots to learn from human-human interactions on their own. Our method requires no prior knowledge of human behavior models, which are usually costly and time-consuming to implement.”

The artificial neural network (ANN)-based architecture developed by Ko and his colleagues combines the Seq2Seq (sequence-to-sequence) model introduced by Google researchers in 2014 with generative adversarial networks (GANs). The new architecture was trained on the AIR-Act2Act dataset, a collection of 5,000 human-human interactions occurring in 10 different scenarios.
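The overall shape of a Seq2Seq generator paired with a GAN discriminator can be sketched as follows. This is a deliberately tiny NumPy forward pass only: all dimensions, weights, and function names are illustrative assumptions, not the architecture or sizes used in the ETRI paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration).
T_IN, T_OUT = 8, 8      # observed / generated sequence lengths
POSE_DIM, HID = 6, 16   # per-frame pose features, hidden size

# Seq2Seq generator: encode the observed human pose sequence into a
# context vector, then decode a robot behavior sequence from it.
W_enc = rng.normal(size=(POSE_DIM, HID)) * 0.1
W_dec = rng.normal(size=(HID, POSE_DIM)) * 0.1

def generate(human_seq):
    context = np.tanh(human_seq @ W_enc).mean(axis=0)      # (HID,)
    return np.tanh(np.tile(context, (T_OUT, 1)) @ W_dec)   # (T_OUT, POSE_DIM)

# GAN discriminator: scores how human-like a behavior sequence looks.
W_d = rng.normal(size=(T_OUT * POSE_DIM,)) * 0.1

def discriminate(seq):
    return 1.0 / (1.0 + np.exp(-(seq.reshape(-1) @ W_d)))  # score in (0, 1)

human_seq = rng.normal(size=(T_IN, POSE_DIM))
robot_seq = generate(human_seq)   # candidate nonverbal behavior
score = discriminate(robot_seq)   # adversarial realism score
```

In adversarial training the discriminator would be updated to separate recorded interactions (e.g., from AIR-Act2Act) from generated ones, while the generator is updated to fool it, pushing the robot's behaviors toward human-like responses.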

PepsiCo exec says for AI, 2023 will be year of ‘hope and focus’


When it comes to artificial intelligence (AI), the past year has been aspirational, but ultimately unsuccessful, says Athina Kanioura, who was named PepsiCo’s first chief strategy and transformation officer in September 2020. But she is optimistic about 2023.

“Think of how we started with the metaverse and the use of AI, suddenly it crumbled into pieces,” she told VentureBeat. “In AI, we tend to see what doesn’t work the first time, then we lose hope — but I think 2023 should be a year of hope and focus for AI.”

Eerie Video of Bizarre Sheep Phenomenon Has The World Running in Circles

To be a sheep is to blindly follow the crowd. But is a sheep’s sheepiness really enough to make an entire flock walk around in a circle non-stop for days on end?

That’s a mystery the internet has pondered for about a week now, and solving it has proved more difficult than you’d expect.

On 17 November, a news outlet run by the Chinese state, called People’s Daily, shared an eerie video on Twitter of dozens of sheep in China’s Inner Mongolia Autonomous Region strolling around in tight, almost perfect concentric circles.

Inspired by Manta Ray Biomechanics: “Butterfly Bot” Is Fastest Swimming Soft Robot Ever

Researchers at North Carolina State University have developed an energy-efficient soft robot that can swim more than four times faster than previous swimming soft robots by taking inspiration from the biomechanics of the manta ray.

Founded in 1887 and part of the University of North Carolina system, North Carolina State University (also referred to as NCSU, NC State, or just State) is a public land-grant research university in Raleigh, North Carolina. It forms one of the corners of the Research Triangle together with Duke University in Durham and The University of North Carolina at Chapel Hill.
