
This talk addresses a crucial problem: how compositionality can develop naturally in cognitive agents through iterative sensory-motor interactions with the environment.

The talk highlights a dynamic neural network model, the so-called multiple timescales recurrent neural network (MTRNN), which has been applied to a set of experiments on developmental learning of compositional actions performed by a humanoid robot made by Sony. The experimental results showed that a set of reusable behavior primitives were developed in the lower-level network, which is characterized by its fast timescale dynamics, while sequential combinations of these primitives were learned in the higher level, which is characterized by its slow timescale dynamics.

This result suggests that the functional hierarchy necessary for generating compositional actions can develop by exploiting the timescale differences imposed at different levels of the network. The talk will also introduce our recent results on applying an extended MTRNN model to the problem of learning to recognize dynamic visual patterns at the pixel level. The experimental results indicated that dynamic visual images of compositional human actions can be recognized through a self-organizing functional hierarchy when both spatial and temporal constraints are adequately imposed on the network activity. The dynamical-systems mechanisms underlying the development of higher-order cognition will be discussed in light of the aforementioned research results.
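The abstract does not spell out the model equations, but the fast/slow distinction can be illustrated with the leaky-integrator (continuous-time RNN) update typically used in MTRNN-style models. The sketch below is a minimal, untrained toy: the layer sizes, time constants (tau = 2 vs. 70) and random weights are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Two groups of leaky-integrator (CTRNN) units share one recurrent layer and
# differ only in their time constants: small tau -> fast dynamics, large tau -> slow.
rng = np.random.default_rng(0)

n_fast, n_slow = 30, 10
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),     # fast context units (behavior primitives)
                      np.full(n_slow, 70.0)])   # slow context units (sequencing)

W = rng.normal(scale=0.3, size=(n, n))          # recurrent weights (untrained, random)
b = np.zeros(n)

def step(u):
    """One leaky-integrator update of the membrane potentials u."""
    a = np.tanh(u)                              # unit activations
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ a + b)

u = rng.normal(scale=0.1, size=n)
for _ in range(100):
    u_prev, u = u, step(u)

delta = np.abs(np.tanh(u) - np.tanh(u_prev))    # per-step change at the last update
print("mean change, fast units:", delta[:n_fast].mean())
print("mean change, slow units:", delta[n_fast:].mean())
```

Because each unit mixes in new input with weight 1/tau per step, a large time constant forces the slow units to integrate information gradually, which is the property the talk associates with learning sequential combinations of the primitives encoded in the fast units.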

Jun Tani — Professor, Department of Electrical Engineering, KAIST

When we think of deepfakes, we usually think of their myriad negative applications. From pornography to blackmail to politics, these machine-learning fabrications create lies so realistic that it is hard to believe they are not the real thing. In a society plagued by fake news, deepfakes have the potential to do substantial harm.

But a team of researchers recently found another use for deepfakes: deepfaking the mind. Using machine learning to simulate artificial neural data in this way may make a world of difference for people with disabilities.

For people with full-body paralysis, the body can seemingly become a prison. Communication and even the simplest of tasks can appear insurmountable. But even if the body is frozen, the mind may be very active. Brain-computer interfaces (BCIs) offer a way for these patients to interact with the world.

BCIs do not rely on muscle or eye movements. Instead, the user is trained to manipulate an object using the power of thought alone. BCIs can allow a fully paralyzed person to operate a wheelchair just by thinking, to move a cursor on a computer screen, or even to play pinball by moving the paddles with their mind. BCIs can be freeing for people with this type of paralysis. They can also be used to treat depression or to rehabilitate the brain.


The original 2017 transformer model was designed for natural language processing (NLP), where it achieved state-of-the-art (SOTA) results. Its performance intrigued machine learning researchers, who have since successfully adapted the attention-based architecture to perception tasks in other modalities, such as the classification of images, video and audio. While transformers have shown their power and potential in these areas, achieving SOTA performance requires training a separate model for each task. Producing a single transformer model capable of processing multiple modalities and datasets while sharing its learnable parameters has thus emerged as an attractive research direction.

To this end, a team from Google Research, the University of Cambridge and the Alan Turing Institute has proposed PolyViT, a single transformer architecture co-trained on image, audio and video that is parameter-efficient and learns representations that generalize across multiple domains.

The PolyViT design is motivated by the idea that human perception is inherently multimodal, and by previous studies demonstrating transformers' ability to operate on any modality that can be tokenized. PolyViT shares a single transformer encoder across different tasks and modalities, enabling up to a linear reduction in parameters with the number of tasks.
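PolyViT's exact configuration is not described here, but the parameter-sharing idea can be sketched as one encoder wrapped by modality-specific tokenizers and task-specific heads. Everything in the sketch below (dimensions, depth, patch sizes, task names) is an illustrative assumption, not the actual PolyViT implementation.

```python
import torch
import torch.nn as nn

d_model = 256

class SharedMultimodalTransformer(nn.Module):
    """Toy PolyViT-style model: one shared encoder, per-modality tokenizers, per-task heads."""
    def __init__(self, num_classes_per_task: dict):
        super().__init__()
        # Modality-specific tokenizers turn raw inputs into token sequences.
        self.image_tokenizer = nn.Conv2d(3, d_model, kernel_size=16, stride=16)   # ViT-style patches
        self.audio_tokenizer = nn.Conv2d(1, d_model, kernel_size=16, stride=16)   # spectrogram patches
        self.video_tokenizer = nn.Conv3d(3, d_model, kernel_size=(2, 16, 16),
                                         stride=(2, 16, 16))                      # video tubelets
        # A single transformer encoder shared by every task and modality.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        # One lightweight classification head per task.
        self.heads = nn.ModuleDict({t: nn.Linear(d_model, c)
                                    for t, c in num_classes_per_task.items()})

    def forward(self, x, modality: str, task: str):
        if modality == "image":
            tokens = self.image_tokenizer(x).flatten(2).transpose(1, 2)
        elif modality == "audio":
            tokens = self.audio_tokenizer(x).flatten(2).transpose(1, 2)
        else:  # video
            tokens = self.video_tokenizer(x).flatten(2).transpose(1, 2)
        encoded = self.encoder(tokens)                 # shared parameters do the heavy lifting
        return self.heads[task](encoded.mean(dim=1))   # pooled representation -> task head

model = SharedMultimodalTransformer({"imagenet": 1000, "audioset": 527, "kinetics": 400})
logits = model(torch.randn(2, 3, 224, 224), modality="image", task="imagenet")
print(logits.shape)  # torch.Size([2, 1000])
```

The point of the design is that only the tokenizers and heads grow with the number of tasks, while the large encoder, the bulk of the parameters, is reused, which is where the roughly linear parameter saving comes from.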

There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today's early prototypes are already capable of remarkable feats.

One example is the creation of a new phase of matter called a “time crystal.” Just as a crystal’s structure repeats in space, a time crystal repeats in time and, importantly, does so indefinitely and without any further input of energy, like a clock that runs forever without batteries. The quest to realize this phase of matter has been a longstanding challenge in both theory and experiment, one that has now finally come to fruition.
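As a rough, textbook-style way to state the analogy (generic notation, not that of the Nature paper): an ordinary crystal breaks continuous spatial translation symmetry down to a discrete lattice, while a discrete time crystal is a periodically driven system whose observables repeat only at an integer multiple of the drive period.

```latex
% Spatial crystal: the density is invariant only under discrete lattice translations a.
\rho(\mathbf{r} + \mathbf{a}) = \rho(\mathbf{r})

% Discrete time crystal: the drive has period T, H(t + T) = H(t), yet observables
% repeat only after n >= 2 drive periods, and this subharmonic response persists
% indefinitely without the system heating up.
\langle O(t + nT) \rangle = \langle O(t) \rangle, \qquad n \ge 2
```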

In research published Nov. 30 in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Google’s Sycamore quantum computing hardware.

Disney’s AI research division has developed a hybrid method for movie-quality facial simulation, combining the strengths of facial neural rendering with the consistency of a CGI-based approach. The pending paper is titled Rendering with Style: Combining Traditional and Neural Approaches for High Quality Face Rendering, and is previewed in a new 10-minute video at the […].

The Indian edtech giant Byju’s keeps getting bigger, having raised more than $4.5 billion since it was founded 10 years ago. This month the company made clear its ambitious research agenda: to achieve the science-fiction dream of building next-generation teaching aids with artificial intelligence.

Specifically, the company announced a new research-and-development hub, with offices in Silicon Valley, London and Bangalore, that will work on applying the latest findings from artificial intelligence and machine learning to new edtech products. The new hub, called Byju’s Lab, will also work on “moonshots” of developing new forms of digital tutoring technology, said Dev Roy, chief innovation and learning officer for BYJU’s, in a recent interview with EdSurge.

“Edtech is one of the slowest adopters of AI so far, compared to some of the other industries out there,” Roy said. “Even in health care, what DeepMind has done with mapping the proteins of DNA—nobody’s doing that in the education sector.”

Forget losing your job to robots: scientists have created robots that can reproduce. “Xenobots” are capable of self-replicating. They are made from stem cells taken from frogs. Astounded? Watch this report by Palki Sharma for the details.

