New photonic deep neural network could also analyze audio, video, and other data.

Researchers from Japan design a tunable physical reservoir device based on dielectric relaxation at an electrode-ionic liquid interface.
In the near future, more and more artificial intelligence processing will need to take place on the edge — close to the user and where the data is collected, rather than on a distant computer server. This will require high-speed data processing with low power consumption. Physical reservoir computing is an attractive platform for this purpose, and a new breakthrough from scientists in Japan has just made the approach much more flexible and practical.
Physical reservoir computing (PRC), which relies on the transient response of physical systems, is an attractive machine learning framework that can perform high-speed processing of time-series signals at low power. However, PRC systems have low tunability, limiting the range of signals they can process. Now, researchers from Japan present ionic liquids as an easily tunable physical reservoir device that can be optimized to process signals over a broad range of timescales simply by changing their viscosity.
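Conceptually, the PRC pipeline described above can be mimicked in software with an echo state network: a fixed random recurrent network stands in for the physical reservoir's transient dynamics, and only a cheap linear readout is trained. This is a minimal sketch of the general technique, not the authors' ionic-liquid device; the reservoir size, leak rate (the software analogue of a relaxation timescale), and the sine-prediction task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Software analogue of a physical reservoir: a fixed random recurrent
# network whose transient state encodes the recent input history.
n_res = 100
leak = 0.3  # illustrative "relaxation timescale" knob, akin to tuning viscosity
W_in = rng.uniform(-0.5, 0.5, n_res)          # fixed input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))      # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state at each step."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Illustrative time-series task: predict a sine wave one step ahead.
t = np.arange(0, 60, 0.1)
u = np.sin(t)
X = run_reservoir(u[:-1])   # reservoir states from past inputs
y = u[1:]                   # next-step targets

# Only the linear readout is trained (ordinary least squares).
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)
```

Changing `leak` shifts the timescale the reservoir responds to best, which is the software counterpart of the tunability the researchers obtain by varying the liquid's viscosity.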
Artificial general intelligence (AGI) is back in the news thanks to the recent introduction of Gato from DeepMind. As much as anything, AGI evokes images of Skynet of Terminator lore, which was originally designed as threat-analysis software for the military but quickly came to see humanity as the enemy. While fictional, this should give us pause, especially as militaries around the world are pursuing AI-based weapons.
However, Gato does not appear to raise any of these concerns. The deep learning transformer model is described as a “generalist agent” and purports to perform 604 distinct and mostly mundane tasks with varying modalities, observations and action specifications. It has been referred to as the Swiss Army Knife of AI models. It is clearly much more general than other AI systems developed thus far and in that regard appears to be a step towards AGI.
Multimodal systems are not new — as evidenced by GPT-3 and others. What is arguably new is the intent. By design, GPT-3 was intended to be a large language model for text generation. That it could also produce images from captions, generate programming code, and perform other functions was an add-on benefit that emerged after the fact, often to the surprise of AI experts.
Catriona Campbell, a UK-based artificial intelligence expert, argues we could soon be raising artificially intelligent virtual children inside the metaverse.
She dubs these hypothetical offspring “Tamagotchi children,” in a reference to the popular virtual pets from the 1990s.
In her new book "AI by Design: A Plan For Living With Artificial Intelligence," The Telegraph reports, Campbell argues that these virtual children could be an environmentally friendly and cheaper answer to overpopulation and limited resources.