“Compared with other traditional methods, the proposed method has lower computational complexity, faster operation speed, is less affected by lighting, and is better able to locate dirt,” the research group said. “The improved path-planning algorithm used in this study greatly improves the efficiency of UAV inspection, saves time and resources, reduces operation and maintenance costs, and raises the operation and maintenance level of photovoltaic power generation.”
The novel approach uses mathematical morphology operations for image processing, such as image enhancement, sharpening, filtering, and closing. It also uses image histogram equalization and edge detection, among other methods, to find dust-covered spots. For path optimization, it uses an improved version of the A* (A-star) algorithm.
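As a rough illustration of the kind of pipeline described above, the sketch below combines histogram equalization, a morphological closing, and edge detection using OpenCV. The function name, kernel size, and Canny thresholds are illustrative assumptions, not the study's actual implementation.

```python
# Illustrative dirt-detection sketch (assumed, not the paper's pipeline).
import cv2
import numpy as np

def find_dust_spots(image_path: str) -> np.ndarray:
    # Load the panel image in grayscale.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Histogram equalization reduces the influence of uneven lighting.
    equalized = cv2.equalizeHist(img)

    # A morphological closing fills small gaps so dust patches form
    # connected regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(equalized, cv2.MORPH_CLOSE, kernel)

    # Edge detection outlines candidate dirt regions for the UAV to inspect.
    edges = cv2.Canny(closed, 50, 150)
    return edges
```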
Recent advances in the field of artificial intelligence (AI) and computing have enabled the development of new tools for creating highly realistic media, virtual reality (VR) environments, and video games. Many of these tools are now widely used by graphic designers, animated-film creators, and video game developers worldwide.
One aspect of virtual and digitally created environments that can be difficult to realistically reproduce is fabrics. While there are already various computational tools for digitally designing realistic fabric-based items (e.g., scarves, blankets, pillows, clothes, etc.), creating and editing realistic renderings of these fabrics in real-time can be challenging.
Researchers at Shandong University and Nanjing University recently introduced a new lightweight artificial neural network for the real-time rendering of woven fabrics. Their proposed network, introduced in a paper published as part of the Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH) Conference Papers ’24, works by encoding the patterns and parameters of fabrics as a small latent vector, which a decoder later interprets to produce realistic representations of various fabrics.
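The paper's actual architecture is not reproduced here, but the encode/decode idea can be sketched roughly as follows: an encoder compresses fabric pattern parameters into a small latent vector, and a lightweight decoder expands that vector, together with viewing and lighting information, into shading values. All layer sizes, the four-number angle input, and the RGB output below are illustrative assumptions.

```python
# Schematic encoder/decoder sketch (assumed sizes, not the authors' network).
import torch
import torch.nn as nn

class FabricEncoder(nn.Module):
    def __init__(self, param_dim: int = 16, latent_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, fabric_params: torch.Tensor) -> torch.Tensor:
        # Compress the fabric's pattern parameters into a small latent vector.
        return self.net(fabric_params)

class FabricDecoder(nn.Module):
    def __init__(self, latent_dim: int = 8, out_dim: int = 3):
        super().__init__()
        # The decoder also receives view/light angles (4 values assumed here).
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 4, 64), nn.ReLU(),
            nn.Linear(64, out_dim),  # e.g., RGB reflectance
        )

    def forward(self, latent: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
        # Expand the latent vector into shading values for one query direction.
        return self.net(torch.cat([latent, angles], dim=-1))
```

Because the decoder is small and the latent vector is computed once per fabric, a setup like this can be evaluated per pixel at interactive rates, which is the appeal of the latent-encoding approach for real-time rendering.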
Nvidia, the world’s largest company by value, is reportedly developing a new artificial intelligence (AI) chip, based on its flagship B200 product, for the China market.
The mass production of the new chip, which may be called B20, will commence later this year while shipments will start in the second quarter of next year, Reuters reported, citing sources familiar with the matter.
The report said Nvidia will work with Inspur, one of its distributors in mainland China. However, Inspur said it has not begun any business or cooperation related to the B20 and that the Reuters report is untrue.
Both the predictive power and the memory storage capability of an artificial neural network called a reservoir computer increase when time delays are added into how the network processes signals, according to a new model.
A reservoir computer—a type of artificial neural network—can use information about a system’s past to predict the system’s future. Reservoir computers are far easier to train than their more general counterpart, recurrent neural networks. However, researchers have yet to develop a way to determine the optimal reservoir-computer construction for memorizing and forecasting the behavior of a given system. Recently, Seyedkamyar Tavakoli and André Longtin of the University of Ottawa, Canada, took a step toward solving that problem by demonstrating a way to enhance the memory and prediction capabilities of a reservoir computer [1]. Their demonstration could, for example, allow researchers to make a chatbot or virtual assistant, such as ChatGPT, using a reservoir computer, a possibility that so far has been largely unexplored.
For those studying time-series-forecasting methods, which predict the future outcomes of complex systems from historical time-stamped data, the recurrent neural network is king [2]. Recurrent neural networks contain a “hidden state” that stores information about features of the system being modeled. The hidden state is updated each time the network receives new information about the system and is then fed into an algorithm that predicts what the system will do next.
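A minimal sketch may make this concrete. The echo state network below is one common form of reservoir computer: the recurrent weights stay fixed and random, and only a linear readout is trained, which is why reservoir computers are so much easier to train than general recurrent networks. The reservoir size, spectral radius, ridge parameter, and toy sine-wave task are all illustrative assumptions; the time-delay variant studied in the paper is not shown.

```python
# Minimal echo state network sketch (assumed settings, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, n_input = 200, 1

# Fixed random weights: only the linear readout below is trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_input))
W = rng.normal(0.0, 1.0, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(inputs: np.ndarray) -> np.ndarray:
    """Drive the reservoir with `inputs` and collect its hidden states."""
    r = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        # Hidden-state update: mix the previous state with the new input.
        r = np.tanh(W @ r + W_in @ np.atleast_1d(u))
        states.append(r.copy())
    return np.array(states)

# Train the readout by ridge regression to predict the next value
# of a toy time series (one-step-ahead forecasting).
series = np.sin(np.linspace(0, 60, 3000))
X = run_reservoir(series[:-1])
y = series[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

print("one-step prediction error:", np.mean((X @ W_out - y) ** 2))
```

Training reduces to a single linear solve because the reservoir itself is never updated; in a full recurrent network, by contrast, every weight would be adjusted by backpropagation through time.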
People who struggle with facial recognition can find forming relationships a challenge, which can lead to mental health issues and social anxiety. A new study provides insights into prosopagnosia, or face blindness, a condition that impairs facial recognition and affects approximately 1 in 50 people.
The researchers scanned the brains of more than 70 study participants as they watched footage from the popular TV series “Game of Thrones.” Half of the participants were familiar with the show’s famously complex lead characters and the other half had never seen the series.
When lead characters appeared on screen, MRI scans showed that in neurotypical participants who were familiar with the characters, activity increased in brain regions associated with non-visual knowledge about the characters, such as who they are and what is known about them.