A new study presents a neurocomputational model of the human brain that may shed light on how the brain develops complex cognitive skills and could advance research in neural artificial intelligence. The study was conducted by an international team of scientists from the Institut Pasteur and Sorbonne University in Paris, the CHU Sainte-Justine, Mila – Quebec Artificial Intelligence Institute, and the University of Montreal.
South Australian artificial intelligence (AI) company GoMicro is rolling out its new grain assessment technology in Australia, paving the way towards more consistent quality controls and stable grain and pulse prices.
GoMicro is based at Flinders University’s high-tech New Venture Institute (NVI) at Tonsley Innovation District in Clovelly Park, Adelaide. CEO Dr. Sivam Krish says the multi-grain assessor gives growers and domestic and export markets a quicker, more accurate way to grade crops, testing more than 1,200 grains in a single sample, compared with the existing scanner-based method, which assesses about 200 well-separated grains at a time.
“GoMicro relies on the excellent quality of phone cameras and Amazon web services to deliver low-cost, high-precision quality grain and other produce assessments to farmers worldwide,” says Dr. Krish.
CAPE CANAVERAL, Fla. — An unmanned U.S. military space plane landed early Saturday after spending a record 908 days in orbit on its sixth mission, during which it conducted science experiments.
The solar-powered vehicle, which looks like a miniature space shuttle, landed at NASA’s Kennedy Space Center. Its previous mission lasted 780 days.
“Since the X-37B’s first launch in 2010, it has shattered records and provided our nation with an unrivaled capability to rapidly test and integrate new space technologies,” said Jim Chilton, a senior vice president for Boeing, its developer.
Recent progress in generative models has paved the way for a multitude of tasks that only a few years ago were barely imaginable. With the help of large-scale image-text datasets, generative models can learn powerful representations exploited in fields such as text-to-image or image-to-text translation.
The recent release of Stable Diffusion and the DALL-E API has led to great excitement around text-to-image generative models, which can generate complex and stunning novel images from a descriptive input text, almost as easily as performing a search on the internet.
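To make that workflow concrete, here is a minimal sketch of invoking a text-to-image model through the open-source diffusers library. The checkpoint name, prompt, and GPU assumption are illustrative choices, not details taken from the article.

```python
# Minimal text-to-image sketch using the diffusers library (illustrative only).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available checkpoint
    torch_dtype=torch.float16,          # half precision for GPU inference
)
pipe = pipe.to("cuda")                  # a GPU is assumed; drop float16 and use "cpu" otherwise

# One descriptive prompt in, one novel image out.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```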
With rising interest in the reverse task, i.e., image-to-text translation, several studies have tried to generate captions from input images. These methods often presume a one-to-one correspondence between pictures and their captions. However, multiple images can be connected to and paired with a single long text narrative, such as the photos in a news article. Hence the need for illustrative correspondences (e.g., “travel” or “vacation”) rather than literal one-to-one captions (e.g., “airplane flying”).
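For contrast, the literal one-to-one captioning that these studies typically produce can be sketched with the Transformers image-to-text pipeline. The BLIP checkpoint and the image URL below are illustrative assumptions, not models or data named in the article.

```python
# Literal single-image captioning sketch via the transformers "image-to-text" pipeline.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
result = captioner("https://example.com/photo.jpg")  # path or URL to an image
print(result[0]["generated_text"])                   # a short, literal caption
```

This kind of output describes what is visibly in the frame, which is exactly the literal pairing the paragraph above contrasts with looser, narrative-level image-text correspondences.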
Researchers from the LP3 Laboratory in France have developed a light-based technique for local material processing anywhere in the three-dimensional volume of semiconductor chips. Direct laser writing of new functionalities opens up the possibility of exploiting the sub-surface space for higher integration densities and extra functions.
Semiconductors remain the backbone material of the electronics inside modern devices such as cellphones, cars, robots and many other smart systems. Driven by the continuous need for smaller, more powerful chips, current semiconductor manufacturing technologies face increasing pressure.
The dominant manufacturing technology, lithography, has strong limitations in addressing these challenges because it is inherently a surface process. A way to fabricate structures beneath the wafer surface would therefore be highly desirable, so that the full volume of the material could be exploited.
Artificial intelligence has helped design an invisibility cloak. The cloak could hide communication devices from detectors that use microwaves or infrared light.
How scientists are using virtual reality to create avatars, chatbots and even eternal digital entities.
Artificial intelligence is rapidly transforming all sectors of our society. Whether we realize it or not, every time we do a Google search or ask Siri a question, we’re using AI.
For better or worse, the same is true about the very character of warfare. This is the reason why the Department of Defense – like its counterparts in China and Russia – is investing billions of dollars to develop and integrate AI into defense systems. It’s also the reason why DoD is now embracing initiatives that envision future technologies, including the next phase of AI – artificial general intelligence.
AGI is the ability of an intelligent agent to understand or learn any intellectual task in much the same way a human does. Unlike AI, which relies on ever-expanding datasets to perform more complex tasks, AGI will exhibit the same attributes as those associated with the human brain, including common sense, background knowledge, transfer learning, abstraction, and causality. Of particular interest is the human ability to generalize from scanty or incomplete input.
Large models have improved performance on a wide range of modern computer vision and, in particular, natural language processing problems. However, patching model behavior after deployment remains a significant challenge in maintaining such models. Because the model’s representations are distributed, when a neural network produces an undesirable output, it is difficult to make a localized update that corrects its behavior for a single input or a small number of inputs. For example, a large language model trained in 2019 might assign a higher probability to Theresa May than to Boris Johnson when prompted with “Who is the Prime Minister of the United Kingdom?”
An ideal model editing procedure would quickly update the model parameters to increase the relative likelihood of Boris Johnson without affecting the model’s output for unrelated inputs. Such a procedure would yield edits with reliability, successfully changing the model’s output on the problematic input (e.g., “Who is the Prime Minister of the United Kingdom?”); locality, leaving the model’s output for unrelated inputs (e.g., “What sports team does Messi play for?”) unchanged; and generality, producing the correct output for inputs related to the edit input (e.g., rephrasings of the Prime Minister question). The most obvious way to make such an edit is simply to fine-tune on the single example to be corrected with its new label, as in the sketch below. However, fine-tuning on a single sample tends to overfit, even when the distance between the pre- and post-fine-tuning parameters is limited.
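That naive baseline fits in a few lines. The snippet below uses Hugging Face Transformers with GPT-2 purely as an illustration (the model choice, prompts, learning rate, and number of steps are assumptions, not taken from the article): it fine-tunes on the single corrected example and then compares language-model losses before and after the edit as rough proxies for reliability and locality.

```python
# Naive model-editing baseline: fine-tune on one corrected example and check drift.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: the article names no specific model; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

edit_text = "Q: Who is the Prime Minister of the United Kingdom? A: Boris Johnson"
unrelated_text = "Q: What sports team does Messi play for?"

def lm_loss(text):
    """Language-model loss of the current model on a piece of text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

before_edit, before_unrelated = lm_loss(edit_text), lm_loss(unrelated_text)

# The naive edit: a few gradient steps on the single corrected example.
ids = tok(edit_text, return_tensors="pt").input_ids
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train()
for _ in range(10):
    loss = model(ids, labels=ids).loss
    opt.zero_grad()
    loss.backward()
    opt.step()
model.eval()

# Reliability: loss on the corrected fact should drop.
# Locality: loss on the unrelated prompt should stay roughly unchanged; in practice
# this naive baseline often drifts, which is the overfitting problem discussed here.
print("edit loss:      %.3f -> %.3f" % (before_edit, lm_loss(edit_text)))
print("unrelated loss: %.3f -> %.3f" % (before_unrelated, lm_loss(unrelated_text)))
```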
This overfitting causes failures of both locality and generality. Fine-tuning on the edit example while continuing training on the original training set improves locality, but experiments show it still falls short on generality. It also requires continuous access to the entire training set at test time and is more computationally demanding. Recent research has therefore explored methods for learning to make model edits. The researchers present a bi-level meta-learning objective for determining a model initialization for which standard fine-tuning on a single edit example yields useful modifications, sketched in toy form below.
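For intuition only, here is a self-contained toy sketch of such a bi-level objective in PyTorch. It is not the researchers’ actual method: the tiny linear model, synthetic data, single inner step, and KL-based locality penalty are all illustrative assumptions. The outer loop learns an initialization for which one ordinary fine-tuning step on a single “edit” example fixes that example while barely changing predictions on unrelated inputs.

```python
# Toy bi-level meta-learning sketch: learn an initialization that makes
# single-example fine-tuning both reliable and local (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)
model = nn.Linear(8, 2)                      # stand-in for a large network
params = {k: v.detach().clone().requires_grad_(True)
          for k, v in model.named_parameters()}
outer_opt = torch.optim.Adam(list(params.values()), lr=1e-2)
inner_lr = 0.5

def batch(n):
    """Synthetic data: label is whether the feature sum is positive."""
    x = torch.randn(n, 8)
    return x, (x.sum(dim=1) > 0).long()

for step in range(300):
    x_edit, y_edit = batch(1)                # the single example to correct
    x_loc, _ = batch(32)                     # unrelated (locality) inputs

    # Inner loop: one standard fine-tuning step on the edit example, keeping
    # the graph so the outer objective can differentiate through the update.
    inner_loss = F.cross_entropy(functional_call(model, params, (x_edit,)), y_edit)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    edited = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Outer objective: the edited model must get the edit right (reliability)...
    reliability = F.cross_entropy(functional_call(model, edited, (x_edit,)), y_edit)
    # ...and must not drift on unrelated inputs (locality: match pre-edit predictions).
    with torch.no_grad():
        pre = F.softmax(functional_call(model, params, (x_loc,)), dim=-1)
    post = F.log_softmax(functional_call(model, edited, (x_loc,)), dim=-1)
    locality = F.kl_div(post, pre, reduction="batchmean")

    outer_opt.zero_grad()
    (reliability + locality).backward()
    outer_opt.step()
```

The design point the toy version illustrates is that the edit itself stays plain fine-tuning; what is learned in advance is an initialization (or, in the actual research, an editing network) under which that fine-tuning no longer overfits.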
The company’s technology essentially lets users take a pre-existing static 3D model and bring it to life. So, if you’re building a 3D forest in a virtual world, and you’ve got some 3D models of what you want the animals to look like, Anything World’s machine learning-powered tech will put a virtual skeleton in that animal, allowing it to move in a lifelike way.
The round comes amid waning interest in metaverse investments this year, according to data from Dealroom. Investment into startups tagged under “metaverse” on its platform dropped from a high of $2.8bn in Q2 to $446m in Q3, as low user interest affects previously hyped platforms and Mark Zuckerberg’s Meta lays off 11k employees.
Anything World cofounder Sebastian Hofer says that, while many investors have been seduced by the metaverse hype in the last year, his company is building a tool that’s also useful to clients who have no interest in jumping on the Zuckerberg bandwagon.