Oct 26, 2023

AI ‘breakthrough’: neural net has human-like ability to generalize language

Posted by in categories: innovation, robotics/AI

A neural-network-based artificial intelligence outperforms ChatGPT at quickly folding new words into its lexicon, a key aspect of human intelligence.

Oct 26, 2023

Human-like systematic generalization through a meta-learning neural network

Posted by in category: robotics/AI

The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components. Fodor and Pylyshyn [1] famously argued that artificial neural networks lack this capacity and are therefore not viable models of the mind. Neural networks have advanced considerably in the years since, yet the systematicity challenge persists. Here we successfully address Fodor and Pylyshyn’s challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills. To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training…


Over 35 years ago, when Fodor and Pylyshyn raised the issue of systematicity in neural networks [1], today’s models [19] and their language skills were probably unimaginable. As a credit to Fodor and Pylyshyn’s prescience, the systematicity debate has endured. Systematicity continues to challenge models [11,12,13,14,15,16,17,18] and motivates new frameworks [34,35,36,37,38,39,40,41]. Preliminary experiments reported in Supplementary Information 3 suggest that systematicity is still a challenge, or at the very least an open question, even for recent large language models such as GPT-4. To resolve the debate, and to understand whether neural networks can capture human-like compositional skills, we must compare humans and machines side-by-side, as in this Article and other recent work [7,42,43]. In our experiments, we found that the most common human responses were algebraic and systematic in exactly the ways that Fodor and Pylyshyn [1] discuss. However, people also relied on inductive biases that sometimes support the algebraic solution and sometimes deviate from it; indeed, people are not purely algebraic machines [3,6,7]. We showed how MLC enables a standard neural network optimized for its compositional skills to mimic or exceed human systematic generalization in a side-by-side comparison. MLC shows much stronger systematicity than neural networks trained in standard ways, and shows more nuanced behaviour than pristine symbolic models. MLC also allows neural networks to tackle other existing challenges, including making systematic use of isolated primitives [11,16] and using mutual exclusivity to infer meanings [44].

Our use of MLC for behavioural modelling relates to other approaches for reverse engineering human inductive biases. Bayesian approaches enable a modeller to evaluate different representational forms and parameter settings for capturing human behaviour, as specified through the model’s prior [45]. These priors can also be tuned with behavioural data through hierarchical Bayesian modelling [46], although the resulting set-up can be restrictive. MLC shows how meta-learning can be used like hierarchical Bayesian models for reverse-engineering inductive biases (see ref. 47 for a formal connection), although with the aid of neural networks for greater expressive power. Our research adds to a growing literature, reviewed previously [48], on using meta-learning for understanding human [49,50,51] or human-like behaviour [52,53,54]. In our experiments, only MLC closely reproduced human behaviour with respect to both systematicity and biases, with the MLC (joint) model best navigating the trade-off between these two blueprints of human linguistic behaviour. Furthermore, MLC derives its abilities through meta-learning, where both systematic generalization and the human biases are not inherent properties of the neural network architecture but, instead, are induced from data.
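The MLC recipe described above can be pictured with a toy sketch: each training episode samples a fresh mini-grammar, the network studies a handful of examples defining it, and the query can only be answered by recombining what the study examples showed. Everything below (the word lists, the "fep" repetition rule, the function names) is an illustrative assumption for exposition, not the authors' code or benchmark.

```python
import random

PRIMITIVES = ["dax", "wif", "lug", "zup"]
COLOURS = ["RED", "GREEN", "BLUE", "YELLOW"]

def sample_episode(rng):
    # Fresh word-to-meaning bindings every episode, so no single binding
    # can be memorized; only the compositional skill transfers.
    words = rng.sample(PRIMITIVES, 3)
    meanings = rng.sample(COLOURS, 3)
    lexicon = dict(zip(words, meanings))
    # Study examples: the primitives, plus one demonstration of the
    # (assumed) function word "fep": "x fep" -> x's meaning, three times.
    study = [(w, [lexicon[w]]) for w in words]
    study.append((f"{words[0]} fep", [lexicon[words[0]]] * 3))
    # The query demands systematic generalization: apply "fep" to a
    # primitive it was never demonstrated with.
    query_in = f"{words[1]} fep"
    query_out = [lexicon[words[1]]] * 3
    return study, (query_in, query_out)

rng = random.Random(0)
study, (query_in, query_out) = sample_episode(rng)
print(query_in, "->", query_out)
```

A seq2seq network trained across millions of such episodes is optimized for the skill of inferring and applying a grammar in context, which is the sense in which MLC induces systematicity from data rather than building it into the architecture.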

Oct 26, 2023

Google plans “next generation series of models” for 2024

Posted by in category: robotics/AI

According to Alphabet’s CEO, Google’s Gemini is just the first of a series of next-generation AI models that Google plans to bring to market in 2024.

With the multimodal Gemini AI model, Google wants to at least catch up with OpenAI’s GPT-4. The model is expected to be released later this year. In the recent quarterly earnings call, Alphabet CEO Sundar Pichai said that Google is “getting the model ready”.

Gemini will be released in different sizes and with different capabilities, and will be rolled out across all internal products immediately, Pichai said. It is therefore likely that Gemini will replace Google’s current PaLM 2 language model. Developers and cloud customers will get access through Vertex AI.

Oct 26, 2023

Researchers develop ‘Woodpecker’: A groundbreaking solution to AI’s hallucination problem

Posted by in categories: innovation, robotics/AI

A group of artificial intelligence researchers from the University of Science and Technology of China (USTC) and Tencent YouTu Lab have developed an innovative framework, dubbed “Woodpecker”, designed to correct hallucinations in multimodal large language models (MLLMs).

The research paper outlining this groundbreaking approach was published on the pre-print server arXiv under the title “Woodpecker: Hallucination Correction for Multimodal Large Language Models”.
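The paper describes Woodpecker as a training-free, post-hoc pipeline: rather than retraining the model, it inspects a generated answer, checks it against the image with expert vision models, and rewrites the unsupported parts. A structural sketch under those assumptions, with every stage stubbed out as toy string logic (the five stage names paraphrase the paper's description; the function bodies are placeholders, not the authors' implementation):

```python
def extract_key_concepts(answer):
    # Stage 1: key concept extraction -- the objects the answer mentions.
    # Toy stand-in: a fixed vocabulary lookup instead of an LLM call.
    vocab = {"dog", "cat", "frisbee", "bicycle"}
    return [w for w in answer.lower().replace(".", "").split() if w in vocab]

def formulate_questions(concepts):
    # Stage 2: question formulation, one verification question per concept.
    return {c: f"Is there a {c} in the image?" for c in concepts}

def validate_visually(questions, detected_objects):
    # Stage 3: visual knowledge validation. A real system would query an
    # object detector / VQA model; here we consult a given detection set.
    return {c: (c in detected_objects) for c in questions}

def generate_claims(evidence):
    # Stage 4: visual claim generation from the validated evidence.
    return [f"There is {'a' if ok else 'no'} {c} in the image."
            for c, ok in evidence.items()]

def correct_answer(answer, evidence):
    # Stage 5: hallucination correction -- strip unsupported objects.
    for concept, ok in evidence.items():
        if not ok:
            answer = answer.replace(f" and a {concept}", "")
    return answer

detected = {"dog", "frisbee"}  # stand-in for object-detector output
answer = "A dog playing with a frisbee and a cat."
concepts = extract_key_concepts(answer)
evidence = validate_visually(formulate_questions(concepts), detected)
corrected = correct_answer(answer, evidence)
print(corrected)  # the unsupported "cat" is removed
```

The design point is that correction happens outside the MLLM, so the same checker can in principle be bolted onto different models without touching their weights.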

Oct 25, 2023

The Unlikely Solution to Microplastic Pollution: Magnets?

Posted by in categories: biotech/medical, computing, health, transportation

Magnets are magnificent. Made of iron, aluminum, nickel, cobalt, and various other metals, they’re used in compasses for navigation, in medical imaging machines to see inside the human body, in kitchens to keep cabinets and refrigerators closed, in computers to store data and in new high-speed “hyperloop” trains designed to travel at speeds of up to 760 miles per hour.

For environmentalists, however, the most exciting use yet for magnets might be a newly discovered application out of Australia’s Royal Melbourne Institute of Technology, otherwise known as RMIT University: Using magnets, researchers there have discovered a novel way of removing harmful microplastics from water.

“[Microplastics] can take up to 450 years to degrade, are not detectable and removable through conventional treatment systems, resulting in millions of tons being released into the sea every year,” co-lead researcher Nasir Mahmood said in a statement. “This is not only harmful for aquatic life, but also has significant negative impacts on human health.”

Oct 25, 2023

How strep bacteria outsmart your immune system and why it matters for treatment

Posted by in category: biotech/medical

🦠🔬💊 https://www.news-medical.net/news/20231025/How-strep-bacteri…-treatment


Researchers discover that Group A streptococcal infections alter immunoglobulin G (IgG) homeostasis to evade the immune system, affecting the transition from local to systemic infections. The study also raises concerns about the effectiveness of antibody-based therapies, as the bacteria’s virulence factors can degrade therapeutic antibodies.

Oct 25, 2023

Tech Guy Says Books Will Be Replaced by AI-Powered “Thunks”

Posted by in category: robotics/AI

The AI guys aren’t alright.


Anaconda co-founder and CEO Peter Wang predicts that books will soon be replaced by AI-generated, interactive content trips dubbed “thunks.”

Oct 25, 2023

Researchers in China developed a hallucination correction engine for AI models

Posted by in category: robotics/AI

Scientists from the University of Science and Technology of China and Tencent’s YouTu Lab have developed a tool to combat “hallucination” by artificial intelligence (AI) models.

Oct 25, 2023

Magnetic Coulomb Phase in the Spin Ice Ho2Ti2O7

Posted by in category: quantum physics

From 2009. The poster suggests these results bear on string theory via magnetic monopoles, although the monopoles observed here are emergent quasiparticles in a spin-ice crystal, connected by Dirac strings, rather than the fundamental monopoles of high-energy physics.


Neutron scattering measurements on two spin-ice compounds show evidence for magnetic monopoles.

Oct 25, 2023

Global STEM Initiative Chapter of Uganda

Posted by in categories: cybercrime/malcode, education, robotics/AI

“Meet Kelvin Dafiaghor, a distinguished luminary in the fields of education and technology. As the Founder and Director of Ogba Educational Clinic in Nigeria, he has dedicated a decade to integrating AI and STEM education into African learning, particularly excelling in robotics and AI. His commitment extends globally, showcased by his participation in prestigious events like FINTECH Abu Dhabi in 2018 and a high-level conference in Morocco in 2019, where he advocated fervently for innovation and artificial intelligence as transformative forces in Africa. In 2021, he made a lasting impact at GISEC Dubai, emphasizing the role of AI in cybersecurity. Additionally, as the Regional Manager for Global STEM Initiative, he’s passionate about advancing STEM education worldwide.” #gsiuganda #comingsoon

Tagged: Andrew Webb-Buffington, Kelvin Ogba Dafiaghor, Josselin Lavigne, Kasule Raphael, Lorraine Tsitsi Majiri, Lily R. Asongfac, Ivan Peter Otim, Kimani Nyoike, Global STEM Initiative (GSI), RIIS LLC.