Oct 26, 2023

Your brain hallucinates your conscious reality

Posted by in categories: business, neuroscience, policy

Right now, billions of neurons in your brain are working together to generate a conscious experience — and not just any conscious experience, your experience of the world around you and of yourself within it. How does this happen? According to neuroscientist Anil Seth, we’re all hallucinating all the time; when we agree about our hallucinations, we call it “reality.” Join Seth for a delightfully disorienting talk that may leave you questioning the very nature of your existence.

Continue reading “Your brain hallucinates your conscious reality” »

Oct 26, 2023

Scientists Explain What Actually Happens When You Die

Posted by in categories: innovation, robotics/AI

Continue reading “Scientists Explain What Actually Happens When You Die” »

Oct 26, 2023

FBI, CISA Warn of Rising AvosLocker Ransomware Attacks Against Critical Infrastructure

Posted by in category: cybercrime/malcode

⚠️ ALERT: AvosLocker #ransomware targets US critical infrastructure. Recent joint advisory from CISA and FBI exposes their tactics — using open-source tools and stealthy techniques to compromise networks.

Read more 👉 https://thehackernews.com/2023/10/fbi-cisa-warn-of-rising-avoslocker.htm

The FBI and CISA have issued a joint advisory on the AvosLocker ransomware gang, which uses open-source tools and leaves minimal traces on compromised networks.

Oct 26, 2023

Researchers uncover mechanism for treating dangerous liver condition

Posted by in category: biotech/medical

A study spearheaded by Oregon State University has shown why certain polyunsaturated fatty acids work to combat a dangerous liver condition, opening a new avenue of drug research for a disease that currently has no FDA-approved medications.

Scientists led by Oregon State’s Natalia Shulzhenko, Andrey Morgun and Donald Jump used a technique known as multi-omic network analysis to identify the mechanism through which dietary omega-3 supplements alleviated nonalcoholic steatohepatitis, usually abbreviated to NASH.

The mechanism involves betacellulin, a protein growth factor that plays multiple positive roles in the body but also contributes to fibrosis, or scarring, and the progression to cirrhosis.

Oct 26, 2023

Critical Flaw in NextGen’s Mirth Connect Could Expose Healthcare Data

Posted by in category: futurism

🚑 Healthcare IT professionals, take note.

A critical RCE vulnerability (CVE-2023-43208) has been uncovered in Mirth Connect, a healthcare data integration platform.

Continue reading “Critical Flaw in NextGen’s Mirth Connect Could Expose Healthcare Data” »

Oct 26, 2023

AI ‘breakthrough’: neural net has human-like ability to generalize language

Posted by in categories: innovation, robotics/AI

A neural-network-based artificial intelligence outperforms ChatGPT at quickly folding new words into its lexicon, a key aspect of human intelligence.

Oct 26, 2023

Human-like systematic generalization through a meta-learning neural network

Posted by in category: robotics/AI

The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components. Fodor and Pylyshyn [1] famously argued that artificial neural networks lack this capacity and are therefore not viable models of the mind. Neural networks have advanced considerably in the years since, yet the systematicity challenge persists. Here we successfully address Fodor and Pylyshyn’s challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills. To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training…

Over 35 years ago, when Fodor and Pylyshyn raised the issue of systematicity in neural networks [1], today’s models [19] and their language skills were probably unimaginable. As a credit to Fodor and Pylyshyn’s prescience, the systematicity debate has endured. Systematicity continues to challenge models [11-18] and motivates new frameworks [34-41]. Preliminary experiments reported in Supplementary Information 3 suggest that systematicity is still a challenge, or at the very least an open question, even for recent large language models such as GPT-4. To resolve the debate, and to understand whether neural networks can capture human-like compositional skills, we must compare humans and machines side-by-side, as in this Article and other recent work [7,42,43]. In our experiments, we found that the most common human responses were algebraic and systematic in exactly the ways that Fodor and Pylyshyn [1] discuss. However, people also relied on inductive biases that sometimes support the algebraic solution and sometimes deviate from it; indeed, people are not purely algebraic machines [3,6,7]. We showed how MLC enables a standard neural network optimized for its compositional skills to mimic or exceed human systematic generalization in a side-by-side comparison. MLC shows much stronger systematicity than neural networks trained in standard ways, and shows more nuanced behaviour than pristine symbolic models. MLC also allows neural networks to tackle other existing challenges, including making systematic use of isolated primitives [11,16] and using mutual exclusivity to infer meanings [44].

Our use of MLC for behavioural modelling relates to other approaches for reverse engineering human inductive biases. Bayesian approaches enable a modeller to evaluate different representational forms and parameter settings for capturing human behaviour, as specified through the model’s prior [45]. These priors can also be tuned with behavioural data through hierarchical Bayesian modelling [46], although the resulting set-up can be restrictive. MLC shows how meta-learning can be used like hierarchical Bayesian models for reverse-engineering inductive biases (see ref. 47 for a formal connection), although with the aid of neural networks for greater expressive power. Our research adds to a growing literature, reviewed previously [48], on using meta-learning for understanding human [49,50,51] or human-like behaviour [52,53,54]. In our experiments, only MLC closely reproduced human behaviour with respect to both systematicity and biases, with the MLC (joint) model best navigating the trade-off between these two blueprints of human linguistic behaviour. Furthermore, MLC derives its abilities through meta-learning, where both systematic generalization and the human biases are not inherent properties of the neural network architecture but, instead, are induced from data.
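To make the episode-based training idea concrete, here is a minimal Python sketch (not the paper’s code; all names, primitives, and the composition rule are illustrative) of how an MLC-style episode might be built: each episode samples a fresh word-to-symbol grammar, shows a few study examples demonstrating the primitives and a rule, and then queries the rule on a primitive it was never paired with. A network trained across many such episodes is optimized for compositional generalization itself, not for any one fixed vocabulary.

```python
# Toy sketch of MLC-style episode construction (illustrative, not the paper's code).
import random

def make_episode(rng, primitives=("dax", "wif", "lug"),
                 symbols=("RED", "GREEN", "BLUE")):
    # Sample a fresh primitive -> symbol mapping that holds for this episode only.
    mapping = dict(zip(primitives, rng.sample(list(symbols), len(primitives))))

    # One simple composition rule: "<x> twice" means x's output, repeated.
    def interpret(command):
        words = command.split()
        if len(words) == 2 and words[1] == "twice":
            return [mapping[words[0]]] * 2
        return [mapping[words[0]]]

    # Study examples demonstrate each primitive, plus the rule on ONE primitive;
    # the query probes the same rule on a primitive it was never shown with,
    # which is exactly the systematic-generalization test.
    study = [(p, interpret(p)) for p in primitives]
    study.append((f"{primitives[0]} twice", interpret(f"{primitives[0]} twice")))
    query = [(f"{primitives[1]} twice", interpret(f"{primitives[1]} twice"))]
    return study, query

rng = random.Random(0)
study, query = make_episode(rng)
```

In the actual work, a sequence-to-sequence transformer receives the study examples in-context and is trained to answer the queries, so the inductive bias toward algebraic composition is induced from the distribution of episodes rather than built into the architecture.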

Continue reading “Human-like systematic generalization through a meta-learning neural network” »

Oct 26, 2023

Google plans “next generation series of models” for 2024

Posted by in category: robotics/AI

According to Alphabet’s CEO, Google’s Gemini is just the first of a series of next-generation AI models that Google plans to bring to market in 2024.

With the multimodal Gemini AI model, Google wants to at least catch up with OpenAI’s GPT-4. The model is expected to be released later this year. In the recent quarterly earnings call, Alphabet CEO Sundar Pichai said that Google is “getting the model ready”.

Gemini will be released in different sizes and with different capabilities, and will be used for all internal products immediately, Pichai said. So it is likely that Gemini will replace Google’s current PaLM-2 language model. Developers and cloud customers will get access through Vertex AI.

Oct 26, 2023

Researchers develop ‘Woodpecker’: A groundbreaking solution to AI’s hallucination problem

Posted by in categories: innovation, robotics/AI

A group of artificial intelligence researchers from the University of Science and Technology of China (USTC) and Tencent YouTu Lab have developed an innovative framework, dubbed “Woodpecker”, designed to correct hallucinations in multimodal large language models (MLLMs).

The research paper outlining this groundbreaking approach was published on the pre-print server arXiv under the title “Woodpecker: Hallucination Correction for Multimodal Large Language Models.”

Oct 25, 2023

The Unlikely Solution to Microplastic Pollution: Magnets?

Posted by in categories: biotech/medical, computing, health, transportation

Magnets are magnificent. Made of iron, aluminum, nickel, cobalt, and various other metals, they’re used in compasses for navigation, in medical imaging machines to see inside the human body, in kitchens to keep cabinets and refrigerators closed, in computers to store data, and in proposed high-speed “hyperloop” trains designed to travel at speeds of up to 760 miles per hour.

For environmentalists, however, the most exciting use yet for magnets might be a newly discovered application out of Australia’s Royal Melbourne Institute of Technology, otherwise known as RMIT University: Using magnets, researchers there have discovered a novel way of removing harmful microplastics from water.

“[Microplastics] can take up to 450 years to degrade, are not detectable and removable through conventional treatment systems, resulting in millions of tons being released into the sea every year,” co-lead researcher Nasir Mahmood said in a statement. “This is not only harmful for aquatic life, but also has significant negative impacts on human health.”