Latest posts

Jul 25, 2024

Enhanced Database Aids Wildfire Managers in Predicting Fires

Posted in category: biotech/medical

“There is a tremendous amount of interest in what enables wildfire ignitions and what can be done to prevent them,” said Dr. Erica Fleishman. “This database increases the ability to access relevant information and contribute to wildfire preparedness and prevention.”


Can wildfires be predicted far enough in advance to put safeguards in place before they spread? A recent study published in Earth System Science Data addresses this question: a team of researchers has developed a database to help firefighters and power companies establish protocols that could reduce a wildfire’s spread before it grows too large.

Wildfire closure sign seen in the Oregon Cascades in September 2020. (Credit: Oregon State University)

Jul 25, 2024

LLNL researchers uncover key to resolving long-standing ICF hohlraum drive deficit

Posted in category: energy

In inertial confinement fusion experiments, lasers at Lawrence Livermore National Laboratory’s National Ignition Facility focus on a tiny fuel capsule suspended inside a cylindrical x-ray oven called a hohlraum. (Photo: Jason Laurea)

Jul 25, 2024

Researchers Develop Method for High-Capacity, Secure Quantum Communication Using Qudits

Posted in categories: computing, quantum physics

The Quantum Insider (TQI) is the leading online resource dedicated exclusively to Quantum Computing.

Jul 25, 2024

OpenAI announces SearchGPT, its AI-powered search engine

Posted in category: robotics/AI

SearchGPT is just a “prototype” for now. The service is powered by the GPT-4 family of models and will only be accessible to 10,000 test users at launch, OpenAI spokesperson Kayla Wood tells The Verge. Wood says that OpenAI is working with third-party partners and using direct content feeds to build its search results. The goal is to eventually integrate the search features directly into ChatGPT.

It’s the start of what could become a meaningful threat to Google, which has rushed to bake in AI features across its search engine, fearing that users will flock to competing products that offer the tools first. It also puts OpenAI in more direct competition with the startup Perplexity, which bills itself as an AI “answer” engine. Perplexity has recently come under criticism for an AI summaries feature that publishers claimed was directly ripping off their work.

Jul 25, 2024

AI could enhance almost two-thirds of British jobs, claims Google

Posted in categories: employment, robotics/AI

Research commissioned by Google estimates 31% of jobs would be insulated from AI and 61% radically transformed by it.

Jul 25, 2024

Did abstract mathematics exist before the big bang?

Posted in categories: cosmology, mathematics

Did abstract mathematics, such as Pythagoras’s theorem, exist before the big bang?

Simon McLeish Lechlade, Gloucestershire, UK

The notion of the existence of mathematical ideas is a complex one.

Jul 25, 2024

Network properties determine neural network performance

Posted in categories: information science, mapping, mathematics, mobile phones, robotics/AI, transportation

Machine learning influences numerous aspects of modern society, empowers new technologies, from AlphaGo to ChatGPT, and increasingly materializes in consumer products such as smartphones and self-driving cars. Despite the vital role and broad applications of artificial neural networks, we lack systematic approaches, such as network science, to understand their underlying mechanism. The difficulty is rooted in many possible model configurations, each with different hyper-parameters and weighted architectures determined by noisy data. We bridge the gap by developing a mathematical framework that maps the neural network’s performance to the network characteristics of the line graph governed by the edge dynamics of stochastic gradient descent differential equations. This framework enables us to derive a neural capacitance metric to universally capture a model’s generalization capability on a downstream task and predict model performance using only early training results. The numerical results on 17 pre-trained ImageNet models across five benchmark datasets and one NAS benchmark indicate that our neural capacitance metric is a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods.
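The paper’s neural capacitance metric is its own construction, derived from the line graph of the network. As a loose illustration of the general idea it serves — ranking candidate models using only early training results — here is a minimal sketch (not the authors’ method) that extrapolates synthetic learning curves with a simple power-law fit; the two model curves are invented for the example:

```python
import numpy as np

def extrapolate_final_loss(epochs, losses, target_epoch):
    # Fit a power law, loss ~ a * epoch**(-b), via linear regression
    # in log-log space, then predict the loss at target_epoch.
    slope, log_a = np.polyfit(np.log(epochs), np.log(losses), 1)
    return np.exp(log_a) * target_epoch ** slope

# Two hypothetical models, observed for only 5 early epochs.
epochs = np.array([1, 2, 3, 4, 5])
model_a = 2.0 * epochs ** -0.3   # better early loss, decays slowly
model_b = 2.5 * epochs ** -0.6   # worse early loss, decays faster

pred_a = extrapolate_final_loss(epochs, model_a, 100)
pred_b = extrapolate_final_loss(epochs, model_b, 100)
print(f"predicted loss at epoch 100: A={pred_a:.3f}, B={pred_b:.3f}")
```

The point of selecting on extrapolated rather than current loss: model B looks worse after five epochs but is predicted to overtake model A well before epoch 100, which is the kind of early-training ranking the neural capacitance metric aims to make reliable.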

Jul 25, 2024

JWST And Hubble Agree on The Universe’s Expansion, And It’s a Major Problem

Posted in category: futurism

New data confirms that the discrepancy in measurements of our Universe’s expansion rate (the Hubble ‘tension’) is not a measurement error.

Jul 25, 2024

Cancer Drug Shows Promise for Autism Cognitive Function

Posted in categories: biotech/medical, robotics/AI

Summary: A new experimental cancer drug could ease cognitive difficulties for those with Rett syndrome, a rare autism-linked disorder, by enhancing brain cell functions. The drug, ADH-503, improves the activity of microglia, which are crucial for maintaining neural networks.

Researchers found that healthy microglia restored synapse function in brain organoids mimicking Rett syndrome. This breakthrough suggests potential therapies for Rett syndrome and other neurological conditions.

Jul 25, 2024

Using AI to train AI: Model collapse could be coming for LLMs, say researchers

Posted in categories: internet, mathematics, robotics/AI

Using AI-generated datasets to train future generations of machine learning models may pollute their output, a concept known as model collapse, according to a new paper published in Nature. The research shows that within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models.

Generative AI tools such as large language models (LLMs) have grown in popularity and have been primarily trained using human-generated inputs. However, as these AI models continue to proliferate across the Internet, computer-generated content may be used to train other AI models—or themselves—in a recursive loop.

Ilia Shumailov and colleagues present mathematical models to illustrate how AI models may experience model collapse. The authors demonstrate that an AI may overlook certain outputs (for example, less common lines of text) in training data, causing it to train itself on only a portion of the dataset.
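As a toy illustration of the tail-loss mechanism described above (not the paper’s mathematical model), the sketch below repeatedly fits a frequency “model” to its own samples. Rare tokens that fail to appear in one generation’s output get probability zero and can never return:

```python
import random
from collections import Counter

random.seed(0)

VOCAB = "abcdefgh"
# Human-written "corpus": Zipf-like token frequencies, so some tokens are rare.
data = random.choices(VOCAB, weights=[128, 64, 32, 16, 8, 4, 2, 1], k=2000)

def train_and_generate(corpus, k=2000):
    # Toy "model": estimate token frequencies from the training corpus,
    # then generate a new corpus by sampling from those estimates.
    counts = Counter(corpus)
    tokens = list(counts)
    return random.choices(tokens, weights=[counts[t] for t in tokens], k=k)

vocab_sizes = [len(set(data))]
for generation in range(30):
    # Each generation trains only on the previous generation's output.
    data = train_and_generate(data)
    vocab_sizes.append(len(set(data)))

print("vocabulary size per generation:", vocab_sizes)
```

Because an absent token can never be resampled, the vocabulary can only shrink across generations — a crude analogue of less common text disappearing from recursively trained models.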
