“This is a really nice way of incorporating something you know about your physical system deep inside your machine-learning scheme. It goes far beyond just performing feature engineering on your data samples or simple inductive biases,” Schäfer says.

This generative classifier can determine what phase the system is in given some parameter, like temperature or pressure. And because the researchers directly approximate the probability distributions underlying measurements from the physical system, the classifier has knowledge of the system built in.

This enables their method to perform better than other machine-learning techniques. And because it can work automatically without the need for extensive training, their approach significantly enhances the computational efficiency of identifying phase transitions.
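To make the idea concrete, here is a minimal sketch of a generative classifier, not the researchers' implementation: it assumes a hypothetical scalar observable (say, magnetization) whose per-phase distributions are approximated by Gaussians, and it labels a new measurement with the phase whose fitted distribution assigns it the highest likelihood.

```python
# Minimal generative-classifier sketch for phase identification.
# Hypothetical setup: approximate the distribution of one measured
# observable separately in each phase, then label a new measurement
# by the phase whose distribution explains it best.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy "measurements": ordered phase clusters near m ~ 0.9,
# disordered phase near m ~ 0. Invented for illustration.
ordered = rng.normal(loc=0.9, scale=0.05, size=1000)
disordered = rng.normal(loc=0.0, scale=0.3, size=1000)

# Fit a simple density model (here: one Gaussian) per phase.
models = {
    "ordered": norm(ordered.mean(), ordered.std()),
    "disordered": norm(disordered.mean(), disordered.std()),
}

def classify(measurement):
    """Return the phase whose fitted density gives the highest likelihood."""
    return max(models, key=lambda phase: models[phase].pdf(measurement))

print(classify(0.85))  # -> "ordered"
print(classify(0.10))  # -> "disordered"
```

In the actual work the distributions are approximated directly from the physical system rather than fitted to toy data, which is what gives the classifier its built-in system knowledge.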

Summary: Researchers use AI to reveal distinct cellular-level differences in the brains of men and women, focusing on white matter. These findings show AI can accurately identify sex-based brain patterns invisible to human eyes.

The study suggests that understanding these differences can enhance diagnostic tools and treatments for brain disorders. This research emphasizes the need for diversity in brain studies to ensure comprehensive insights into neurological diseases.

Micius is considered quantum’s “Sputnik” moment, prompting American policymakers to funnel hundreds of millions of dollars into quantum information science via the National Quantum Initiative. Bills such as the Innovation and Competition Act of 2021 have provided $1.5 billion for communications research, including quantum technology. The Biden Administration’s proposed 2024 budget includes $25 billion for “emerging technologies,” including AI and quantum. Ultimately, a sufficiently powerful quantum computer could render much of today’s widely deployed public-key cryptography obsolete, presenting a security migraine for governments and corporations everywhere.

Quantum’s potential to turbocharge AI also applies to the simmering technology competition between the world’s superpowers. In 2021, the U.S. Commerce Department added eight Chinese quantum computing organizations to its Entity List, claiming they “support the military modernization of the People’s Liberation Army” and adopt American technologies to develop “counter-stealth and counter-submarine applications, and the ability to break encryption.”

These restrictions dovetail with a raft of measures targeting China’s AI ambitions, including last year’s move to block Nvidia from selling AI chips to Chinese firms. The question is whether competition between the world’s top two economies stymies overall progress on AI and quantum—or pushes each nation to accelerate these technologies. The answer could have far-reaching consequences.

Oscar Wilde is often credited with calling sarcasm the lowest form of wit but the highest form of intelligence. Perhaps that is because of how difficult it is to use and understand. Sarcasm is notoriously tricky to convey through text—even in person, it can be easily misinterpreted. The subtle changes in tone that convey sarcasm often confuse computer algorithms as well, limiting virtual assistants and content analysis tools.

Computer science researchers at the University of Central Florida have developed a sarcasm detector.

Social media has become a dominant form of communication for individuals and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook, and other platforms is critical for success, but it is incredibly labor intensive.

That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion—positive, negative, or neutral—associated with text. While natural language processing refers to logical data analysis and response, sentiment analysis is akin to correctly identifying emotional communication. A UCF team developed a technique that accurately detects sarcasm in social media text.
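The UCF model itself is not described in this excerpt, but the basic sentiment-analysis pipeline the passage refers to is easy to sketch. The toy example below (the tiny dataset and labels are invented for illustration) uses scikit-learn to turn text into TF-IDF features and fit a logistic-regression classifier; a real sarcasm detector needs far richer features and far more training data.

```python
# Minimal sentiment-analysis sketch (not the UCF sarcasm model):
# TF-IDF bag-of-words features feeding a logistic-regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples, invented for illustration.
texts = [
    "I love this product, it works great",
    "Fantastic service, very happy",
    "Terrible quality, broke in a day",
    "Worst purchase I have ever made",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["great product, very satisfied"])[0])  # likely "positive"
```

Sarcasm is precisely where a surface-level pipeline like this fails—“great, it broke on day one” reads as positive to a bag-of-words model—which is why detecting it requires modeling context and cue words rather than vocabulary alone.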

In an important step toward more effective gene therapies for brain diseases, researchers from the Broad Institute of MIT and Harvard have engineered a gene-delivery vehicle that uses a human protein to efficiently cross the blood-brain barrier and deliver a disease-relevant gene to the brain in mice expressing the human protein. Because the vehicle binds to a well-studied protein in the blood-brain barrier, the scientists say it has a good chance of working in patients.

Artificial neural networks (ANNs) show a remarkable pattern when trained on natural data: irrespective of exact initialization, dataset, or training objective, models trained on the same data domain converge to similar learned patterns. For example, the initial-layer weights of different image models tend to converge to Gabor filters and color-contrast detectors. Many such features suggest a form of universal representation that spans both biological and artificial systems, and the same features are observed in the visual cortex. These findings are well established empirically in the machine-interpretability literature, but they lack theoretical explanations.

Localized versions of canonical 2D Fourier basis functions, e.g., Gabor filters or wavelets, are the most commonly observed universal features in image models. When vision models are trained on tasks such as efficient coding, classification, temporal coherence, or next-step prediction, these Fourier features emerge in the models’ initial layers. Beyond this, non-localized Fourier features have been observed in networks trained on tasks where cyclic wraparound is allowed, for example modular arithmetic, more general group compositions, or invariance to the group of cyclic translations.
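A Gabor filter is exactly what “localized Fourier basis function” suggests: a sinusoidal grating (a 2D Fourier component) windowed by a Gaussian envelope. A minimal numpy construction, with arbitrary illustrative parameters:

```python
# A Gabor filter: a 2D sinusoidal grating (a Fourier basis function)
# localized by a Gaussian envelope. Parameter values are arbitrary.
import numpy as np

def gabor(size=31, wavelength=8.0, theta=0.0, sigma=5.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the grating oscillates along angle theta.
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    grating = np.cos(2 * np.pi * x_rot / wavelength)    # Fourier component
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    return grating * envelope

filt = gabor(theta=np.pi / 4)
print(filt.shape)  # (31, 31)
```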

Researchers from KTH, the Redwood Center for Theoretical Neuroscience, and UC Santa Barbara introduced a mathematical explanation for the emergence of Fourier features in learning systems such as neural networks. The emergence is traced to the downstream invariance of the learner, which becomes insensitive to certain transformations, e.g., planar translation or rotation. The team derived theoretical guarantees about Fourier features in invariant learners that apply across different machine-learning models. The derivation rests on the idea that invariance is a fundamental bias injected into learning systems, implicitly and sometimes explicitly, by the symmetries present in natural data.
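The classical linear-algebra fact underlying this connection can be checked numerically: any linear map that commutes with cyclic shifts is a circulant matrix, and the discrete Fourier basis diagonalizes every circulant matrix, so shift-invariant structure forces Fourier modes to appear as the natural features. A small numpy sanity check of that fact (not the paper’s derivation):

```python
# Classical fact behind the invariance -> Fourier connection:
# a linear map that commutes with cyclic shifts is circulant,
# and the discrete Fourier basis diagonalizes all circulant matrices.
import numpy as np

n = 8
rng = np.random.default_rng(1)

# Build a circulant matrix from a random first row (a shift-equivariant map).
c = rng.normal(size=n)
C = np.stack([np.roll(c, k) for k in range(n)])

# Check equivariance: C commutes with the cyclic-shift operator S.
S = np.roll(np.eye(n), 1, axis=1)  # S @ v cyclically shifts v
assert np.allclose(C @ S, S @ C)

# The DFT matrix diagonalizes C: F C F^{-1} is (numerically) diagonal.
F = np.fft.fft(np.eye(n))
D = F @ C @ np.linalg.inv(F)
off_diag = D - np.diag(np.diag(D))
print(np.max(np.abs(off_diag)))  # ~1e-14: Fourier modes are eigenvectors
```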

Once a buzzword, neuromorphic engineering is gaining traction in the semiconductor industry.

Neuromorphic engineering is finally getting closer to market reality, propelled by the AI/ML-driven need for low-power, high-performance solutions.

Whether current initiatives result in true neuromorphic devices, or whether devices will be inspired by neuromorphic concepts, remains to be seen. But academic and industry researchers continue to experiment in the hopes of achieving significant improvements in computational performance using less energy.