
Artificial neural networks (ANNs) show a remarkable pattern when trained on natural data: irrespective of exact initialization, dataset, or training objective, models trained on the same data domain converge to similar learned features. For example, the initial-layer weights of different image models tend to converge to Gabor filters and color-contrast detectors. Many such features appear to be universal across both artificial and biological systems; the same kinds of filters are observed, for instance, in the visual cortex. These findings are well established in the machine interpretability literature, but they lack a theoretical explanation.

The most commonly observed universal features in image models are localized versions of the canonical 2D Fourier basis functions, e.g., Gabor filters or wavelets. These Fourier features appear in the initial layers of vision models trained on objectives as varied as efficient coding, classification, temporal coherence, and next-step prediction. Beyond this, non-localized Fourier features have been observed in networks trained on tasks where cyclic wraparound is allowed, for example, modular arithmetic, more general group compositions, or invariance to the group of cyclic translations.

Researchers from KTH, the Redwood Center for Theoretical Neuroscience, and UC Santa Barbara have introduced a mathematical explanation for the emergence of Fourier features in learning systems such as neural networks. The emergence is attributed to the downstream invariance of the learner, which becomes insensitive to certain transformations, e.g., planar translation or rotation. The team derived theoretical guarantees regarding Fourier features in invariant learners that apply across different machine-learning models. The derivation rests on the idea that invariance is a fundamental bias that is injected implicitly, and sometimes explicitly, into learning systems because of the symmetries present in natural data.
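As a rough intuition for why invariance to cyclic translations and Fourier features go together (a minimal sketch of a standard linear-algebra fact, not the authors' derivation): any linear map that commutes with cyclic shifts is a circulant matrix, and every circulant matrix is diagonalized by the discrete Fourier transform, so the non-localized Fourier modes are exactly its eigenvectors.

```python
# Minimal sketch: shift-equivariant linear maps are diagonal in the Fourier basis.
import numpy as np

n = 8
rng = np.random.default_rng(0)
c = rng.normal(size=n)                                    # first column of a random circulant
C = np.stack([np.roll(c, j) for j in range(n)], axis=1)   # circulant = commutes with cyclic shifts

# Unitary DFT matrix: column k is the Fourier mode exp(2*pi*i*row*k/n)/sqrt(n)
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * rows * cols / n) / np.sqrt(n)

# Change to the Fourier basis: the result is (numerically) diagonal,
# i.e. the Fourier modes are eigenvectors of every shift-equivariant map.
D = F.conj().T @ C @ F
print(np.allclose(D, np.diag(np.diag(D)), atol=1e-10))    # -> True
```

This is consistent with the observation above that non-localized Fourier features show up in networks trained on modular arithmetic, cyclic translations, and other tasks with wraparound structure.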

Once a buzzword, neuromorphic engineering is gaining traction in the semiconductor industry.

Neuromorphic engineering is finally getting closer to market reality, propelled by the AI/ML-driven need for low-power, high-performance solutions.

Whether current initiatives result in true neuromorphic devices, or whether devices will be inspired by neuromorphic concepts, remains to be seen. But academic and industry researchers continue to experiment in the hopes of achieving significant improvements in computational performance using less energy.

The letter, which is being sent to tech companies around the world this week, marks an escalation of the music group’s attempts to stop the melodies, lyrics and images from copyrighted songs and artists being used by tech companies to produce new versions or to train systems to create their own music.

The letter says that Sony Music and its artists “recognise the significant potential and advancement of artificial intelligence” but adds that “unauthorised use… in the training, development or commercialisation of AI systems deprives [Sony] of control over and appropriate compensation”.

It says: “This letter serves to put you on notice directly, and reiterate, that [Sony’s labels] expressly prohibit any use of [their] content.”

Building a useful quantum computer in practice is incredibly challenging. Significant improvements are needed in the scale, fidelity, speed, reliability, and programmability of quantum computers to fully realize their benefits. Powerful tools are needed to help with the many complex physics and engineering challenges that stand in the way of useful quantum computing.

AI is fundamentally transforming the landscape of technology, reshaping industries, and altering how we interact with the digital world. The ability to take data and generate intelligence paves the way for groundbreaking solutions to some of the most challenging problems facing society today. From personalized medicine to autonomous vehicles, AI is at the forefront of a technological revolution that promises to redefine the future, including how we address the challenging problems standing in the way of useful quantum computing.

Quantum computers will integrate with conventional supercomputers and accelerate key parts of challenging problems relevant to government, academia, and industry. This relationship is described in An Introduction to Quantum Accelerated Supercomputing. The advantages of integrating quantum computers with supercomputers are reciprocal, and this tight integration will also enable AI to help solve the most important challenges standing in the way of useful quantum computing.

An Oakland, California, school district is the first in the US to transition to a 100% electric school bus system with vehicle-to-grid (V2G) technology.

Modern student transportation platform Zum has provided Oakland Unified School District with a fleet of 74 electric school buses and bidirectional chargers. Utility Pacific Gas and Electric (PG&E) supplied 2.7 megawatts (MW) of load to Zum’s Oakland EV-ready facility. The fleet will be managed through Zum’s AI-enabled technology platform.

“Oakland becoming the first in the nation to have a 100% electric school bus fleet is a huge win for the Oakland community and the nation as a whole,” said Kim Raney, executive director of transportation at Oakland Unified School District. “The families of Oakland are disproportionately disadvantaged and affected by high rates of asthma and exposure to air pollution from diesel fuels.”

As AI systems have grown in sophistication, so has their capacity for deception, according to a new analysis from researchers at Massachusetts Institute of Technology (MIT). Dr Peter Park, an AI existential safety researcher at MIT and author of the research, tells Ian Sample about the different examples of deception he uncovered, and why they will be so difficult to tackle as long as AI remains a black box.


Listen to the Guardian’s Black Box series all about humans and artificial intelligence.