
Many scientific problems can be formulated as sparse regression, i.e., regression onto a set of parameters when there is a desire or expectation that some of the parameters are exactly zero or do not substantially contribute. This includes many problems in signal and image processing, system identification, optimization, and parameter estimation methods such as Gaussian process regression. Sparsity facilitates exploring high-dimensional spaces while finding parsimonious and interpretable solutions. In the present work, we illustrate some of the important ways in which sparse regression appears in plasma physics and point out recent contributions and remaining challenges to solving these problems in this field. A brief review is provided for the optimization problem and the state-of-the-art solvers, especially for constrained and high-dimensional sparse regression.
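To make the underlying optimization concrete: sparse regression is most commonly posed as l1-regularized least squares (the LASSO), minimizing 0.5*||A x - b||^2 + lambda*||x||_1, where the l1 penalty drives some coefficients exactly to zero. Below is a minimal Python sketch of one standard solver, iterative soft-thresholding (ISTA); the toy matrix A, data b, and penalty lambda are illustrative assumptions, not values from the paper.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Solve min_x 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient: sigma_max(A)^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)       # gradient of the smooth least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Hypothetical toy problem: 20 candidate parameters, only 3 truly nonzero.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(100)

x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 1e-3))   # should recover the true support [2, 7, 15]

The soft-thresholding step is what produces exact zeros, and hence the parsimony and interpretability the abstract refers to; the constrained and high-dimensional settings it mentions typically replace plain ISTA with accelerated or specialized solvers.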

Having more tools helps; having the right tools is better. Utilizing multiple dimensions may simplify difficult problems—not only in science fiction but also in physics—and tie together conflicting theories.

For example, Einstein’s theory of general relativity, in which gravity resides in the fabric of space-time warped by planetary or other massive objects, explains how gravity works in most cases. However, the theory breaks down under extreme conditions, such as those existing in black holes and the cosmic primordial soup.

An approach known as superstring theory could use another dimension to help bridge Einstein’s theory with quantum mechanics, solving many of these problems. But the necessary evidence to support this proposal has been lacking.

Researchers at the University of Tokyo, JST PRESTO, Ludwig-Maximilians-Universität and Kindai University recently demonstrated the modulation of an electron source by applying laser light to a single fullerene molecule. Their study, published in Physical Review Letters, could pave the way for the development of better-performing computers and microscopic imaging devices.

“By irradiating a sharp metallic needle with laser light, we had previously demonstrated optical control of electron emission sites on a scale of approximately 10 nm,” Hirofumi Yanagisawa, one of the researchers who carried out the study, told Phys.org. “The optical control was achieved using plasmonic effects, but it was technically difficult to miniaturize such an electron source using the same principle. We were seeking a way to miniaturize the electron source, and we hit upon the idea of using a single fullerene molecule and its molecular orbitals.”

Yanagisawa and his colleagues set out to realize their idea experimentally using electrons emitted from molecules on a sharp metallic needle. However, they were well aware of the challenges ahead, given unresolved difficulties associated with the use of electron emission from molecule-covered needles.

Anti-AI / AI ethics clowns now pushing .gov for some criminalization, on cue.


A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and halt releases of GPT-4.

OpenAI “has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment,” said a complaint to the FTC submitted today by the Center for Artificial Intelligence and Digital Policy (CAIDP).

Calling for “independent oversight and evaluation of commercial AI products offered in the United States,” CAIDP asked the FTC to “open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

Derek Thompson published an essay in the Atlantic last week that pondered an intriguing question: “When we’re looking at generative AI, what are we actually looking at?” The essay was framed like this: “Narrowly speaking, GPT-4 is a large language model that produces human-inspired content by using transformer technology to predict text. Narrowly speaking, it is an overconfident, and often hallucinatory, auto-complete robot. This is an okay way of describing the technology, if you’re content with a dictionary definition.”


He closes his essay with one last analogy, one that really makes you think about the as-yet unforeseen consequences of generative AI technologies, good or bad: Scientists don’t know exactly how or when humans first wrangled fire as a technology, roughly 1 million years ago. But we have a good idea of how fire invented modern humanity … fire softened meat and vegetables, allowing humans to accelerate their calorie consumption. Meanwhile, by scaring off predators, controlled fire allowed humans to sleep on the ground for longer periods of time. The combination of more calories and more REM over the millennia allowed us to grow big, unusually energy-greedy brains with sharpened capacities for memory and prediction. Narrowly, fire made stuff hotter. But it also quite literally expanded our minds … Our ancestors knew that open flame was a feral power, which deserved reverence and even fear. The same technology that made civilization possible also flattened cities.

Thompson concisely passes judgment about what he thinks generative AI will do to us in his final sentence: I think this technology will expand our minds. And I think it will burn us.

Thompson’s essay inadvertently but quite poetically illustrates why it’s so difficult to predict events and consequences too far into the future. Scientists and philosophers have long studied how knowledge expands from its current state into novel directions of thought.