
Toward a Complete Theory of Crystal Vibrations

A new set of equations captures the dynamical interplay of electrons and vibrations in crystals and forms a basis for computational studies.

Although a crystal is a highly ordered structure, it is never at rest: its atoms are constantly vibrating about their equilibrium positions—even down to zero temperature. The quanta of these vibrations are called phonons, and their interaction with the electrons that hold the crystal together is partly responsible for the crystal’s optical properties, its ability to conduct heat or electricity, and even its vanishing electrical resistance if it is superconducting. Predicting, or at least understanding, such properties requires an accurate description of the interplay of electrons and phonons. This task is formidable given that the electronic problem alone—assuming that the atomic nuclei stand still—is already challenging and lacks an exact solution. Now, building on a long series of earlier milestones, Gianluca Stefanucci of the Tor Vergata University of Rome and colleagues have made an important step toward a complete theory of electrons and phonons [1].

At a low level of theory, the electron–phonon problem is easily formulated. First, one considers an arrangement of massive point charges representing electrons and atomic nuclei. Second, one lets these charges evolve under Coulomb’s law and the Schrödinger equation, possibly introducing some perturbation from time to time. The mathematical representation of the energy of such a system, consisting of kinetic and interaction terms, is the system’s Hamiltonian. However, knowing the exact theory is not enough because the corresponding equations are only formally simple. In practice, they are far too complex—not least owing to the huge number of particles involved—so that approximations are needed. Hence, at a high level, a workable theory should provide the means to make reasonable approximations yielding equations that can be solved on today’s computers.
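In schematic form, the Hamiltonian described above collects the kinetic energies of the electrons and nuclei together with their mutual Coulomb interactions. A textbook sketch (in Gaussian units, with electron positions $\mathbf{r}_i$, nuclear positions $\mathbf{R}_n$, and nuclear charges $Z_n$; the notation is illustrative, not taken from the paper) reads:

```latex
H = \sum_i \frac{\mathbf{p}_i^2}{2m}
  + \sum_n \frac{\mathbf{P}_n^2}{2M_n}
  + \frac{1}{2}\sum_{i \neq j} \frac{e^2}{|\mathbf{r}_i - \mathbf{r}_j|}
  - \sum_{i,n} \frac{Z_n e^2}{|\mathbf{r}_i - \mathbf{R}_n|}
  + \frac{1}{2}\sum_{n \neq m} \frac{Z_n Z_m e^2}{|\mathbf{R}_n - \mathbf{R}_m|}
```

Expanding the nuclear positions about their equilibrium values turns the last term and the nuclear kinetic term into phonon modes, and the electron–nucleus term into the electron–phonon coupling—the quantity that any workable approximation scheme must handle.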

Einstein’s most famous equation has been used to make matter from light particles

According to Einstein’s theory of special relativity, first published in 1905, light can be converted into matter when two sufficiently energetic light particles collide. But, try as they might, scientists had never been able to do this: no one could create the conditions needed to transform light into matter, until now.

Physicists claim to have generated matter from pure light for the first time — a spectacular display of Einstein’s most famous equation.

This is a significant breakthrough, overcoming a barrier that seemed insurmountable only a few decades ago.
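The energy threshold implied by E = mc² is easy to check numerically. The sketch below assumes the simplest head-on two-photon (Breit–Wheeler) kinematics, where the invariant mass squared is s = 4·E₁·E₂ and creating an electron–positron pair requires √s ≥ 2·mₑc² (mₑc² ≈ 0.511 MeV); the function name and scenario are illustrative, not from the article.

```python
# Threshold check for two-photon pair production:
# gamma + gamma -> e+ + e-  requires sqrt(s) >= 2 * m_e c^2.
# For head-on photons, s = 4 * E1 * E2, so the condition reduces
# to E1 * E2 >= (m_e c^2)^2.

M_E_C2_MEV = 0.511  # electron rest energy, MeV

def pair_production_possible(e1_mev: float, e2_mev: float) -> bool:
    """True if two head-on photons of the given energies (MeV)
    carry enough center-of-mass energy to make an e+/e- pair."""
    return e1_mev * e2_mev >= M_E_C2_MEV ** 2

# Two 1-MeV gamma rays colliding head-on are above threshold:
print(pair_production_possible(1.0, 1.0))   # True
# A visible-light photon (~2 eV) plus a 1-MeV gamma is far below it:
print(pair_production_possible(2e-6, 1.0))  # False
```

The huge gap between these two cases is why photon–photon pair production stayed out of experimental reach for so long: ordinary light sources fall short of the threshold by many orders of magnitude.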

Neuromorphic computing could lead to self-learning machines

German scientists present a method by which AI could be trained much more efficiently.

In the last couple of years, research institutions have been exploring new concepts for how computers could process data in the future. One of these concepts is known as neuromorphic computing. Neuromorphic computing may sound similar to artificial neural networks, but it has little to do with them: rather than simulating networks in software, it mimics the brain’s structure directly in hardware.

Unlike traditional artificial-intelligence algorithms, which must be trained on large amounts of data before they become effective, neuromorphic computing systems can learn and adapt on the fly.

Machine learning masters massive data sets

A machine-learning algorithm demonstrated the capability to process data that exceeds a computer’s available memory by identifying a massive data set’s key features and dividing them into manageable batches that don’t choke computer hardware. Developed at Los Alamos National Laboratory, the algorithm set a world record for factorizing huge data sets during a test run on Oak Ridge National Laboratory’s Summit, the world’s fifth-fastest supercomputer.

Equally efficient on laptops and supercomputers, the highly scalable algorithm solves hardware bottlenecks that prevent the processing of information from data-rich applications in social media networks, national security science and earthquake research, to name just a few.

“We developed an ‘out-of-memory’ implementation of the non-negative matrix factorization method that allows you to factorize data sets larger than previously possible on a given hardware,” said Ismael Boureima, a computational physicist at Los Alamos National Laboratory. Boureima is first author of the paper in The Journal of Supercomputing on the record-breaking algorithm.
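The core idea of an out-of-memory factorization can be sketched in a few lines. The toy below is not the Los Alamos implementation (which targets distributed GPUs); it is a minimal NumPy illustration, under assumed names, of multiplicative-update non-negative matrix factorization X ≈ W·H where X is only ever touched one row batch at a time, so the full matrix never needs to fit in memory.

```python
import numpy as np

def nmf_out_of_core(load_batch, n_batches, batch_rows, m, k,
                    n_iter=200, eps=1e-9, seed=0):
    """Toy batched NMF: X ≈ vstack(W) @ H with X, W, H >= 0.
    load_batch(i) returns row-batch i of X, shape (batch_rows, m);
    in a real out-of-memory setting it would stream from disk."""
    rng = np.random.default_rng(seed)
    W = [rng.random((batch_rows, k)) for _ in range(n_batches)]
    H = rng.random((k, m))
    for _ in range(n_iter):
        # Accumulate the H-update numerator (W^T X) and Gram matrix
        # (W^T W) batch by batch -- only one batch is in memory.
        num = np.zeros((k, m))
        gram = np.zeros((k, k))
        for i in range(n_batches):
            Xb = load_batch(i)
            num += W[i].T @ Xb
            gram += W[i].T @ W[i]
        H *= num / (gram @ H + eps)          # multiplicative update for H
        HHt = H @ H.T
        for i in range(n_batches):           # each W-batch updates locally
            Xb = load_batch(i)
            W[i] *= (Xb @ H.T) / (W[i] @ HHt + eps)
    return W, H
```

Because the H update only needs the running sums W&#8203;ᵀX and WᵀW, and each W-batch update only needs its own rows of X plus the small k×k matrix HHᵀ, memory use is set by the batch size and the factor rank, not by the full data set.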

How x-ray vision is becoming a reality | Tara Boroushaki | TEDxMIT Salon

This talk is about how wireless signals can be fused with vision and other sensing modalities through AI algorithms to give humans and robots X-ray vision, letting them see objects hidden inside boxes or behind other objects.

Tara Boroushaki is a Ph.D. student at MIT. Her research focuses on fusing radio frequency (RF) sensing with vision through artificial intelligence. She designs algorithms and builds systems that leverage such fusion to enable capabilities that were not feasible before in applications spanning augmented reality, virtual reality, robotics, smart homes, and smart manufacturing. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Time reveals 100 most influential AI leaders

The discourse around Artificial Intelligence (AI) often hinges on the paradoxical duality of its nature: while it mirrors human cognition to an extraordinary extent, its capacity to transcend our limitations is both awe-inspiring and unsettling. At the heart of this growing field lie the algorithms and the people who control these powerful computational tools.

This brings us to TIME’s recent endeavor: the TIME100 Most Influential People in AI. This meticulously curated list casts light on the people pushing AI’s boundaries and shaping its ethical framework.


Source: TIME

We’re All Programmers Now

Generative AI and other easy-to-use software tools can help employees with no coding background become adept programmers, or what the authors call citizen developers. By simply describing what they want in a prompt, citizen developers can collaborate with these tools to build entire applications—a process that until recently would have required advanced programming fluency.

Information technology has historically involved builders (IT professionals) and users (all other employees), with users being relatively powerless operators of the technology. That way of working often means IT professionals struggle to meet demand in a timely fashion, and communication problems arise among technical experts, business leaders, and application users.

Citizen development raises a critical question about the ultimate fate of IT organizations. How will they facilitate and safeguard the process without placing too many obstacles in its path? To reject its benefits is impractical, but to manage it carelessly may be worse. In this article the authors share a road map for successfully introducing citizen development to your employees.

Experts alone can’t handle AI — social scientists explain why the public needs a seat at the table

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already…


Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove, California, for a small expert-only meeting to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.
