
High harmonic generation (HHG) is a highly non-linear phenomenon in which a system (for example, an atom) absorbs many photons from a laser and emits photons of much higher energy, whose frequency is a harmonic (that is, a multiple) of the incoming laser’s frequency. Historically, the theoretical description of this process was addressed from a semi-classical perspective, which treated matter (the electrons of the atoms) quantum-mechanically but the incoming light classically. According to this approach, the emitted photons should also behave classically.
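In symbols (a standard textbook shorthand, not notation taken from this article): absorbing q photons of the driving laser, each of energy ℏω₀, and emitting a single photon at the q-th harmonic means

```latex
q\,\hbar\omega_0 \;\longrightarrow\; \hbar\omega_q,
\qquad \omega_q = q\,\omega_0, \quad q = 1, 2, 3, \dots
```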

Despite this evident theoretical mismatch, the description was sufficient to carry out most of the experiments, and there was no apparent need to change the framework. Only in the last few years has the scientific community begun to explore whether the emitted light could actually exhibit a quantum behavior, which the semi-classical theory might have overlooked. Several theoretical groups, including the Quantum Optics Theory group at ICFO, have already shown that, under a full quantum description, the HHG process emits light with quantum features.

However, experimental validation of such predictions remained elusive until, recently, a team led by the Laboratoire d’Optique Appliquée (CNRS), in collaboration with ICREA Professor at ICFO Jens Biegert and multiple other institutions (Institut für Quantenoptik—Leibniz Universität Hannover, Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Friedrich-Schiller-University Jena), demonstrated the quantum optical properties of high-harmonic generation in semiconductors. The results, published in PRX Quantum, align with previous theoretical predictions about HHG.

The deep neural network models that power today’s most demanding machine-learning applications have grown so large and complex that they are pushing the limits of traditional electronic computing hardware.

Photonic hardware, which can perform machine-learning computations with light, offers a faster and more energy-efficient alternative. However, there are some types of neural network computations that a photonic device can’t perform, requiring the use of off-chip electronics or other techniques that hamper speed and efficiency.

Building on a decade of research, scientists from MIT and elsewhere have developed a new photonic chip that overcomes these roadblocks. They demonstrated a fully integrated photonic processor that can perform all the key computations of a deep neural network optically on the chip.
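The article gives no implementation details, but the common abstraction behind integrated photonic processors is that meshes of programmable interferometers apply each layer's matrix multiplication directly to optical field amplitudes, while an on-chip nonlinearity plays the role of the activation function. Below is a minimal numpy sketch of that abstraction; the mode count, layer count, and saturable-absorber-style nonlinearity are illustrative assumptions, not details of the MIT chip.

```python
import numpy as np

def random_unitary(n, rng):
    # Programmable interferometer meshes realize unitary transforms on optical
    # modes; QR decomposition of a random complex matrix gives a unitary stand-in.
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    d = np.diag(r)
    return q * (d / np.abs(d))

def optical_nonlinearity(field, saturation=1.0):
    # Toy saturable-absorber-style activation acting on optical intensity.
    intensity = np.abs(field) ** 2
    return field / np.sqrt(1.0 + intensity / saturation)

rng = np.random.default_rng(0)
x = rng.normal(size=8) + 0j           # input data encoded in field amplitudes
for _ in range(3):                    # three "layers": unitary mix + nonlinearity
    x = optical_nonlinearity(random_unitary(8, rng) @ x)
print(np.abs(x) ** 2)                 # detected output intensities
```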

Physicists at Loughborough University have made an exciting breakthrough in understanding how to fine-tune the behavior of electrons in quantum materials poised to drive the next generation of advanced technologies.

Quantum materials, such as strontium ruthenates, exhibit remarkable properties like superconductivity and magnetism, which could revolutionize areas like computing and energy storage.

However, these materials are not yet widely used in real-world applications due to the challenges in understanding the complex behavior of their electrons—the particles that carry electrical charge.

An optical lattice clock is a type of atomic clock that can be 100 times more accurate than cesium atomic clocks, the current standard for defining “seconds.” Its precision is equivalent to an error of approximately one second over 10 billion years. Owing to this exceptional accuracy, the optical lattice clock is considered a leading candidate for the next-generation “definition of the second.”
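A quick back-of-the-envelope check of the quoted figure (plain arithmetic, not taken from the article) shows that "one second over 10 billion years" corresponds to a fractional uncertainty of a few parts in 10^18:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600     # ~3.16e7 seconds
error = 1.0                               # one second of accumulated error...
interval = 1e10 * SECONDS_PER_YEAR        # ...over 10 billion years
print(f"fractional uncertainty ~ {error / interval:.1e}")  # ~3.2e-18
```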

Professor Hidetoshi Katori from the Graduate School of Engineering at The University of Tokyo has achieved a milestone by developing the world’s first compact, robust, ultrahigh-precision optical lattice clock with a device volume of 250 L.

As part of this development, the physics package for spectroscopic measurement of atomic clock transitions, along with the laser and control system used for trapping and spectroscopy of atoms, was miniaturized. This innovation reduced the device volume from the previous 920 L to 250 L, approximately one-quarter of the original size.

Tesla has unveiled its next-generation humanoid robot, the GEN-3 Teslabot, bringing with it significant advancements in the field of humanoid robotics and merging state-of-the-art engineering with a design inspired by human anatomy. This next-generation robot demonstrates exceptional dexterity and precision, setting a new benchmark for what humanoid robots can accomplish. From catching a tennis ball mid-air to envisioned tasks like threading a needle, the Teslabot is poised to reshape how robots interact with and adapt to the world around them.

Wouldn’t it be great if robots didn’t just assemble cars or vacuum your living room, but could also perform tasks requiring the finesse of human hands: threading a needle, playing a piano, or even catching a tennis ball mid-air? It sounds like science fiction, doesn’t it? Yet Tesla’s latest innovation, the GEN-3 Teslabot, is bringing us closer to that reality. With its human-inspired design and novel engineering, this robot is redefining what we thought machines could do.

But what makes the Teslabot so extraordinary? It’s not just the flashy demonstrations or its sleek design. It’s the way Tesla has managed to replicate human dexterity and precision in a machine, giving it the potential to tackle tasks we once thought only humans could handle. From its 22 degrees of freedom in the hand to its vision-driven precision, it’s a glimpse of what’s to come. Let’s dive into the details of Tesla’s GEN-3 Teslabot and explore how it’s pushing the boundaries of what’s possible.

✍️ Fabio Remondino et al.


This paper presents a critical analysis of image-based 3D reconstruction using neural radiance fields (NeRFs), with a focus on quantitative comparisons against traditional photogrammetry. The aim is, therefore, to objectively evaluate the strengths and weaknesses of NeRFs and provide insights into their applicability to different real-life scenarios, from small objects to heritage and industrial scenes. After a comprehensive overview of photogrammetry and NeRF methods, highlighting their respective advantages and disadvantages, various NeRF methods are compared using diverse objects with varying sizes and surface characteristics, including texture-less, metallic, translucent, and transparent surfaces. We evaluated the quality of the resulting 3D reconstructions using multiple criteria, such as noise level, geometric accuracy, and the number of required images.
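The abstract does not specify how those criteria are computed, but geometric accuracy and noise level are commonly derived from cloud-to-cloud distances between a reconstruction and a reference scan. A minimal sketch under that assumption (the function name and the KD-tree nearest-neighbor approach are illustrative, not the authors' actual pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(reconstructed, reference):
    """Nearest-neighbor distances from each reconstructed point to the reference.

    Mean distance ~ geometric accuracy; standard deviation ~ noise level.
    """
    dists, _ = cKDTree(reference).query(reconstructed)
    return dists.mean(), dists.std()

# Toy data: a noisy reconstruction of a unit-sphere reference surface.
rng = np.random.default_rng(1)
ref = rng.normal(size=(5000, 3))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
rec = ref + rng.normal(scale=0.01, size=ref.shape)
print(cloud_to_cloud_error(rec, ref))
```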

Particles called neutrons are typically very content inside atoms. They stick around for billions of years and longer inside some of the atoms that make up matter in our universe. But when neutrons are free and floating alone outside of an atom, they start to decay into protons and other particles. Their lifetime is short, lasting only about 15 minutes.

Physicists have spent decades trying to measure the precise lifetime of a neutron using two techniques, one involving bottles and the other beams. But the results from the two methods have not matched: they differ by about 9 seconds, which is significant for a particle that only lives about 15 minutes.

Now, in a new study published in the journal Physical Review Letters, a team of scientists has made the most precise measurement yet of a neutron’s lifetime using the bottle technique. The experiment, known as UCNtau (for Ultra Cold Neutrons tau, where tau refers to the neutron lifetime), found that the neutron lives for 14.629 minutes, with an uncertainty of 0.005 minutes. This is a factor of two more precise than previous measurements made with either method. While the results do not solve the mystery of why the bottle and beam methods disagree, they bring scientists closer to an answer.
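For a sense of scale, converting the quoted numbers to seconds (plain arithmetic on the figures above) shows how the roughly 9-second bottle-beam gap compares with the new result's uncertainty:

```python
lifetime_min, sigma_min = 14.629, 0.005   # UCNtau bottle result, in minutes
lifetime_s = lifetime_min * 60            # ~877.7 s
sigma_s = sigma_min * 60                  # ~0.3 s
gap_s = 9.0                               # approximate bottle-beam discrepancy
print(f"{lifetime_s:.1f} +/- {sigma_s:.1f} s; "
      f"gap is ~{gap_s / sigma_s:.0f}x this measurement's uncertainty")
```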