
In a new report published in Science, primary authors Zhen Zhu, Michal Papaj, and an international research team in physics, materials science, and condensed matter at Jiao Tong University in China, the Massachusetts Institute of Technology (MIT) in the U.S., and the Chinese Academy of Sciences discovered the Fermi surface of supercurrent-induced quasiparticles in a superconducting system for the first time. The discovery comes 50 years after the initial theoretical prediction by physicist Peter Fulde and reveals the impact of finite Cooper pair momentum on the quasiparticle spectrum. In condensed matter physics, Cooper pairs are pairs of electrons with opposite spins, loosely bound through electron-lattice interactions; superconductivity rests on their condensation into a bosonic state at low temperatures. The interplay of superconductivity and magnetic fields leads to the phenomenon of a ‘segmented Fermi surface.’ A leading author of the work, MIT Professor of Physics Liang Fu, outlined the significance of the discovery.

Supercurrent flow in a superconductor

Physicists have long predicted that a sufficiently large supercurrent can close the energy gap in a superconductor and create gapless quasiparticles via a Doppler shift of the quasiparticle energy: when a supercurrent flows, the Cooper pairs acquire a finite momentum, which shifts the energies of quasiparticles moving with or against the flow. In this work, Zhu et al. used quasiparticle interference to image the field-controlled Fermi surface of bismuth telluride (Bi2Te3) thin films proximitized by the superconductor niobium diselenide (NbSe2). A small applied in-plane magnetic field induced a screening supercurrent, which led to finite-momentum pairing in the topological surface states of Bi2Te3. The scientists identified distinct interference patterns indicating a gapless superconducting state with a segmented Fermi surface, revealing the strong impact of the finite Cooper pair momentum on the quasiparticle spectrum.
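As a rough guide to the physics, using a standard textbook form rather than anything quoted from the paper: if the Cooper pairs carry momentum \(2\hbar\mathbf{q}\), the condensate moves with velocity \(\mathbf{v}_s = \hbar\mathbf{q}/m\) and the BCS quasiparticle energies are Doppler shifted,

\[ E_{\pm}(\mathbf{k}) \approx \hbar\,\mathbf{k}\cdot\mathbf{v}_s \pm \sqrt{\xi_{\mathbf{k}}^{2} + |\Delta|^{2}}, \]

where \(\xi_{\mathbf{k}}\) is the normal-state dispersion measured from the Fermi level and \(\Delta\) is the pairing gap. Wherever \(|\hbar\,\mathbf{k}\cdot\mathbf{v}_s| > |\Delta|\) on the Fermi surface, the lower branch crosses zero energy and gapless segments of the Fermi surface reappear, which is the segmentation imaged in the experiment.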

When highly coherent radiation, such as that emitted by a laser or a radar, is diffusely reflected from a surface with a rough structure (e.g., a piece of paper, white paint or a metallic surface), it produces a random granular effect known as the ‘speckle’ pattern. This effect results in strong fluctuations that can reduce the quality and interpretability of images collected by synthetic aperture radar (SAR) techniques.

SAR is an imaging method that can produce fine-resolution 2D or 3D images using a resolution-limited radar system. It is often employed to collect images of landscapes or object reconstructions, which can be used to create millimeter-to-centimeter scale models of the surface of Earth or other planets.

To improve the quality and reliability of SAR data, researchers worldwide have been trying to develop techniques based on deep neural networks that could reduce the speckle effect. While some of these techniques have achieved promising results, their performance is still not optimal.
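As a hedged illustration of the general approach, and not of any specific published method, speckle is commonly modeled as multiplicative noise on the underlying reflectivity, and a small residual convolutional network can be trained to recover the clean image from a noisy, log-transformed input. All names and sizes below are hypothetical:

```python
# Minimal sketch of a residual-CNN despeckler for SAR intensity images.
# Generic illustration only; not the architecture of any particular paper.
import torch
import torch.nn as nn

class DespeckleCNN(nn.Module):
    def __init__(self, channels=48, depth=6):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, log_intensity):
        # Residual learning: predict the (log-domain) speckle component
        # and subtract it from the noisy input.
        return log_intensity - self.body(log_intensity)

# Speckle is usually modeled as multiplicative: I = R * n, with n ~ Gamma(L, L)
# for an L-look image, so training pairs can be simulated from clean images.
clean = torch.rand(1, 1, 64, 64) + 0.1                         # stand-in reflectivity R
looks = 4
speckle = torch.distributions.Gamma(float(looks), float(looks)).sample(clean.shape)
noisy = clean * speckle                                         # multiplicative speckle
estimate = DespeckleCNN()(torch.log(noisy))                     # denoise in log domain
print(estimate.shape)                                           # torch.Size([1, 1, 64, 64])
```

In practice such networks are trained on pairs of noisy and clean (or multi-look averaged) images; the log transform turns the multiplicative noise into an approximately additive one, which is what makes the residual formulation convenient.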

Wireless sensing devices, tools that allow users to sense movements and remotely monitor activities or changes in specific environments, have many applications. For instance, they could be used for surveillance purposes as well as to track the sleep or physical activities of medical patients and athletes. Some videogame developers have also used wireless sensing systems to create more engaging sports or dance-related games.

Researchers at Florida State University, Trinity University and Rutgers University have recently developed Winect, a new wireless sensing system that can track the poses of humans in 3D as they perform a wide range of free-form physical activities. The system was introduced in a paper pre-published on arXiv and is set to be presented at the ACM Conference on Interactive, Mobile, Wearable and Ubiquitous Technologies (UbiComp) 2021, one of the most renowned computer science events worldwide.

“Our research group has been conducting cutting-edge research in wireless sensing,” Jie Yang, one of the researchers who carried out the study, told TechXplore. “In the past, we have proposed several systems that use Wi-Fi signals to sense various human activities and objects, ranging from large-scale human activities to small-scale finger movements, sleep monitoring and daily objects. For example, we proposed two systems, dubbed E-eyes and WiFinger, which are among the first works to utilize Wi-Fi sensing to distinguish various types of daily activities and finger gestures.”
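For readers unfamiliar with this class of system, the sketch below shows, in deliberately generic terms, how Wi-Fi channel state information (CSI) can be regressed onto 3D joint positions. It is not Winect's actual pipeline, and every dimension and name in it is a hypothetical placeholder:

```python
# Generic illustration of Wi-Fi-CSI-based 3D pose regression.
# NOT Winect's method; model shape, sizes and names are hypothetical.
import torch
import torch.nn as nn

NUM_SUBCARRIERS = 30   # CSI subcarriers per antenna pair (assumption)
NUM_ANT_PAIRS = 9      # e.g. 3 Tx x 3 Rx antennas (assumption)
WINDOW = 100           # CSI samples per sliding window (assumption)
NUM_JOINTS = 14        # 3D body joints to estimate (assumption)

class CSIPoseRegressor(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        feat = NUM_SUBCARRIERS * NUM_ANT_PAIRS
        self.encoder = nn.GRU(input_size=feat, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_JOINTS * 3)

    def forward(self, csi_amplitude):
        # csi_amplitude: (batch, WINDOW, subcarriers * antenna_pairs)
        _, h = self.encoder(csi_amplitude)
        return self.head(h[-1]).view(-1, NUM_JOINTS, 3)   # (batch, joints, xyz)

# One window of simulated CSI amplitudes -> 3D joint estimates
model = CSIPoseRegressor()
fake_csi = torch.randn(2, WINDOW, NUM_SUBCARRIERS * NUM_ANT_PAIRS)
print(model(fake_csi).shape)   # torch.Size([2, 14, 3])
```

Real systems add substantial signal processing before any learning step, such as denoising the CSI, separating reflections from different body parts and calibrating across antenna pairs; the point here is only the overall shape of the sensing-to-pose mapping.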

Just over a year after launching its flagship product, Landing AI secured a $57 million round of Series A funding to continue building tools that enable manufacturers to more easily and quickly build and deploy artificial intelligence systems.

The company, started by former Google and Baidu AI guru Andrew Ng, developed LandingLens, a visual inspection tool that applies AI and deep learning to find product defects faster and more accurately.

Ng says industries should adopt a data-centric approach to building AI, which gives manufacturers a more efficient way to teach an AI model what to do. Combined with no-code/low-code capabilities, this lets users build advanced AI models with just a few mouse clicks in less than a day.

https://youtube.com/watch?v=1_Mcp-YjPmQ&feature=share

This video gives an overview of human neuroscience and applies it to the design of an artificial general intelligence named Eta.

Go to www.startengine.com/orbai to own shares in the future of AI.
Check out https://www.orbai.ai/about-us.htm for details on the company, tech, patents, products and more.

What we usually think of as Artificial Intelligence today, when we see human-like robots and holograms in our fiction talking and acting like real people with human-level or even superhuman intelligence and capabilities, is actually called Artificial General Intelligence (AGI), and it does NOT exist anywhere on Earth yet. What we do have is called Deep Learning, which has fundamental limitations that will not allow it to become AGI.

For an AI to pass the threshold of human intelligence and become an artificial general intelligence, it needs the ability to see, hear, and experience its environment. It needs to be able to learn that environment, to organize its memory non-locally and to store abstract concepts in a distributed architecture, so that it can model its environment and the events and people in it.

As promised, Walmart has started making fully driverless box-truck deliveries between its own locations on a fixed 7-mile loop in partnership with the startup Gatik, the companies announced. Despite those limitations, the route in Bentonville, Arkansas involves “intersections, traffic lights and merging on dense urban roads,” the companies said. It’s another shot of good news for the progress of self-driving vehicles after GM’s Cruise launched its self-driving taxis into testing last week.

The Gatik trucks are bringing grocery orders from a Walmart fulfillment center (a “dark store”) to a nearby Walmart Neighborhood Market grocery store in Bentonville, home to the company’s headquarters. The route covers the “middle mile” transportation of goods between warehouses and stores. The program effectively launched after the Arkansas State Highway Commission approved it in December 2020, and the trucks have been driverless since this summer.

Gatik, a Silicon Valley-based developer of robotic technology to handle “middle-mile” deliveries from distribution centers to stores, has begun hauling goods for Walmart in autonomous trucks without a human backup at the wheel for the first time.

Gatik, which has been making delivery runs for Walmart since 2019 in the retail giant’s hometown of Bentonville, Arkansas, is operating two fully autonomous trucks that haul goods on a fixed, 7.1-mile route between an e-commerce distribution facility there and a Walmart Neighborhood Market store. This new phase started in August and is the first time any autonomous trucking company has operated commercial delivery routes without a human backup, the companies said.


The proposed telescope would be powerful enough to detect distant planets 10 billion times fainter than their host star.

Astronomers have proposed a telescope that would far exceed the capabilities of Hubble.

The National Academies of Sciences, Engineering, and Medicine just released its Decadal Survey on Astronomy and Astrophysics, also known as Astro 2020. The report outlines plans for the next decade of investment in astronomical equipment and projects in the U.S.

One of the standout recommendations in the survey, Digital Trends reports, is for a “Great Observatory” designed to replace the ailing Hubble Space Telescope, which encountered several technical problems this year due to its decades-old hardware.
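To put the 10-billion-to-one figure in perspective (a back-of-the-envelope calculation, not a number from the report itself), a planet-to-star flux ratio of \(10^{-10}\) corresponds to a brightness difference of

\[ \Delta m = 2.5 \log_{10}\!\left(10^{10}\right) = 25 \text{ magnitudes}, \]

roughly the contrast between an Earth-like planet and a Sun-like star in reflected visible light. Reaching that level of contrast is why such a mission would need high-performance starlight-suppression hardware such as a coronagraph.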
