With the spread of the omicron variant, not everyone can or is eager to travel for the winter break. But what if virtual touch could bring you assurance that you were not alone?

At the USC Viterbi School of Engineering, computer scientist and roboticist Heather Culbertson has been exploring various methods to simulate touch. In a new study, Culbertson, a senior author, along with researchers at Stanford, her alma mater, wanted to see whether two companions (platonic or romantic) could communicate and express care and emotion remotely. People perceive a partner's true intentions through in-person touch an estimated 57 percent of the time. When interacting with a device that simulated human touch, respondents were able to discern the touch's intention 45 percent of the time. The devices in this study thus conveyed intent with roughly 79 percent of the accuracy of in-person human touch (45/57 ≈ 0.79).
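The 79 percent figure is simply the device's recognition rate expressed relative to the in-person baseline, as this quick check shows:

```python
# Recognition rates reported in the study.
in_person_rate = 0.57   # intent correctly perceived via in-person touch
device_rate = 0.45      # intent correctly perceived via the haptic device

# The device's accuracy relative to the in-person baseline.
relative_accuracy = device_rate / in_person_rate
print(f"{relative_accuracy:.0%}")  # prints "79%"
```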

Our sense of touch is unique. In fact, people have a "touch language," says Culbertson, the WiSE Gabilan Assistant Professor and Assistant Professor of Computer Science and Aerospace and Mechanical Engineering at USC. Thus, she says, creating virtual touch that people can direct towards their loved ones is quite complex: not only do we differ in our comfort with and levels of "touchiness," but we may also have distinct ways of communicating different emotions such as sympathy, love, or sadness. The challenge for the researchers was to create an algorithm flexible enough to incorporate the many dimensions of touch.

Lightelligence, a Boston-based photonics company, revealed the world's first small form-factor, photonics-based computing device, meaning it uses light to perform compute operations. The company claims the unit is "hundreds of times faster than a typical computing unit, such as NVIDIA RTX 3080." More precisely, 350 times faster, though only for certain types of applications.


However, the PACE achieves that coveted specialization through a different computing medium, which makes the system not only faster but also far more efficient. Traditional semiconductor systems shed excess heat from running current through nanometre-scale features at sometimes ludicrous frequencies, but the photonic system processes its workloads with zero Ohmic heating: with no current resistance, no resistive heat is produced. Instead, it's all about light.

Lightelligence is built around its CEO's Ph.D. thesis and the legitimacy it provides. When "Deep Learning with Coherent Nanophotonic Circuits" was published in Nature in 2017, Lightelligence's CEO and founder Yichen Shen had already foreseen a path for optical circuits to be at the forefront of machine-learning computing. By 2020, the company had received $100 million in funding and employed around 150 people. A year later, Lightelligence has achieved a demo product that it says is "hundreds of times faster than a typical computing unit, such as NVIDIA RTX 3080" (350 times faster, to be clear).

The PACE's debut aims to attract enough capital for the company to comfortably reach its goal of launching a pilot AI accelerator product in 2022. Even that is only a step in the company's larger vision: its goal is to develop and distribute a mass-market, photonics-based hardware solution as early as 2023, targeting the cloud AI, finance, and retail markets. Considering that Lightelligence improved on its 2019 COMET design's performance by a factor of a million with PACE in the span of two years, it will be interesting to see where its efforts lead when it comes to launching.

WASHINGTON – The National Geospatial-Intelligence Agency has selected a team of commercial and academic partners to build an artificial intelligence system with synthetic data, which will further help the agency determine how it builds machine learning algorithms moving forward.

Orbital Insight was issued a Phase II Small Business Innovation Research contract by the NGA, the company announced Dec. 16. It will collaborate with Rendered.ai and the University of California, Berkeley, to develop a computer vision model.

As the organization charged with analyzing satellite imagery for the intelligence community, NGA has put increased emphasis on using AI for its mission. The agency sees human-machine pairing as critical for its success, with machine learning algorithms taking over the rote task of processing the torrent of satellite data to find potential intelligence and freeing up human operators to do more high level analysis and tasks.

For all that neural networks can accomplish, we still don’t really understand how they operate. Sure, we can program them to learn, but making sense of a machine’s decision-making process remains much like a fancy puzzle with a dizzying, complex pattern where plenty of integral pieces have yet to be fitted.

If a model was trying to classify an image of said puzzle, for example, it could encounter well-known, but annoying adversarial attacks, or even more run-of-the-mill data or processing issues. But a new, more subtle type of failure recently identified by MIT scientists is another cause for concern: “overinterpretation,” where algorithms make confident predictions based on details that don’t make sense to humans, like random patterns or image borders.

This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars, and medical diagnostics for diseases that need more immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand surroundings and then make quick, safe decisions. In the MIT experiments, networks classified traffic lights and street signs from specific backgrounds, edges, or particular patterns of the sky, irrespective of what else was in the image.
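The overinterpretation failure can be probed by masking an image down to a seemingly meaningless subset of pixels and checking whether the model's prediction survives. The toy below (all names hypothetical, not the MIT group's code) hard-codes a "model" that secretly depends only on the border pixels, to show why such a probe is revealing:

```python
import numpy as np

def toy_model(image):
    # A deliberately pathological classifier: it decides solely from the
    # mean intensity of the 1-pixel border, ignoring the interior.
    border = np.concatenate([image[0, :], image[-1, :],
                             image[1:-1, 0], image[1:-1, -1]])
    return "traffic_light" if border.mean() > 0.5 else "street_sign"

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Zero out everything except the border. If the prediction is unchanged,
# the model never used the semantically meaningful interior at all.
masked = img.copy()
masked[1:-1, 1:-1] = 0.0

print(toy_model(img) == toy_model(masked))  # prints "True"
```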

Researchers at Kobe University and Osaka University have successfully developed artificial intelligence technology that can extract hidden equations of motion from regular observational data and create a model that is faithful to the laws of physics.

This technology could enable researchers to discover hidden equations of motion behind phenomena whose governing laws were previously considered unexplainable. For example, it may become possible to examine ecosystem sustainability using physics-based knowledge and simulations.
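A minimal sketch of the general idea (not the authors' method, which builds in geometric structure to respect physical laws): given only an observed trajectory, estimate accelerations numerically and fit the hidden coefficient of a candidate equation of motion, here a simple harmonic oscillator x'' = -k·x:

```python
import numpy as np

# Generate "observational data": the exact trajectory of x'' = -k*x
# with x(0) = 1, v(0) = 0, whose solution is cos(sqrt(k)*t).
k_true = 4.0
dt = 0.001
t = np.arange(0.0, 10.0, dt)
x = np.cos(np.sqrt(k_true) * t)

# Estimate acceleration from the data alone via central differences.
acc = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt**2

# Least-squares fit of acc ~ -k * x recovers the hidden coefficient.
k_fit = -np.linalg.lstsq(x[1:-1, None], acc, rcond=None)[0][0]
print(round(k_fit, 2))  # prints "4.0"
```

The fit recovers k ≈ 4.0 from the samples alone; the research advance lies in doing this for much richer systems while guaranteeing the learned model obeys the underlying physics.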

The research group consisted of Associate Professor YAGUCHI Takaharu and Ph.D. student CHEN Yuhan (Graduate School of System Informatics, Kobe University), and Associate Professor MATSUBARA Takashi (Graduate School of Engineering Science, Osaka University).

A black hole laser in analogues of gravity amplifies Hawking radiation, which is unlikely to be measured in real black holes, and makes it observable. There have been proposals to realize such black hole lasers in various systems. However, no progress has been made in electric circuits for a long time, despite their many advantages such as high-precision electromagnetic wave detection. Here we propose a black hole laser in Josephson transmission lines incorporating metamaterial elements capable of producing Hawking-pair propagation modes and a Kerr nonlinearity due to the Josephson nonlinear inductance. A single dark soliton obeying the nonlinear Schrödinger equation produces a black hole-white hole horizon pair that acts as a laser cavity through a change in the refractive index due to the Kerr effect.
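For reference, the dark soliton invoked here is a standard solution of the defocusing nonlinear Schrödinger equation; the circuit realization adds Josephson-specific coefficients, but in normalized form the equation and its stationary ("black") soliton on a background density \(\rho_0\) read:

```latex
i\,\partial_t \psi + \tfrac{1}{2}\,\partial_x^2 \psi - |\psi|^2 \psi = 0,
\qquad
\psi(x,t) = \sqrt{\rho_0}\,\tanh\!\left(\sqrt{\rho_0}\,x\right) e^{-i\rho_0 t}.
```

The localized dip in \(|\psi|^2\) changes the effective refractive index via the Kerr nonlinearity, which is what creates the horizon pair acting as the laser cavity.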

Physicists from Trinity have unlocked the secret that explains how large groups of individual “oscillators”—from flashing fireflies to cheering crowds, and from ticking clocks to clicking metronomes—tend to synchronize when in each other’s company.

Their work, just published in the journal Physical Review Research, provides a mathematical basis for a phenomenon that has perplexed millions—their newly developed equations help explain how individual randomness seen in the natural world and in electrical and computer systems can give rise to synchronization.

We have long known that when one clock runs slightly faster than another, physically connecting them can make them tick in time. But making a large assembly of clocks synchronize in this way was thought to be much more difficult—or even impossible, if there are too many of them.

Lightning is one of the most destructive forces of nature, as in 2020 when it sparked the massive California Lightning Complex fires, but it remains hard to predict. A new study led by the University of Washington shows that machine learning—computer algorithms that improve themselves without direct programming by humans—can be used to improve lightning forecasts.

Better lightning forecasts could help to prepare for potential wildfires, improve safety warnings for lightning and create more accurate long-range climate models.

“The best subjects for machine learning are things that we don’t fully understand. And what is something in the atmospheric sciences field that remains poorly understood? Lightning,” said Daehyun Kim, a UW associate professor of atmospheric sciences. “To our knowledge, our work is the first to demonstrate that machine learning algorithms can work for lightning.”

WASHINGTON, D.C. — Today, the U.S. Department of Energy (DOE) announced $5.7 million for six projects that will implement artificial intelligence methods to accelerate scientific discovery in nuclear physics research. The projects aim to optimize the overall performance of complex accelerator and detector systems for nuclear physics using advanced computational methods.

“Artificial intelligence has the potential to shorten the timeline for experimental discovery in nuclear physics,” said Timothy Hallman, DOE Associate Director of Science for Nuclear Physics. “Particle accelerator facilities and nuclear physics instrumentation face a variety of technical challenges in simulations, control, data acquisition, and analysis that artificial intelligence holds promise to address.”

The six projects will be conducted by nuclear physics researchers at five DOE national laboratories and four universities. Projects will include the development of deep learning algorithms to identify a unique signal for a conjectured, very slow nuclear process known as neutrinoless double beta decay. This decay, if observed, would be at least ten thousand times more rare than the rarest known nuclear decay and could demonstrate how our universe became dominated by matter rather than antimatter. Supported efforts also include AI-driven detector design for the Electron-Ion Collider accelerator project under construction at Brookhaven National Laboratory that will probe the internal structure and forces of protons and neutrons that compose the atomic nucleus.