
Researchers at Duke University have demonstrated the first attack strategy that can fool industry-standard autonomous vehicle sensors into believing nearby objects are closer (or farther) than they actually are, without the manipulation being detected.

The research suggests that adding optical 3D capabilities or the ability to share data with nearby cars may be necessary to fully protect against such attacks.

The results will be presented Aug. 10–12 at the 2022 USENIX Security Symposium, a top venue in the field.

Circa 2021

Seoul National University Hospital has completed a liver transplant performed with a robot and a laparoscope, leaving no large abdominal scars on either the donor or the recipient.

Suh Kyung-suk, a professor on the liver transplant team, noted that the new surgical procedure also reduces pulmonary complications and scarring, and shortens recovery time.

It was the world’s first liver transplant in which a robot and a laparoscope made it possible to operate without opening the donor’s abdomen.

Over the past decade or so, many researchers worldwide have been trying to develop brain-inspired computer systems, also known as neuromorphic computing tools. The majority of these systems are currently used to run deep learning algorithms and other artificial intelligence (AI) tools.

Researchers at Sandia National Laboratories have recently conducted a study assessing the potential of neuromorphic architectures to perform a different type of computation, namely random walk computations. These are computations that involve a succession of random steps through a mathematical space. The team’s findings, published in Nature Electronics, suggest that neuromorphic architectures could be well-suited for implementing these computations and could thus reach beyond machine learning applications.
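As a point of reference, the sketch below shows a conventional (CPU-based, not neuromorphic) random walk computation of the kind the Sandia team maps onto spiking hardware: many independent walkers are simulated and an average quantity, here the mean first-hitting time of a 1-D walk, is estimated from them. The specific problem, the walker count, and the NumPy implementation are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a random walk computation (not Sandia's neuromorphic
# implementation): estimate the mean number of steps for a symmetric 1-D walker
# starting at the origin to first reach a boundary at +/- L.
import numpy as np

rng = np.random.default_rng(seed=0)

def mean_first_hitting_time(num_walkers: int = 10_000, boundary: int = 20) -> float:
    positions = np.zeros(num_walkers, dtype=np.int64)
    steps_taken = np.zeros(num_walkers, dtype=np.int64)
    active = np.ones(num_walkers, dtype=bool)

    while active.any():
        # Each still-active walker takes a +1 or -1 step with equal probability.
        moves = rng.choice((-1, 1), size=active.sum())
        positions[active] += moves
        steps_taken[active] += 1
        # Walkers that reach the boundary are absorbed and stop moving.
        active &= np.abs(positions) < boundary

    return steps_taken.mean()

print(mean_first_hitting_time())  # ≈ boundary**2 = 400 in expectation
```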

“Most past studies related to neuromorphic computing focused on cognitive applications, such as machine learning,” James Bradley Aimone, one of the researchers who carried out the study, told TechXplore. “While we are also excited about that direction, we wanted to ask a different and complementary question: can neuromorphic computing excel at complex math tasks that our brains cannot really tackle?”

A Microsoft Research team has introduced a “simple yet effective” method that dramatically improves stability in transformer models with just a few lines of code change.

Large-scale transformers have achieved state-of-the-art performance on a wide range of natural language processing (NLP) tasks, and in recent years have also demonstrated their impressive few-shot and zero-shot learning capabilities, making them a popular architectural choice for machine learning researchers. However, despite soaring parameter counts that now reach billions and even trillions, the layer depth of transformers remains restricted by problems with training instability.

In their new paper DeepNet: Scaling Transformers to 1,000 Layers, the Microsoft team proposes DeepNorm, a novel normalization function that improves the stability of transformers to enable scaling that is an order of magnitude deeper (more than 1,000 layers) than previous deep transformers.
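As a rough illustration, here is a minimal PyTorch-style sketch of the DeepNorm residual update, x_{l+1} = LN(α·x_l + G(x_l)). The generic sublayer, the encoder-only constants, and the blanket rescaling of every linear weight are simplifying assumptions; the paper rescales only specific projections and uses different constants for other architectures.

```python
# A minimal sketch of a DeepNorm-style residual block (illustrative, not the
# authors' released code).
import torch
import torch.nn as nn


class DeepNormResidual(nn.Module):
    def __init__(self, d_model: int, num_layers: int, sublayer: nn.Module):
        super().__init__()
        # Constants follow the paper's encoder-only setting (assumed here);
        # other encoder/decoder configurations use different formulas.
        self.alpha = (2 * num_layers) ** 0.25      # up-weights the skip path
        beta = (8 * num_layers) ** -0.25           # down-scales selected weight inits
        self.sublayer = sublayer                   # attention or feed-forward block
        self.norm = nn.LayerNorm(d_model)
        # Simplification: rescale every Linear weight in the sublayer by beta.
        # The paper applies beta only to feed-forward and value/output projections.
        with torch.no_grad():
            for module in sublayer.modules():
                if isinstance(module, nn.Linear):
                    module.weight.mul_(beta)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Post-LayerNorm residual: x_{l+1} = LN(alpha * x_l + G(x_l))
        return self.norm(self.alpha * x + self.sublayer(x))


# Example: a 1,000-layer stack of feed-forward blocks wired with DeepNorm residuals.
d_model, num_layers = 64, 1000
blocks = nn.Sequential(*[
    DeepNormResidual(d_model, num_layers, nn.Sequential(
        nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)))
    for _ in range(num_layers)
])
print(blocks(torch.randn(2, 8, d_model)).shape)  # (batch, sequence, d_model)
```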

Researchers at Johns Hopkins University have developed a new shock-absorbing material that is super lightweight, yet offers the protection of metal. The stuff could make for helmets, armor and vehicle parts that are lighter, stronger and, importantly, reusable.

The key to the new material is what are known as liquid crystal elastomers (LCEs): networks of elastic polymers in a liquid crystalline phase, a combination that gives them both elasticity and stability. LCEs are normally used to make actuators and artificial muscles for robotics, but for the new study the researchers investigated the material’s ability to absorb energy.

The team created materials consisting of tilted LCE beams sandwiched between stiff supporting structures. This basic unit was repeated across the material in multiple layers, so that the beams would buckle at different rates on impact and dissipate the energy effectively.

By applying machine learning techniques to its rule-based security code scanning, GitHub hopes to extend coverage to less common vulnerability patterns by automatically inferring new rules from the existing ones.

GitHub Code Scanning uses carefully defined CodeQL analysis rules to identify potential security vulnerabilities lurking in source code.
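For a sense of what such rules look for, the snippet below shows a classic flagged pattern, untrusted input formatted into a SQL statement, alongside the parameterized fix. It is an illustrative example of the kind of vulnerability these rules target, not code taken from GitHub’s CodeQL query suite.

```python
# Illustrative only: a pattern that SQL injection rules typically flag, and the fix.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged pattern: untrusted input concatenated into a SQL statement.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fix: a parameterized query lets the driver handle escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```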

In the ongoing effort to scale AI systems without incurring prohibitively high training and compute costs, sparse mixture-of-experts (MoE) models have shown their potential for achieving impressive neural network pretraining speedups by dynamically selecting only the parameters relevant to each input. This enables such networks to vastly expand their parameter counts while keeping their FLOPs per token (compute) roughly constant. Advancing MoE models to state-of-the-art performance has, however, been hindered by training instabilities and uncertain quality during fine-tuning.
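To make the routing idea concrete, here is a minimal NumPy sketch of top-1 expert routing: a learned router scores each token, and only the chosen expert’s parameters are applied to it, so compute per token stays roughly flat as experts are added. The shapes, top-1 gating, and the absence of capacity limits or auxiliary load-balancing losses are simplifying assumptions, not the configuration used in the Google paper.

```python
# Minimal sketch of sparse mixture-of-experts routing with top-1 gating.
import numpy as np

rng = np.random.default_rng(0)

d_model, num_experts, num_tokens = 16, 4, 8
router_weights = rng.normal(size=(d_model, num_experts))
expert_weights = rng.normal(size=(num_experts, d_model, d_model))  # one FFN per expert
tokens = rng.normal(size=(num_tokens, d_model))

# Router: softmax over expert logits, then each token picks its top expert.
logits = tokens @ router_weights
logits -= logits.max(axis=-1, keepdims=True)            # numerically stable softmax
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
chosen = probs.argmax(axis=-1)                           # top-1 expert per token

outputs = np.empty_like(tokens)
for e in range(num_experts):
    mask = chosen == e
    # Only the chosen expert's parameters touch each token, so FLOPs per token
    # stay roughly constant no matter how many experts the model has.
    outputs[mask] = (tokens[mask] @ expert_weights[e]) * probs[mask, e:e + 1]

print(outputs.shape)  # (num_tokens, d_model)
```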

To address these issues, a research team from Google AI and Google Brain has published a set of guidelines for designing more practical and reliable sparse expert models. The team tested its recommendations by pretraining a 269B-parameter sparse model, which it says is the first sparse model to achieve state-of-the-art results on natural language processing (NLP) benchmarks.

The team summarizes its main contributions in the paper.

Artificial intelligence has made tremendous progress these past few years, but what do the biggest AI researchers expect it to look like in the year 2030? They’ve made some amazing futurism predictions about the human abilities and efficiencies these AIs will likely have. Human-level AI will likely be a thing, and other technology predictions, like the metaverse, will likely turn out to be true as well.

TIMESTAMPS:
00:00 The Future of Artificial Intelligence.
00:53 Artificial Intelligence Predictions for 2030
02:15 The Metaverse in 2030
04:00 Other Technology Predictions for 2030
06:41 Last Words.

#future #ai #2030