
In recent years, engineers and computer scientists have created a wide range of technological tools that can enhance fitness training experiences, including smart watches, fitness trackers, sweat-resistant earphones or headphones, smart home gym equipment and smartphone applications. New state-of-the-art computational models, particularly deep learning algorithms, have the potential to improve these tools further, so that they can better meet the needs of individual users.

Researchers at the University of Brescia in Italy have recently developed a computer vision system for a smart mirror that could improve the effectiveness of fitness training in both home and gym environments. This system, introduced in a paper published by the International Society of Biomechanics in Sports, is based on a deep learning algorithm trained to recognize human gestures in video recordings.

“Our commercial partner ABHorizon invented the concept of a product that can guide and teach you during your personal fitness training,” Bernardo Lanza, one of the researchers who carried out the study, told TechXplore. “This device can show you the best way to train based on your specific needs. To develop this device further, they asked us to investigate the viability of an integrated vision system for exercise evaluation.”
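The paper does not include code, but the core idea of evaluating exercise from video can be sketched once a pose-estimation model has extracted joint keypoints per frame. The sketch below is illustrative only; the angle thresholds and the rep-counting rule are assumptions, not the researchers' method. It computes an elbow angle from three keypoints and counts repetitions from the resulting angle trace:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c,
    e.g. shoulder-elbow-wrist keypoints from a pose estimator."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def count_reps(angles, low=70.0, high=150.0):
    """Count repetitions: a rep is a dip below `low` (arm flexed)
    followed by a rise above `high` (arm extended)."""
    reps, flexed = 0, False
    for ang in angles:
        if ang < low:
            flexed = True
        elif ang > high and flexed:
            reps += 1
            flexed = False
    return reps
```

In a real pipeline the angle trace would come from per-frame keypoints rather than a hand-written list, and the thresholds would be tuned per exercise.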

How can mobile robots perceive and understand the environment correctly, even if parts of the environment are occluded by other objects? This is a key question that must be solved for self-driving vehicles to safely navigate in large crowded cities. While humans can imagine complete physical structures of objects even when they are partially occluded, existing artificial intelligence (AI) algorithms that enable robots and self-driving vehicles to perceive their environment do not have this capability.

Robots with AI can already find their way around and navigate on their own once they have learned what their environment looks like. However, perceiving the entire structure of objects when they are partially hidden, such as people in crowds or vehicles in traffic jams, has been a significant challenge. Freiburg robotics researchers Prof. Dr. Abhinav Valada and Ph.D. student Rohit Mohan of the Robot Learning Lab at the University of Freiburg have now taken a major step towards solving this problem, which they present in two joint publications.

The two Freiburg scientists have developed the amodal panoptic segmentation task and demonstrated its feasibility using novel AI approaches. Until now, self-driving vehicles have used panoptic segmentation to understand their surroundings.
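In a nutshell, panoptic segmentation labels only the pixels that are actually visible, while the amodal variant additionally predicts each object's full extent, including the parts hidden behind occluders. A toy one-dimensional illustration with ground-truth masks (no learning involved; the pixel ranges are invented for illustration):

```python
# A toy 1-D "scene": pixel indices along a single scanline.
car_amodal = set(range(2, 8))    # full (amodal) extent of a car
pedestrian = set(range(5, 9))    # an occluder standing in front of it

# Ordinary panoptic segmentation labels only the visible car pixels.
car_visible_px = sorted(car_amodal - pedestrian)

# Amodal panoptic segmentation also predicts the occluded part.
car_occluded_px = sorted(car_amodal & pedestrian)
```

A trained amodal model must predict `car_occluded_px` without ever seeing those pixels, which is what makes the task hard.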

A team of researchers from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and Stony Brook University has devised a new quantum algorithm to compute the lowest energies of molecules at specific configurations during chemical reactions, including when their chemical bonds are broken. As described in Physical Review Research, compared to similar existing algorithms, including the team’s previous method, the new algorithm will significantly improve scientists’ ability to accurately and reliably calculate the potential energy surface in reacting molecules.

For this work, Deyu Lu, a Center for Functional Nanomaterials (CFN) physicist at Brookhaven Lab, worked with Tzu-Chieh Wei, an associate professor at the C.N. Yang Institute for Theoretical Physics at Stony Brook University; Qin Wu, a theorist at CFN; and Hongye Yu, a Ph.D. student at Stony Brook.

“Understanding the quantum mechanics of a molecule, how it behaves at an atomic level, can provide key insight into its chemical properties, like its stability and reactivity,” said Lu.
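The team's quantum algorithm itself is beyond a short snippet, but the quantity it targets, the lowest energy of a molecule at a given configuration, can be illustrated classically by diagonalizing a toy two-state Hamiltonian. Everything below (the two diagonal energies, the coupling, and their dependence on a bond length `r`) is a made-up model for illustration, not the paper's method:

```python
import math

def ground_state_energy(e1, e2, coupling):
    """Lowest eigenvalue of the 2x2 Hamiltonian [[e1, c], [c, e2]],
    via the closed-form expression for a two-level system."""
    avg = (e1 + e2) / 2.0
    gap = (e1 - e2) / 2.0
    return avg - math.sqrt(gap * gap + coupling * coupling)

def toy_pes(r):
    """One point of an invented potential energy surface at bond length r."""
    e1 = (r - 1.0) ** 2           # energy of a "bonding" configuration
    e2 = 0.5 + (r - 1.5) ** 2     # energy of an "excited" configuration
    c = 0.2 * math.exp(-r)        # coupling between the two states
    return ground_state_energy(e1, e2, c)

# Scan the toy surface over a few bond lengths (0.5 to 2.5).
surface = [(r / 10.0, toy_pes(r / 10.0)) for r in range(5, 26, 5)]
```

Real molecules need exponentially larger Hamiltonians as they grow, which is exactly where quantum algorithms promise an advantage over this kind of classical diagonalization.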

Or so goes the theory. Most CIM chips running AI algorithms have solely focused on chip design, showcasing their capabilities using simulations of the chip rather than running tasks on full-fledged hardware. The chips also struggle to adjust to multiple different AI tasks—image recognition, voice perception—limiting their integration into smartphones or other everyday devices.

This month, a study in Nature upgraded CIM from the ground up. Rather than focusing solely on the chip’s design, the international team—led by neuromorphic hardware experts Dr. H.S. Philip Wong at Stanford and Dr. Gert Cauwenberghs at UC San Diego—optimized the entire setup, from technology to architecture to algorithms that calibrate the hardware.

The resulting NeuRRAM chip is a powerful neuromorphic computing behemoth with 48 parallel cores and 3 million memory cells. Extremely versatile, the chip tackled multiple AI standard tasks—such as reading hand-written numbers, identifying cars and other objects in images, and decoding voice recordings—with over 84 percent accuracy.
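At the heart of compute-in-memory is performing a neural network's multiply-accumulate operations inside the memory array itself: cell conductances act as weights, input voltages drive the rows, and currents sum along each column. A minimal software sketch of that idea, with weights quantized to a few discrete levels to mimic multi-level RRAM cells (the level count and clipping range are illustrative assumptions, not NeuRRAM's specifications):

```python
def quantize(w, levels=4, w_max=1.0):
    """Map a weight to one of a few discrete conductance levels,
    mimicking a multi-level RRAM cell, clipped to [-w_max, w_max]."""
    step = 2 * w_max / (levels - 1)
    q = round((w + w_max) / step) * step - w_max
    return max(-w_max, min(w_max, q))

def cim_matvec(weights, inputs):
    """In a crossbar, each output line sums the current contributions
    input_voltage * cell_conductance; in software, that is an ordinary
    dot product over the quantized weights."""
    return [sum(quantize(w) * x for w, x in zip(row, inputs))
            for row in weights]
```

The quantization step is why hardware-aware training matters: a network trained with full-precision weights can lose accuracy once its weights are snapped to the few conductance levels a cell supports.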

This places Drake in the company of towering physicists with equations named after them, including James Clerk Maxwell and Erwin Schrödinger. Unlike those, Drake’s equation does not encapsulate a law of nature. Instead, it combines some poorly known probabilities into an informed estimate.

Whatever reasonable values you feed into the equation, it is hard to avoid the conclusion that we shouldn’t be alone in the galaxy. Drake remained a proponent of the search for extraterrestrial life throughout his life, but has his equation taught us anything?

Drake’s equation may look complicated, but its principles are rather simple. It states that, in a galaxy as old as ours, the number of civilizations detectable by virtue of broadcasting their presence equals the rate at which such civilizations arise multiplied by their average lifetime.
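Written out, the equation multiplies the rate of star formation R* by a chain of fractions and a lifetime: N = R* · fp · ne · fl · fi · fc · L. A minimal sketch, with example values chosen purely for illustration (they are debated, not Drake's own):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L,
    the expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs: one star formed per year, half with planets,
# one habitable planet each, life arising on all of them, 1% reaching
# intelligence, 1% of those broadcasting, each for 10,000 years.
N = drake(R_star=1, f_p=0.5, n_e=1, f_l=1, f_i=0.01, f_c=0.01, L=10_000)
```

Because several factors are pure guesses, changing any one of them by an order of magnitude swings N by the same amount, which is the equation's real lesson.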

What if humans were gods instead? Join us… and find out more!


In this video, Unveiled takes a closer look at one of the ultimate what if scenarios — what if humans became GODS? According to some predictions, science and technology will one day lead us to godlike power… so what will we do with that responsibility? Will we use it for good or for bad?

This is Unveiled, giving you incredible answers to extraordinary questions!



Choosing an interesting dissertation topic in ML is one of the first decisions Master’s and doctoral scholars face today. Ph.D. candidates are highly motivated to choose research topics that open new and creative paths toward discovery in their field of study. Selecting and working on a dissertation topic in machine learning is not easy, as machine learning uses statistical algorithms to make computers behave in certain ways without being explicitly programmed. The main aim of machine learning is to create intelligent machines that can think and work like human beings. This article features the top 10 ML dissertation topics for Ph.D. students to try in 2022.

Text Mining and Text Classification: Text mining is an AI technology that uses NLP to transform the free text in documents and databases into normalized, structured data suitable for analysis or to drive ML algorithms. This is one of the best research and thesis topics for ML projects.
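As a concrete starting point for this topic, the classification half can be sketched as a multinomial Naive Bayes classifier over a bag-of-words representation, written here in plain Python (the toy training documents and labels are invented for illustration):

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts,
    label counts, and the vocabulary for a Naive Bayes classifier."""
    words, labels, vocab = {}, Counter(), set()
    for text, label in docs:
        labels[label] += 1
        counts = words.setdefault(label, Counter())
        for w in text.lower().split():
            counts[w] += 1
            vocab.add(w)
    return words, labels, vocab

def classify(text, words, labels, vocab):
    """Pick the label maximizing log P(label) + sum of log P(word|label),
    with add-one (Laplace) smoothing for unseen words."""
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for label, n in labels.items():
        lp = math.log(n / total)
        denom = sum(words[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((words[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy corpus for illustration.
docs = [("great fun happy", "pos"), ("happy great", "pos"),
        ("awful bad sad", "neg"), ("bad terrible", "neg")]
model = train_nb(docs)
```

A dissertation-scale version would swap the toy corpus for a real dataset and the bag-of-words counts for learned embeddings, but the probabilistic skeleton stays the same.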

Recognition of Everyday Activities through Wearable Sensors and Machine Learning: The goal of the research detailed in this dissertation is to explore and develop accurate and quantifiable sensing and machine learning techniques for eventual real-time health monitoring by wearable device systems.
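A common baseline in this line of work is to slide a window over the raw sensor stream, extract simple statistics per window, and classify those features. A minimal sketch (the window size and the variance threshold are illustrative assumptions, not taken from the dissertation):

```python
import math

def window_features(samples, size=5):
    """Slide a non-overlapping window over accelerometer magnitudes
    and extract (mean, standard deviation) per window."""
    feats = []
    for i in range(0, len(samples) - size + 1, size):
        win = samples[i:i + size]
        mean = sum(win) / size
        var = sum((x - mean) ** 2 for x in win) / size
        feats.append((mean, math.sqrt(var)))
    return feats

def label_window(feat, still_std=0.2):
    """Toy rule: low variance suggests resting, high variance activity."""
    return "rest" if feat[1] < still_std else "active"
```

Real systems replace the threshold rule with a trained classifier and add frequency-domain features, but the windowing pipeline is the common backbone.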

“If we get a similar hit rate in detecting texture in tumors, the potential for early diagnosis is huge,” says one scientist.

According to a report published by MIT Technology Review on Friday, researchers at University College London have shown that a new X-ray method, paired with a deep-learning artificial intelligence (AI) algorithm originally developed to detect explosives in luggage, could spot potentially fatal early-stage tumors in humans.

“Neuromorphic computing could offer a compelling alternative to traditional AI accelerators by significantly improving power and data efficiency for more complex AI use cases, spanning data centers to extreme edge applications.”



Can computer systems develop to the point where they can think creatively, identify people or items they have never seen before, and adjust accordingly — all while working more efficiently, with less power? Intel Labs is betting on it, with a new hardware and software approach using neuromorphic computing, which, according to a recent blog post, “uses new algorithmic approaches that emulate how the human brain interacts with the world to deliver capabilities closer to human cognition.”
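A building block of many neuromorphic systems is the spiking neuron: unlike a conventional artificial neuron, it communicates through discrete events and consumes energy only when it fires. A minimal leaky integrate-and-fire sketch (the leak and threshold values are arbitrary illustrations, not Intel's design):

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    by `leak` each step, accumulates input, and emits a spike (then
    resets) when it crosses `threshold`. Output is a binary spike train."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes
```

Because computation happens only at spike times, sparse inputs translate directly into sparse activity, which is where the claimed power savings come from.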

While this may sound futuristic, Intel’s neuromorphic computing research is already fostering interesting use cases, including adding new voice interaction commands to Mercedes-Benz vehicles, creating a robotic hand that delivers medications to patients, and developing chips that recognize hazardous chemicals.