
Vocal biomarkers have become a buzzword during the pandemic, but what are they and how could they contribute to diagnostics?

What if a disease could be identified over a phone call?

Vocal biomarkers have amazing potential to transform diagnostics. Because certain diseases, such as those affecting the heart, lungs, vocal folds or brain, can alter a person’s voice, artificial intelligence (A.I.)-based voice analysis opens new horizons in medicine.

Vocal biomarkers could be used for diagnosis and remote monitoring, and even for COVID-19 screening. So is it possible to diagnose illnesses from the sound of your voice?
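As a rough illustration of how such an analysis usually starts, the sketch below extracts standard acoustic features (MFCCs) from a recording and scores them with a generic classifier. The file names, labels, and model here are placeholders for illustration only, not any validated biomarker pipeline.

```python
# Illustrative only: turn a voice recording into acoustic features and score
# them with a simple classifier. Files, labels, and the model are placeholders,
# not a validated diagnostic system.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def voice_features(path: str) -> np.ndarray:
    """Load audio and summarize it as mean MFCCs (a common baseline feature)."""
    signal, sample_rate = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dimensional vector per recording

# Hypothetical training data: feature vectors with screening labels (0 = healthy, 1 = flagged).
X_train = np.vstack([voice_features(p) for p in ["healthy_01.wav", "patient_01.wav"]])
y_train = np.array([0, 1])

model = LogisticRegression().fit(X_train, y_train)
print(model.predict_proba(voice_features("new_call.wav").reshape(1, -1)))
```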

At this year’s Conference on Machine Learning and Systems (MLSys), we and our colleagues presented a new auto-scheduler called DietCode, which handles dynamic-shape workloads much more efficiently than its predecessors. Where existing auto-schedulers have to optimize each possible shape individually, DietCode constructs a shape-generic search space that enables it to optimize all possible shapes simultaneously.
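The toy sketch below illustrates the general idea (it is not the actual DietCode or TVM API): instead of running a separate tuning job per input shape, a single joint pass picks, for every shape, the best of a small set of shape-generic micro-kernels, yielding a dispatcher that covers all shapes at once.

```python
# Toy illustration of shape-generic auto-scheduling (not the real DietCode/TVM API).
# Rather than tuning one schedule per shape, we evaluate a few micro-kernel tile
# sizes once and dispatch each runtime shape to the cheapest one.
from typing import Dict, List

def padded_cost(seq_len: int, tile: int) -> float:
    """Toy cost model: compute wasted on padding plus a fixed per-launch overhead."""
    num_launches = -(-seq_len // tile)           # ceiling division
    padded_work = num_launches * tile            # elements actually processed
    return padded_work + 0.5 * num_launches      # padding waste + launch overhead

def build_dispatcher(shapes: List[int], candidate_tiles: List[int]) -> Dict[int, int]:
    """One joint 'tuning' pass: map every possible shape to its cheapest tile size."""
    return {s: min(candidate_tiles, key=lambda t: padded_cost(s, t)) for s in shapes}

dispatcher = build_dispatcher(shapes=list(range(1, 129)), candidate_tiles=[8, 16, 32, 64])
print(dispatcher[1], dispatcher[100], dispatcher[128])  # tile chosen per sequence length
```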

We tested our approach on a natural-language-processing (NLP) task that could take inputs ranging in size from 1 to 128 tokens. When we used a random sampling of input sizes reflecting a plausible real-world distribution, we sped up the optimization process almost sixfold relative to the best prior auto-scheduler. That speedup rises to more than 94-fold when we consider all possible shapes.

Besides being much faster, DietCode also improves the performance of the resulting code, by up to 70% relative to prior auto-schedulers and by up to 19% relative to hand-optimized code in existing tensor operation libraries. It thus promises to speed up our customers’ dynamic-shape machine learning workloads.

AI image generation is here in a big way. A newly released open source image synthesis model called Stable Diffusion allows anyone with a PC and a decent GPU to conjure up almost any visual reality they can imagine. It can imitate virtually any visual style, and if you feed it a descriptive phrase, the results appear on your screen like magic.

Some artists are delighted by the prospect, others aren’t happy about it, and society at large still seems largely unaware of the rapidly evolving tech revolution taking place through communities on Twitter, Discord, and GitHub. Image synthesis arguably brings implications as big as the invention of the camera—or perhaps the creation of visual art itself. Even our sense of history might be at stake, depending on how things shake out. Either way, Stable Diffusion is leading a new wave of deep learning creative tools that are poised to revolutionize the creation of visual media.

Stable Diffusion is a powerful open-source image AI that competes with OpenAI’s DALL-E 2. Training it was probably comparatively cheap.

Anyone interested can download the open-source Stable Diffusion model for free from GitHub and run it locally on a compatible graphics card. The card needs to be reasonably powerful (at least 5.1 GB of VRAM), but you don’t need a high-end computer.
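As a minimal sketch of what running it locally can look like, here is one common route using the Hugging Face diffusers wrapper around the released v1.4 weights rather than the raw GitHub repository; it assumes a CUDA-capable GPU with enough VRAM.

```python
# Minimal sketch of running Stable Diffusion locally via the Hugging Face
# "diffusers" wrapper (one of several ways to run the released weights).
# Assumes a CUDA-capable GPU; the model name refers to the v1.4 release.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,   # half precision keeps VRAM usage modest
)
pipe = pipe.to("cuda")

image = pipe("a lighthouse on a cliff at sunset, oil painting").images[0]
image.save("lighthouse.png")
```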

In addition to the local, free version, the Stable Diffusion team also offers access via a web interface. For about $12, you get roughly 1,000 image prompts.

Researchers at Oxford University’s Department of Computer Science, in collaboration with colleagues from Bogazici University, Turkey, have developed a novel artificial intelligence (AI) system to enable autonomous vehicles (AVs) to achieve safer and more reliable navigation, especially in adverse weather conditions and GPS-denied driving scenarios. The results have been published today in Nature Machine Intelligence.

Yasin Almalioglu, who completed the research as part of his DPhil in the Department of Computer Science, said, “The difficulty for AVs to achieve precise positioning during challenging conditions is a major reason why these have been limited to relatively small-scale trials up to now. For instance, weather such as rain or snow may cause an AV to detect itself in the wrong lane before a turn, or to stop too late at an intersection because of imprecise positioning.”

To overcome this problem, Almalioglu and his colleagues developed a novel, self-supervised AI model for ego-motion estimation, a crucial component of an AV’s driving system that estimates the car’s moving position relative to objects observed from the car itself. The model brought together richly detailed information from visual sensors (which can be disrupted by adverse conditions) with data from weather-immune sources (such as radar), so that the benefits of each can be used under different weather conditions.
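The paper’s architecture is a learned model, but the underlying intuition of weighting each sensor by how much it can currently be trusted can be sketched in a few lines. The snippet below is purely illustrative and is not the authors’ implementation.

```python
# Purely illustrative sketch of fusing two ego-motion estimates by confidence,
# not the actual self-supervised model described in the paper.
import numpy as np

def fuse_ego_motion(visual_delta: np.ndarray, radar_delta: np.ndarray,
                    visual_conf: float, radar_conf: float) -> np.ndarray:
    """Confidence-weighted blend of per-frame motion estimates (dx, dy, dtheta)."""
    weights = np.array([visual_conf, radar_conf])
    weights = weights / weights.sum()
    return weights[0] * visual_delta + weights[1] * radar_delta

# In clear weather the camera estimate is trusted more; in heavy rain its
# confidence drops and the weather-immune radar estimate dominates.
clear = fuse_ego_motion(np.array([1.0, 0.0, 0.01]), np.array([0.9, 0.0, 0.02]), 0.9, 0.4)
rain = fuse_ego_motion(np.array([1.3, 0.2, 0.05]), np.array([0.9, 0.0, 0.02]), 0.2, 0.8)
print(clear, rain)
```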


The World Robot Conference 2022 was held in Beijing. Due to the ongoing pandemic, only Chinese robotics companies were represented in person, while the rest of the world joined online. But there was still, as always, plenty to see at the Chinese booths. We gathered all the most interesting things from the largest robot exhibition for you in one video!

0:00 Intro.
0:30 Chinese robotics market.
1:06 EX Robots.
2:38 Dancing humanoid robot.
3:37 Unitree Robotics.
4:55 Underwater bionic robot.
5:23 Bionic arm and anthropomorphic robot.
5:43 Mobile two-wheeled robot.
6:40 Industrial robots.
7:04 Reconnaissance Robot.
8:05 Logistics Solutions.
9:31 Intelligent Platform.
10:03 Robot++
10:41 Robots in Medicine.
10:58 PCR tests with robots.
11:16 Robotic surgical system.


The very first industrial revolution kicked off with the introduction of steam- and water-powered technology. We have come a long way since then, with the current fourth industrial revolution, or Industry 4.0, focused on using new technology to boost industrial efficiency.

Some of these technologies include the internet of things (IoT), cloud computing, cyber-physical systems, and artificial intelligence (AI). AI is the key driver of Industry 4.0, enabling industrial systems to self-monitor, interpret, diagnose, and analyze processes on their own. AI methods, such as machine learning (ML), deep learning (DL), natural language processing (NLP), and computer vision (CV), help industries forecast their maintenance needs and cut down on downtime.

However, to ensure the smooth, stable deployment and integration of AI-based systems, the actions and results of these systems must be made comprehensible, or, in other words, “explainable” to experts. Explainable AI (XAI) focuses on developing algorithms that make the results produced by AI-based systems understandable to humans, which is what makes XAI deployment valuable in Industry 4.0.
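One lightweight way to get such explanations is to ask which input signals a trained model actually relies on. The sketch below does this with permutation importance on a made-up predictive-maintenance dataset; the sensor names and data are invented for illustration and do not come from any particular deployment.

```python
# Illustrative XAI example: permutation importance on a synthetic
# predictive-maintenance dataset (sensor names and data are made up).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
sensors = ["temperature", "vibration", "pressure", "current"]
X = rng.normal(size=(500, len(sensors)))
# Failures in this toy dataset are driven mostly by temperature and vibration.
y = ((0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.1 * rng.normal(size=500)) > 0.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank sensors by how much shuffling each one hurts the model's accuracy.
for name, score in sorted(zip(sensors, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:12s} importance: {score:.3f}")
```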

Researchers have found a better way to reduce gender bias in natural language processing models while preserving vital information about the meanings of words, according to a recent study that could be a key step toward addressing the issue of human biases creeping into artificial intelligence.

While a computer itself is an unbiased machine, much of the data and programming that flows through computers is generated by humans. This can be a problem when conscious or unconscious human biases end up being reflected in the text samples AI models use to analyze and “understand” language.

Computers aren’t immediately able to understand text, explains Lei Ding, first author on the study and graduate student in the Department of Mathematical and Statistical Sciences. They need words to be converted into a set of numbers to understand them—a process called word embedding.
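A toy example of that idea: once words are vectors, gender bias can be probed by projecting profession words onto a “he minus she” direction. The three-dimensional vectors below are invented stand-ins, not real trained embeddings or the study’s method.

```python
# Toy illustration of word embeddings and a simple gender-bias probe.
# The vectors are invented 3-D stand-ins, not real trained embeddings.
import numpy as np

embeddings = {
    "he":       np.array([ 0.9, 0.1, 0.0]),
    "she":      np.array([-0.9, 0.1, 0.0]),
    "nurse":    np.array([-0.6, 0.7, 0.2]),
    "engineer": np.array([ 0.5, 0.8, 0.1]),
}

# Direction in embedding space that separates "he" from "she".
gender_direction = embeddings["he"] - embeddings["she"]
gender_direction = gender_direction / np.linalg.norm(gender_direction)

for word in ("nurse", "engineer"):
    bias = float(embeddings[word] @ gender_direction)
    print(f"{word:9s} projection onto he-she axis: {bias:+.2f}")
```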

In 1998 I was exposed to the term “disruptive innovation” for the first time. I read a wonderful book, “The Innovator’s Dilemma” by Clayton Christensen, where I learned the difference between incremental innovation and disruptive innovation. He analyzed the hard drive industry and showed that while many companies were trying to increase the capacity of the drives, other companies changed the form factor and made the drives smaller. This resulted in disruptive progress in the industry. We also recently witnessed dramatic advances in artificial intelligence (AI), where in 2013/2014 AI systems started outperforming humans in image recognition.


Identifying novel targets and designing novel molecules is not the only way to innovate in the biopharmaceutical industry. Sometimes, innovation in delivery systems for well-known and established therapeutics may be just as disruptive as a new targeted medicine.

Nuro, a SoftBank-backed developer of street-legal autonomous, electric delivery vehicles, has struck a long-term partnership with Uber to use its toaster-shaped micro-vans to haul food orders, groceries and other goods to customers in Silicon Valley and Houston using the Uber Eats service starting this year.

People using the Uber Eats app in Houston and Mountain View, California (where Nuro is based) will be able to order deliveries using the new autonomous service this fall, with plans to expand the program to other parts of the San Francisco Bay Area in the months ahead, the companies said.

