

Artificial intelligence (AI) has been steadily influencing business processes, automating repetitive and mundane tasks even for complex industries like construction and medicine.

While AI applications often work beneath the surface, AI-based content generators are front and center as businesses try to keep up with the growing demand for original content. Creating content takes time, though, and producing high-quality material consistently is difficult. For that reason, AI continues to find its way into creative business processes like content marketing.

Cloostermans will become part of Amazon Robotics, Amazon’s division focused on automating aspects of its warehouse operations. The unit was formed after Amazon acquired Kiva Systems, a manufacturer of warehouse robots, for $775 million a decade ago.

Amazon continues to launch new machines in its warehouses. In June, the company unveiled a package-ferrying machine called Proteus, which it described as its first fully autonomous mobile robot. It has also deployed other robots that help sort and move packages.

In a blog post, Ian Simpson, vice president of Global Robotics at Amazon, said the company is investing in robotics and other technology to make its warehouses safer for employees.

Vocal biomarkers have become a buzzword during the pandemic, but what are they, and how could they contribute to diagnostics?

What if a disease could be identified over a phone call?

Vocal biomarkers have great potential to reshape diagnostics. Because certain diseases, such as those affecting the heart, lungs, vocal folds or brain, can alter a person’s voice, AI-based voice analyses open new horizons in medicine.

Beyond diagnosis, vocal biomarkers could also support remote monitoring and even COVID screening. So is it possible to diagnose illnesses from the sound of your voice?
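As a rough illustration only (no specific diagnostic product is implied, and the features, signal, and thresholds below are invented for the example), voice-analysis pipelines typically begin by reducing a waveform to simple acoustic features that a model can then learn from:

```python
import math

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose sign flips; rises with pitch and noisiness."""
    flips = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return flips / (len(samples) - 1)

def rms_energy(samples):
    """Root-mean-square amplitude; a crude loudness measure."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A synthetic 100 Hz tone sampled at 8 kHz stands in for one second of voice.
sr = 8000
tone = [math.sin(2 * math.pi * 100 * n / sr) for n in range(sr)]

features = {"zcr": zero_crossing_rate(tone), "rms": rms_energy(tone)}
```

A real system would compute many such features over short frames (pitch, jitter, spectral shape) and feed them, or the raw audio itself, to a trained classifier.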

At this year’s Conference on Machine Learning and Systems (MLSys), we and our colleagues presented a new auto-scheduler called DietCode, which handles dynamic-shape workloads much more efficiently than its predecessors. Where existing auto-schedulers have to optimize each possible shape individually, DietCode constructs a shape-generic search space that enables it to optimize all possible shapes simultaneously.

We tested our approach on a natural-language-processing (NLP) task that could take inputs ranging in size from 1 to 128 tokens. When we use a random sampling of input sizes that reflects a plausible real-world distribution, we speed up the optimization process almost sixfold relative to the best prior auto-scheduler. That speedup increases to more than 94-fold when we consider all possible shapes.

Despite being much faster, DietCode also improves the performance of the resulting code, by up to 70% relative to prior auto-schedulers and up to 19% relative to hand-optimized code in existing tensor operation libraries. It thus promises to speed up our customers’ dynamic-shaped machine learning workloads.
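DietCode’s actual search-space construction is far more sophisticated, but a toy cost model (everything below, the tile sizes, shapes, and padding cost, is invented for illustration) shows why per-shape tuning scales badly while a shape-generic search does not:

```python
# Toy model: auto-tuning picks a micro-kernel tile size for a padded
# workload; "measurements" approximate expensive on-device trials.
shapes = list(range(1, 129))     # dynamic sequence lengths, 1..128 tokens
tiles = [8, 16, 32, 64]          # candidate tile sizes

def padded_work(seq_len, tile):
    """Work performed after rounding the sequence length up to the tile."""
    return ((seq_len + tile - 1) // tile) * tile

# Per-shape tuning: every (shape, candidate) pair needs its own trial.
per_shape_measurements = len(shapes) * len(tiles)    # 128 * 4 = 512

# Shape-generic tuning: score each candidate once on a small sample of
# shapes, then reuse the single winning kernel for every shape.
sample = [1, 16, 64, 128]
generic_measurements = len(tiles) * len(sample)      # 4 * 4 = 16
best_tile = min(tiles, key=lambda t: sum(padded_work(s, t) for s in sample))
```

In this toy setup the generic search needs 32× fewer trials; the sixfold and 94-fold savings reported above come from the same effect at real scale.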

AI image generation is here in a big way. A newly released open source image synthesis model called Stable Diffusion allows anyone with a PC and a decent GPU to conjure up almost any visual reality they can imagine. It can imitate virtually any visual style, and if you feed it a descriptive phrase, the results appear on your screen like magic.

Some artists are delighted by the prospect, others aren’t happy about it, and society at large still seems largely unaware of the rapidly evolving tech revolution taking place through communities on Twitter, Discord, and Github. Image synthesis arguably brings implications as big as the invention of the camera—or perhaps the creation of visual art itself. Even our sense of history might be at stake, depending on how things shake out. Either way, Stable Diffusion is leading a new wave of deep learning creative tools that are poised to revolutionize the creation of visual media.

Stable Diffusion is a powerful open-source image AI that competes with OpenAI’s DALL-E 2. Training it was probably rather cheap by comparison.

Anyone interested can download the open-source image AI Stable Diffusion for free from GitHub and run it locally on a compatible graphics card. The card must be reasonably powerful (at least 5.1 GB of VRAM), but you don’t need a high-end computer.

In addition to the local, free version, the Stable Diffusion team also offers access via a web interface. For about $12, you get roughly 1,000 image prompts.

Researchers at Oxford University’s Department of Computer Science, in collaboration with colleagues from Bogazici University, Turkey, have developed a novel artificial intelligence (AI) system to enable autonomous vehicles (AVs) to achieve safer and more reliable navigation, especially under adverse weather conditions and in GPS-denied driving scenarios. The results have been published today in Nature Machine Intelligence.

Yasin Almalioglu, who completed the research as part of his DPhil in the Department of Computer Science, said, “The difficulty for AVs to achieve precise positioning during challenging conditions is a major reason why they have been limited to relatively small-scale trials up to now. For instance, weather such as rain or snow may cause an AV to detect itself in the wrong lane before a turn, or to stop too late at an intersection because of imprecise positioning.”

To overcome this problem, Almalioglu and his colleagues developed a novel, self-supervised model for ego-motion estimation, a crucial component of an AV’s driving system that estimates the car’s motion relative to objects observed from the car itself. The model brings together richly detailed information from visual sensors (which can be disrupted by adverse conditions) with data from weather-immune sources (such as radar), so that the benefits of each can be exploited under different weather conditions.
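The published model is a learned network, not a hand-written rule; purely as intuition for confidence-weighted sensor fusion (the function, weights, and numbers here are illustrative assumptions, not from the paper), the idea can be sketched as:

```python
def fuse_ego_motion(visual_delta, radar_delta, visual_confidence):
    """Blend two translation estimates (metres) by a confidence weight.

    visual_confidence lies in [0, 1]: near 1 in clear weather, near 0 when
    rain, snow, or fog degrades the cameras.
    """
    w = max(0.0, min(1.0, visual_confidence))
    return w * visual_delta + (1.0 - w) * radar_delta

# Clear weather: trust the richer visual estimate.
clear = fuse_ego_motion(1.02, 0.95, visual_confidence=0.9)
# Heavy snow: lean on the weather-immune radar.
snow = fuse_ego_motion(0.40, 0.95, visual_confidence=0.1)
```

In clear weather the fused estimate tracks the visual reading; in snow it falls back toward the radar, which is the qualitative behavior the researchers are after.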


The World Robot Conference 2022 was held in Beijing. Due to the ongoing pandemic, only Chinese robotics companies exhibited in person, while the rest of the world joined online. Still, as always, there was plenty to see at the Chinese booths. We gathered all the most interesting things from the largest robot exhibition into one video!

0:00 Intro
0:30 Chinese robotics market
1:06 EX Robots
2:38 Dancing humanoid robot
3:37 Unitree Robotics
4:55 Underwater bionic robot
5:23 Bionic arm and anthropomorphic robot
5:43 Mobile two-wheeled robot
6:40 Industrial robots
7:04 Reconnaissance robot
8:05 Logistics solutions
9:31 Intelligent platform
10:03 Robot++
10:41 Robots in medicine
10:58 PCR tests with robots
11:16 Robotic surgical system
