A team of engineers from the University of California San Diego has unveiled a prototype four-legged soft robot that doesn’t need any electronics to work. The robot only needs a constant source of pressurized air for all its functions, including its controls and locomotion systems.

Most soft robots are powered by pressurized air and are controlled by electronic circuits. This approach works, but it requires complex components, like valves and pumps driven by actuators, which do not always fit inside the robot’s body.

In contrast, this new prototype is controlled by a lightweight, low-cost system of pneumatic circuits, consisting of flexible tubes and soft valves, onboard the robot itself. The robot can walk on command or in response to signals it detects from the environment.

Text data is ubiquitous: it appears in many forms, such as posts, books, articles, and blogs. What is more interesting is that a subset of Artificial Intelligence called Natural Language Processing (NLP) can convert text into a form usable for machine learning. That may sound like a lot, but getting to know the details and the proper implementation of machine learning algorithms ensures that one learns the important tools in the process.
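To make the "text into a form usable for machine learning" step concrete, here is a minimal bag-of-words sketch in pure Python. It is an illustrative toy, not a production NLP pipeline: each document becomes a vector of word counts over a shared vocabulary, which is the kind of numeric representation a downstream model consumes.

```python
from collections import Counter

def build_vocabulary(docs):
    """Map every distinct lowercase word across the corpus to a column index."""
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    return {word: i for i, word in enumerate(vocab)}

def vectorize(doc, vocab):
    """Turn one document into a fixed-length vector of word counts."""
    counts = Counter(doc.lower().split())
    return [counts.get(word, 0) for word in sorted(vocab, key=vocab.get)]

docs = ["The robot walks", "The robot walks and walks"]
vocab = build_vocabulary(docs)            # {'and': 0, 'robot': 1, 'the': 2, 'walks': 3}
vectors = [vectorize(d, vocab) for d in docs]
# vectors → [[0, 1, 1, 1], [1, 1, 1, 2]]
```

Real projects would typically reach for a library vectorizer (e.g. TF-IDF) instead of raw counts, but the underlying idea is the same: text in, numbers out.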

Since there are newer and better libraries being created for machine learning, it makes sense to learn some of the state-of-the-art tools that can be used for prediction. I recently came across a Kaggle challenge about predicting the difficulty of a text.

The output variable, the difficulty of the text, is continuous, so regression techniques are the natural choice for predicting it. Since text is so ubiquitous, applying the right processing and prediction mechanisms would be really valuable, especially for companies that receive feedback and reviews in the form of text.
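As a hedged sketch of what "regression on text difficulty" can look like, the example below derives a single hand-picked feature per text (average word length, purely an illustrative choice) and fits ordinary least squares by hand. The training pairs are hypothetical, not from the Kaggle challenge.

```python
def avg_word_length(text):
    """A toy text feature: mean number of characters per word."""
    words = text.split()
    return sum(len(w) for w in words) / len(words)

def fit_ols(xs, ys):
    """Slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical (text, difficulty score) training pairs for illustration
train = [("the cat sat", 1.0),
         ("quantum entanglement persists", 3.0),
         ("dogs run fast", 1.2),
         ("photosynthesis converts radiation", 2.8)]
xs = [avg_word_length(t) for t, _ in train]
ys = [y for _, y in train]
a, b = fit_ols(xs, ys)
predicted = a * avg_word_length("neural networks generalize") + b
```

A real solution would use richer features (TF-IDF vectors, embeddings) and a library regressor, but the shape of the problem is the same: continuous target, regression fit, numeric prediction for unseen text.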

2021 was an eventful year for AI. With the advent of new techniques, robust systems that can understand the relationships not only between words but words and photos, videos, and audio became possible. At the same time, policymakers — growing increasingly wary of AI’s potential harm — proposed rules aimed at mitigating the worst of AI’s effects, including discrimination.

Meanwhile, AI research labs — while signaling their adherence to “responsible AI” — rushed to commercialize their work, under pressure from corporate parents or investors. But in a bright spot, organizations ranging from the U.S. National Institute of Standards and Technology (NIST) to the United Nations released guidelines laying the groundwork for more explainable AI, emphasizing the need to move away from “black-box” systems in favor of those whose reasoning is transparent.

As for what 2022 might hold, the renewed focus on data engineering — designing the datasets used to train, test, and benchmark AI systems — that emerged in 2021 seems poised to remain strong. Innovations in AI accelerator hardware are another shoo-in for the year to come, as is a climb in the uptake of AI in the enterprise.