
You are on the PRO Robots channel, and today we present you with high-tech news: an exhibition of robot chefs in Japan, novelties from the Automate 2022 robotics show in the USA, new and unusual robots for space, an unexpected discovery by the robot that visited the asteroid Bennu, and the first Italian humanoid robot. All the most interesting technology news in one issue!

#prorobots #robots #robot #futuretechnologies #robotics.

More interesting and useful content:
✅ Elon Musk Innovation https://www.youtube.com/playlist?list=PLcyYMmVvkTuQ-8LO6CwGWbSCpWI2jJqCQ
✅ Future Technologies Reviews https://www.youtube.com/playlist?list=PLcyYMmVvkTuTgL98RdT8-z-9a2CGeoBQF
✅ Technology news https://www.facebook.com/PRO.Robots.Info

#prorobots #technology #roboticsnews.

Humans are good at looking at images and finding patterns or making comparisons. Look at a collection of dog photos, for example, and you can sort them by color, by ear size, by face shape, and so on. But could you compare them quantitatively? And perhaps more intriguingly, could a machine extract meaningful information from images that humans can’t?

Now a team of scientists at Stanford University's Chan Zuckerberg Biohub has developed a machine learning method to quantitatively analyze and compare images—in this case microscopy images of proteins—with no prior knowledge. As reported in Nature Methods, their algorithm, dubbed "cytoself," provides rich, detailed information on protein location and function within a cell. This capability could shorten research time for cell biologists and eventually be used to accelerate drug discovery and drug screening.
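The paper's actual model is a far more sophisticated self-supervised network, but the basic idea of comparing images quantitatively can be sketched in a few lines: encode each image into a feature vector, then measure how close the vectors are. The toy encoder below is invented for illustration and is not the cytoself architecture.

```python
# Illustrative sketch only: embedding microscopy images and comparing them
# quantitatively. This is NOT cytoself; the architecture and sizes here
# are invented placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Map a 1-channel 64x64 image to a 32-dim embedding vector."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = TinyEncoder()
img_a = torch.randn(1, 1, 64, 64)  # stand-ins for two protein images
img_b = torch.randn(1, 1, 64, 64)

with torch.no_grad():
    emb_a, emb_b = encoder(img_a), encoder(img_b)

# A single number quantifying how similar the two images look to the model:
similarity = F.cosine_similarity(emb_a, emb_b).item()
print(f"embedding similarity: {similarity:+.3f}")
```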

“This is very exciting—we’re applying AI to a new kind of problem and still recovering everything that humans know, plus more,” said Loic Royer, co-corresponding author of the study. “In the future we could do this for different kinds of images. It opens up a lot of possibilities.”

Researchers have been trying to build artificial synapses for years in the hope of approaching the unrivaled computational performance of the human brain. A new approach has now produced designs that are 1,000 times smaller and 10,000 times faster than their biological counterparts.

Despite the runaway success of deep learning over the past decade, this brain-inspired approach to AI faces the challenge that it is running on hardware that bears little resemblance to real brains. This is a big part of the reason why a human brain weighing just three pounds can pick up new tasks in seconds using the same amount of power as a light bulb, while training the largest neural networks takes weeks, megawatt hours of electricity, and racks of specialized processors.

That’s prompting growing interest in efforts to redesign the underlying hardware AI runs on. The idea is that by building computer chips whose components act more like natural neurons and synapses, we might be able to approach the extreme space and energy efficiency of the human brain. The hope is that these so-called “neuromorphic” processors could be much better suited to running AI than today’s computer chips.
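To make "components that act more like natural neurons and synapses" concrete, here is a toy software simulation of a synapse as a programmable conductance driving a leaky integrate-and-fire neuron. It is purely illustrative and does not model any specific neuromorphic device.

```python
# Toy model of an artificial synapse as a programmable conductance, plus the
# leaky integrate-and-fire neuron it drives. Purely illustrative; real
# neuromorphic hardware implements this in analog circuitry.
import numpy as np

class Synapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, lr=0.05):
        self.g, self.g_min, self.g_max, self.lr = g, g_min, g_max, lr

    def pulse(self, sign):
        """A +1 pulse potentiates (raises conductance), -1 depresses."""
        self.g = float(np.clip(self.g + sign * self.lr, self.g_min, self.g_max))

class LIFNeuron:
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau, self.threshold, self.v = tau, threshold, 0.0

    def step(self, current, dt=1.0):
        # Leaky integration: v decays toward 0 while accumulating input current.
        self.v += dt * (-self.v / self.tau + current)
        if self.v >= self.threshold:
            self.v = 0.0          # reset after a spike
            return True
        return False

syn, neuron = Synapse(), LIFNeuron()
for t in range(100):
    if t % 25 == 0:
        syn.pulse(+1)             # "learning": strengthen the synapse
    if neuron.step(current=syn.g * 0.1):
        print(f"t={t}: spike (conductance g={syn.g:.2f})")
```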

In medicine, a prosthesis, or prosthetic implant, is an artificial device that replaces a body part lost through trauma, disease, or a condition present at birth. A pioneering project to develop advanced pressure sensors for use in robotic systems could transform prosthetics and robotic limbs. The project aims to give robots improved motor skills and dexterity through highly accurate pressure sensors that provide haptic feedback and distributed touch.

It is led by the University of the West of Scotland (UWS) and Integrated Graphene Ltd, with support from the Scottish Research Partnership in Engineering (SRPe) and the National Manufacturing Institute for Scotland (NMIS) Industry Doctorate Programme in Advanced Manufacturing. This is not the first time this team of researchers has set out to bring much-needed, transformative change to prosthetics and robotic limbs.

The human brain relies on a constant stream of tactile information to carry out basic tasks, like holding a cup of coffee. Yet some of the most advanced motorized limbs — including those controlled solely by a person’s thoughts — don’t provide this sort of feedback. As a result, even state-of-the-art prosthetics can often frustrate their users.
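As a rough illustration of what distributed touch with haptic feedback might involve, here is a minimal sketch that maps a grid of fingertip pressure readings to a single vibration intensity. The sensor layout, pressure range, and mapping are all assumptions made for illustration, not the UWS project's design.

```python
# Sketch: turning a grid of fingertip pressure readings into one
# vibrotactile feedback level. All parameters are invented assumptions.
import numpy as np

def haptic_level(pressure_grid_kpa, full_scale_kpa=100.0):
    """Map a 2D array of pressures (kPa) to a 0-255 vibration intensity.

    Uses the peak reading so a sharp contact anywhere on the pad is felt,
    with a mild nonlinearity to boost sensitivity at light touch.
    """
    peak = np.clip(np.max(pressure_grid_kpa) / full_scale_kpa, 0.0, 1.0)
    return int(255 * peak ** 0.5)  # sqrt curve: more resolution near zero

# Example: a 4x4 tactile array with one strong contact point.
grid = np.zeros((4, 4))
grid[1, 2] = 36.0  # kPa
print(haptic_level(grid))  # -> 153
```

Using the peak rather than the mean means a sharp poke anywhere on the pad registers strongly, which matters more for grip safety than the average load does.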

The greatest artistic tool ever built, or a harbinger of doom for entire creative industries? OpenAI’s second-generation DALL-E 2 system is slowly opening up to the public, and its text-based image generation and editing abilities are awe-inspiring.

The pace of progress in the field of AI-powered text-to-image generation is positively frightening. The generative adversarial network, or GAN, first emerged in 2014, putting forth the idea of two AIs in competition with one another, both “trained” by being shown a huge number of real images, labeled to help the algorithms learn what they’re looking at. A “generator” AI then starts to create images, and a “discriminator” AI tries to guess if they’re real images or AI creations.

At first, they’re evenly matched, both being absolutely terrible at their jobs. But they learn; the generator is rewarded if it fools the discriminator, and the discriminator is rewarded if it correctly picks the origin of an image. Over millions and billions of iterations – each taking a matter of seconds – they improve to the point where humans start struggling to tell the difference.
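The competition is easy to see in code. Below is a minimal GAN training loop in PyTorch, run on a toy 1-D Gaussian instead of images so it stays self-contained; it illustrates the general recipe, not any particular production system.

```python
# Minimal GAN sketch: a generator and a discriminator competing on a toy
# 1-D Gaussian distribution rather than images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))                 # generator's forgeries

    # Discriminator is rewarded for labeling real vs. fake correctly.
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator is rewarded for fooling the discriminator into saying "real".
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"real mean ~3.0, generated mean: {G(torch.randn(1000, 8)).mean().item():.2f}")
```

After a couple of thousand iterations the generator's output mean drifts toward the real data's mean of 3.0, the same dynamic that, at vastly larger scale, yields photorealistic images.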

So what does artificial intelligence predict the last selfies would look like? Ghoulish humans holding their phones at the end of the earth, during an event that destroys every sign of life. The scenario is, of course, hypothetical and hard to picture. Midjourney, an AI image generator from an independent lab often mentioned alongside OpenAI, imagined a few such scenes, revealing just how scary the results can be. Shared by the TikTok account Robot Overloards, the images are hellish in tone and gory in substance: disfigured human beings with eyes as big as rat holes and fingers long enough to scoop curdled blood from creatures of another world. These AI-generated frames go beyond a portrayal of annihilation. Firstly, they are cut off from reality, and secondly, they are very few. The end of the world is billions of years away, by which time the selfie will be a fossilized concept and humans will be considered the biological ancestors of cyborgs.

The pictures are stunning, though, in the sense that a single frame can pack in huge explosions going off in the background while a man stares maniacally into the camera. The imaginative spark of artificial intelligence deserves some appreciation here; perhaps it took a hint or two from real images of people shooting selfies against the backdrop of accidents and natural calamities for use as clickbait. Image generators, apparently, give users the power to visualize their imagination, however far removed from reality. Yet netizens are finding the results pleasantly captivating, so much so that one wondered whether they came from Nibiru or Planet X theories. That one TikTok video has drawn more than 12.7 million views, and the reply "OK no more sleeping," posted by a TikTok user, captures, more than anything, the superficial melodrama of AI's image-generating capability.

TuSimple, a transportation company focusing on driverless tech for trucks, recently transported a load of products with its autonomous truck systems.

The road to fully autonomous trucks is long and winding, but not impossible, and it seems to be within closer reach than fully self-driving cars.

The company in charge of the feat was TuSimple. Eighty percent of the journey, or 950 miles (1,528 km), was driven by the autonomous system, which puts the full cross-country route at roughly 1,190 miles (1,910 km); a human drove the remaining 20 percent and stayed at the ready to take the wheel if anything went wrong with the technology.

New technologies being deployed to clean up the oceans include aquatic drones that can be programmed to scoop floating debris from the surface of rivers, and beach buggies that use artificial intelligence (AI) to search for and pick up litter.
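For a sense of what such a buggy's software might do, here is a hypothetical detect-and-collect loop; the Detection type, the detect_litter placeholder, and the printed actions are all invented for illustration, since a real system would use a trained vision model and the vehicle's own control API.

```python
# Hypothetical control loop for an AI litter-picking beach buggy.
# Every interface here is an invented placeholder.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # metres ahead of the buggy
    y: float          # metres left (-) / right (+)
    label: str        # e.g. "bottle", "wrapper"

def detect_litter(camera_frame) -> list[Detection]:
    """Placeholder for a trained object detector (e.g. a CNN)."""
    return [Detection(x=2.4, y=-0.3, label="bottle")]  # fake result

def run_once(camera_frame):
    detections = detect_litter(camera_frame)
    for d in sorted(detections, key=lambda d: d.x):    # nearest first
        print(f"drive {d.x:.1f} m, offset {d.y:+.1f} m -> pick up {d.label}")

run_once(camera_frame=None)
```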

Scientists are also hoping to scale up the use of magnetic nano-scale springs that hook on to microplastics and break them down.

MailOnline takes a closer look at some of the technologies currently being used to reduce the man-made debris in our oceans, and those that are still in development.

Teams of mobile robots could be highly effective in helping humans to complete strenuous manual tasks, such as manufacturing processes or the transportation of heavy objects. In recent years, some of these robots have already been tested and introduced in real-world settings, attaining very promising results.

Researchers at Northwestern University's Center for Robotics and Biosystems have recently developed new collaborative mobile robots, dubbed Omnid Mocobots. These robots, introduced in a paper pre-published on arXiv, are designed to cooperate with each other and with humans to safely pick up, handle, and transport delicate and flexible payloads.

“The Center for Robotics and Biosystems has a long history building robots that collaborate physically with humans,” Matthew Elwin, one of the researchers who carried out the study, told TechXplore. “In fact, the term ‘cobots’ was coined here. The inspiration for the current work was manufacturing, warehouse, and construction tasks involving manipulating large, articulated, or flexible objects, where it is helpful to have several robots supporting the object.”
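The statics behind several robots supporting one payload can be sketched compactly: the vertical support forces must add up to the payload's weight while producing zero net torque about its center of mass. The sketch below illustrates only that principle; it is not the Omnid controller.

```python
# Sketch of multi-robot payload support statics: given where each robot
# holds a rigid payload, find vertical forces that balance weight and
# torque. Illustrative only, not the Omnid Mocobots control law.
import numpy as np

def support_forces(contact_xy, com_xy, mass_kg, g=9.81):
    """Solve sum(F) = W and zero moments about the CoM (least squares)."""
    contact_xy = np.asarray(contact_xy, dtype=float)
    W = mass_kg * g
    # Rows: total vertical force, moment about x through CoM, moment about y.
    A = np.vstack([
        np.ones(len(contact_xy)),
        contact_xy[:, 1] - com_xy[1],
        contact_xy[:, 0] - com_xy[0],
    ])
    b = np.array([W, 0.0, 0.0])
    F, *_ = np.linalg.lstsq(A, b, rcond=None)
    return F

# Three robots holding a 30 kg panel; CoM slightly off-centre.
forces = support_forces([(0, 0), (2, 0), (1, 1.5)], com_xy=(1.0, 0.4), mass_kg=30)
print(np.round(forces, 1))  # newtons per robot, e.g. [107.9 107.9  78.5]
```

With the center of mass closer to the first two robots, they take larger shares of the load, exactly the kind of imbalance a cooperative controller has to account for.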

An artificial intelligence program asked to predict what "the last selfie ever taken" would look like produced several nightmarish images.

TikTok account Robot Overloards, which dedicates its page to providing viewers with "daily disturbing AI generated images," uploaded a video on Sunday in which the AI DALL-E was asked to predict what the last selfies on Earth would look like.

The images produced showed bloody, mutilated humans taking selfies amongst apocalyptic scenes. One “selfie” shows a skeleton-like man holding the camera for a selfie with dark hills on fire and smoke in the air behind him.