
Researchers have been trying to build artificial synapses for years in the hope of approaching the unrivaled computational performance of the human brain. A new approach has now produced artificial synapses that are 1,000 times smaller and 10,000 times faster than their biological counterparts.

Despite the runaway success of deep learning over the past decade, this brain-inspired approach to AI faces the challenge that it is running on hardware that bears little resemblance to real brains. This is a big part of the reason why a human brain weighing just three pounds can pick up new tasks in seconds using the same amount of power as a light bulb, while training the largest neural networks takes weeks, megawatt hours of electricity, and racks of specialized processors.

That’s prompting growing interest in efforts to redesign the underlying hardware AI runs on. The idea is that by building computer chips whose components act more like natural neurons and synapses, we might be able to approach the extreme space and energy efficiency of the human brain. The hope is that these so-called “neuromorphic” processors could be much better suited to running AI than today’s computer chips.
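To make the contrast concrete, here is a minimal sketch of the kind of component neuromorphic designs emulate: a leaky integrate-and-fire (LIF) neuron that communicates through discrete spikes rather than the dense matrix arithmetic of conventional deep learning hardware. This is a toy illustration in NumPy; the time constants, threshold, and input drive are illustrative assumptions, not the behavior of any particular chip.

```python
# Toy leaky integrate-and-fire neuron; all constants are illustrative.
import numpy as np

dt, tau = 1e-3, 20e-3        # time step and membrane time constant (seconds)
v_thresh, v_reset = 1.0, 0.0 # spike threshold and reset value (arbitrary units)
v, spikes = 0.0, []

input_current = np.random.uniform(0.0, 2.0, size=1000)  # synthetic input drive

for i_t in input_current:
    # Membrane potential leaks toward rest while integrating input.
    v += dt / tau * (-v + i_t)
    if v >= v_thresh:
        spikes.append(True)   # threshold crossing emits a spike...
        v = v_reset           # ...and resets the membrane potential
    else:
        spikes.append(False)

print(f"Mean firing rate: {np.mean(spikes) / dt:.1f} Hz")
```

The appeal for hardware is that such a neuron is silent most of the time and only consumes energy when it spikes, which is one reason neuromorphic chips promise large efficiency gains.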

In medicine, a prosthesis, or prosthetic implant, is an artificial device that replaces a body part lost through trauma, disease, or a condition present at birth. A pioneering project to develop advanced pressure sensors for robotic systems could transform prosthetics and robotic limbs. The project aims to give robots improved motor skills and dexterity through highly accurate pressure sensors that provide haptic feedback and distributed touch.

It is led by the University of the West of Scotland (UWS) and Integrated Graphene Ltd, and supported by the Scottish Research Partnership in Engineering (SRPe) and the National Manufacturing Institute for Scotland (NMIS) Industry Doctorate Programme in Advanced Manufacturing. This is not the first time the team has set out to bring much-needed transformative change to prosthetics and robotic limbs.

The human brain relies on a constant stream of tactile information to carry out basic tasks, like holding a cup of coffee. Yet some of the most advanced motorized limbs — including those controlled solely by a person’s thoughts — don’t provide this sort of feedback. As a result, even state-of-the-art prosthetics can often frustrate their users.

The greatest artistic tool ever built, or a harbinger of doom for entire creative industries? OpenAI’s second-generation DALL-E 2 system is slowly opening up to the public, and its text-based image generation and editing abilities are awe-inspiring.

The pace of progress in the field of AI-powered text-to-image generation is positively frightening. The generative adversarial network, or GAN, first emerged in 2014, putting forth the idea of two AIs in competition with one another. A “discriminator” AI is trained on a huge number of real images, labeled to help the algorithm learn what it’s looking at, while a “generator” AI starts to create images of its own; the discriminator then tries to guess whether each image it sees is real or an AI creation.

At first, they’re evenly matched, both being absolutely terrible at their jobs. But they learn: the generator is rewarded if it fools the discriminator, and the discriminator is rewarded if it correctly picks the origin of an image. Over millions of iterations, each taking a matter of seconds, they improve to the point where humans start struggling to tell the difference, as the sketch below illustrates.
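As a rough illustration of that training loop, here is a minimal GAN sketch in PyTorch. The “images” here are just points drawn from a toy 2-D Gaussian, and the network sizes, learning rates, and iteration count are illustrative assumptions, not values from the original 2014 work.

```python
# Minimal GAN training loop on toy 2-D data; hyperparameters are illustrative.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to fake "samples".
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = torch.randn(64, 2) * 0.5 + 2.0        # "real" data: a shifted Gaussian
    fake = G(torch.randn(64, latent_dim))        # generator's attempt

    # Discriminator is rewarded for telling real from fake.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator is rewarded when the discriminator labels its output as real.
    g_loss = bce(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

Modern systems like DALL-E 2 have moved beyond this exact adversarial recipe, but the core idea of learning to generate by optimizing against a judgment signal carries through.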

An artificial intelligence was asked to predict what the last selfies ever taken would look like: ghoulish humans clutching their phones at the end of the earth, an event that destroys every sign of life. The scenario is hypothetical and difficult to imagine, but the AI image generator Midjourney pictured a few versions, revealing how scary they can be. Shared by the TikTok account Robot Overloards, the images are hellish in tone and gory in substance, depicting disfigured human beings with gaping eyes and unnaturally long fingers. These AI-generated frames go beyond a portrayal of annihilation, but two caveats apply: first, they are cut off from reality, and second, there are only a few of them. Any actual end of the world is billions of years away, by which time the selfie will be a fossilized concept and today’s humans the biological ancestors of cyborgs.

The pictures are striking, though: one frame shows a man staring maniacally into the camera while huge explosions go off in the background. The imaginative spark of artificial intelligence deserves some credit here; perhaps it took a hint or two from real photos of people taking selfies against the backdrop of accidents and natural calamities, shot as clickbait. Image generators give users the power to visualize their imagination, however far removed from reality. Netizens are finding the results unsettlingly captivating, with one wondering whether they came from Nibiru or Planet X theories. That one TikTok video has drawn more than 12.7 million views, and the reply “OK no more sleeping,” posted by a TikTok user, sums up the melodramatic pull of AI’s image-generating capability more than anything.

TuSimple, a transportation company focusing on driverless tech for trucks, recently transported a load of products with its autonomous truck systems.


The road to fully autonomous trucks is a long and winding one, but it’s not impossible, and it seems to be in closer reach than fully self-driving cars.

Eighty percent of the journey, or 950 miles (1,528 km) of a roughly 1,190-mile route, was driven by TuSimple’s autonomous system, with a human at the wheel for the other 20 percent of the cross-country trip, ready to take over if anything went wrong with the technology.

These include aquatic drones that can be programmed to scoop up floating debris from the surface of rivers, and beach buggies that use artificial intelligence (AI) to search for and pick up litter.

Scientists are also hoping to scale up the use of magnetic nano-scale springs that hook on to microplastics and break them down.

MailOnline takes a closer look at some of the technologies currently being used to reduce the man-made debris in our oceans, and those that are still in development.

Teams of mobile robots could be highly effective in helping humans to complete strenuous manual tasks, such as manufacturing processes or the transportation of heavy objects. In recent years, some of these robots have already been tested and introduced in real-world settings, attaining very promising results.

Researchers at Northwestern University’s Center for Robotics and Biosystems have recently developed new collaborative mobile robots, dubbed Omnid Mocobots. These robots, introduced in a paper pre-published on arXiv, are designed to cooperate with each other and with humans to safely pick up, handle, and transport delicate and flexible payloads.

“The Center for Robotics and Biosystems has a long history building robots that collaborate physically with humans,” Matthew Elwin, one of the researchers who carried out the study, told TechXplore. “In fact, the term ‘cobots’ was coined here. The inspiration for the current work was manufacturing, warehouse, and construction tasks involving manipulating large, articulated, or flexible objects, where it is helpful to have several robots supporting the object.”



An artificial intelligence program asked to predict what “the last selfie ever taken” would look like resulted in several nightmarish images.

TikTok account Robot Overloards, which dedicates its page to providing viewers with “daily disturbing AI generated images,” uploaded a video on Sunday in which the AI system DALL-E was asked to predict what the last selfies on Earth would look like.

The images produced showed bloody, mutilated humans taking selfies amongst apocalyptic scenes. One “selfie” shows a skeleton-like man holding the camera for a selfie with dark hills on fire and smoke in the air behind him.

You’re in for a surprise.

Picture the ocean, impacted by climate change.

Rising sea levels, ocean acidification, melting ice sheets, flooded coastlines, and shrinking fish stocks: the image is largely negative. For the longest time, the ocean has been portrayed as a victim of climate change, and rightly so. Ulf Riebesell, Professor of Biological Oceanography at the GEOMAR Helmholtz Centre for Ocean Research Kiel, has studied the effects of global warming on the ocean for nearly 15 years, warning the scientific community about the impacts of climate change on ocean life and biogeochemical cycles. With countries aiming to achieve a climate-neutral world by mid-century, experts have decided to include the ocean in the effort to tackle climate change.

Many efforts have been made to image the spatiotemporal electrical activity of the brain with the purpose of mapping its function and dysfunction as well as aiding the management of brain disorders. Here, we propose a non-conventional deep learning–based source imaging framework (DeepSIF) that provides robust and precise spatiotemporal estimates of underlying brain dynamics from noninvasive high-density electroencephalography (EEG) recordings. DeepSIF employs synthetic training data generated by biophysical models capable of modeling mesoscale brain dynamics. The rich characteristics of underlying brain sources are embedded in the realistic training data and implicitly learned by DeepSIF networks, avoiding complications associated with explicitly formulating and tuning priors in an optimization problem, as often is the case in conventional source imaging approaches. The performance of DeepSIF is evaluated by 1) a series of numerical experiments, 2) imaging sensory and cognitive brain responses in a total of 20 healthy subjects from three public datasets, and 3) rigorously validating DeepSIF’s capability in identifying epileptogenic regions in a cohort of 20 drug-resistant epilepsy patients by comparing DeepSIF results with invasive measurements and surgical resection outcomes. DeepSIF demonstrates robust and excellent performance, producing results that are concordant with common neuroscience knowledge about sensory and cognitive information processing as well as clinical findings about the location and extent of the epileptogenic tissue and outperforming conventional source imaging methods. The DeepSIF method, as a data-driven imaging framework, enables efficient and effective high-resolution functional imaging of spatiotemporal brain dynamics, suggesting its wide applicability and value to neuroscience research and clinical applications.
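To make the training-on-synthetic-data idea concrete, here is a heavily simplified sketch in PyTorch of the general approach the abstract describes: a forward model maps simulated brain sources to sensor signals, and a network learns the inverse mapping from sensors back to sources. The random lead field, the focal single-source model, and the small feed-forward network below are toy stand-ins for the authors’ biophysical simulations and actual DeepSIF architecture, not their method.

```python
# Toy sketch of learning an EEG inverse mapping from synthetic data.
# The lead field, source model, and network are illustrative stand-ins.
import torch
import torch.nn as nn

n_sensors, n_sources = 64, 200
lead_field = torch.randn(n_sensors, n_sources)  # stand-in for a real head model

def synth_batch(batch=32):
    # Sparse, spatially focal "brain sources" projected to sensors with noise.
    src = torch.zeros(batch, n_sources)
    idx = torch.randint(n_sources, (batch,))
    src[torch.arange(batch), idx] = torch.rand(batch) + 0.5
    eeg = src @ lead_field.T + 0.05 * torch.randn(batch, n_sensors)
    return eeg, src

net = nn.Sequential(nn.Linear(n_sensors, 256), nn.ReLU(), nn.Linear(256, n_sources))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    eeg, src = synth_batch()
    loss = nn.functional.mse_loss(net(eeg), src)  # learn sensors -> sources
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, recorded EEG is passed through the trained network to estimate
# source activity, with no per-recording optimization or prior tuning.
```

The design point this illustrates is the one the abstract emphasizes: because the priors live in the synthetic training data rather than in an explicit optimization objective, the inverse problem is solved by a single forward pass through the network.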