
These include aquatic drones that can be programmed to scoop up floating debris from the surface of rivers, and beach buggies that use artificial intelligence (AI) to search for and pick up litter.

Scientists are also hoping to scale up the use of magnetic nano-scale springs that hook on to microplastics and break them down.

MailOnline takes a closer look at some of the technologies currently being used to reduce the man-made debris in our oceans, and those that are still in development.

Teams of mobile robots could be highly effective in helping humans to complete straining manual tasks, such as manufacturing processes or the transportation of heavy objects. In recent years, some of these robots have already been tested and introduced in real-world settings, attaining very promising results.

Researchers at Northwestern University’s Center for Robotics and Biosystems have recently developed new collaborative mobile robots, dubbed Omnid Mocobots. These robots, introduced in a paper pre-published on arXiv, are designed to cooperate with each other and with humans to safely pick up, handle, and transport delicate and flexible payloads.

“The Center for Robotics and Biosystems has a long history building robots that collaborate physically with humans,” Matthew Elwin, one of the researchers who carried out the study, told TechXplore. “In fact, the term ‘cobots’ was coined here. The inspiration for the current work was manufacturing, warehouse, and construction tasks involving manipulating large, articulated, or flexible objects, where it is helpful to have several robots supporting the object.”

An artificial intelligence program asked to predict what “the last selfie ever taken” would look like resulted in several nightmarish images.

TikTok account Robot Overloards, which dedicates its page to providing viewers with “daily disturbing AI generated images,” uploaded a video on Sunday where the AI DALL-E was asked to predict what the last selfies on Earth would look like.

The images produced showed bloody, mutilated humans taking selfies amongst apocalyptic scenes. One “selfie” shows a skeleton-like man holding the camera for a selfie with dark hills on fire and smoke in the air behind him.

You’re in for a surprise.

Picture the ocean, impacted by climate change.

Rising sea levels, ocean acidification, melting ice sheets, flooded coastlines, and shrinking fish stocks — the image is largely negative. For the longest time, the ocean has been portrayed as a victim of climate change, and rightly so. Ulf Riebesell, Professor of Biological Oceanography at the Geomar Helmholtz Centre for Ocean Research Kiel, has studied the effects of global warming on the ocean for nearly 15 years, warning the scientific community about the impacts of climate change on ocean life and biochemical cycles. Now, with countries aiming to achieve a climate-neutral world by mid-century, experts have decided to enlist the ocean in the effort to tackle climate change.

Many efforts have been made to image the spatiotemporal electrical activity of the brain with the purpose of mapping its function and dysfunction as well as aiding the management of brain disorders. Here, we propose a non-conventional deep learning–based source imaging framework (DeepSIF) that provides robust and precise spatiotemporal estimates of underlying brain dynamics from noninvasive high-density electroencephalography (EEG) recordings. DeepSIF employs synthetic training data generated by biophysical models capable of modeling mesoscale brain dynamics. The rich characteristics of underlying brain sources are embedded in the realistic training data and implicitly learned by DeepSIF networks, avoiding complications associated with explicitly formulating and tuning priors in an optimization problem, as often is the case in conventional source imaging approaches. The performance of DeepSIF is evaluated by 1) a series of numerical experiments, 2) imaging sensory and cognitive brain responses in a total of 20 healthy subjects from three public datasets, and 3) rigorously validating DeepSIF’s capability in identifying epileptogenic regions in a cohort of 20 drug-resistant epilepsy patients by comparing DeepSIF results with invasive measurements and surgical resection outcomes. DeepSIF demonstrates robust and excellent performance, producing results that are concordant with common neuroscience knowledge about sensory and cognitive information processing as well as clinical findings about the location and extent of the epileptogenic tissue and outperforming conventional source imaging methods. The DeepSIF method, as a data-driven imaging framework, enables efficient and effective high-resolution functional imaging of spatiotemporal brain dynamics, suggesting its wide applicability and value to neuroscience research and clinical applications.
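The core idea behind DeepSIF, as described above, is to learn an inverse mapping from synthetic data: simulate source activity with a forward model, generate the corresponding sensor recordings, and fit a data-driven estimator on those pairs rather than hand-tuning priors. The toy sketch below illustrates only that train-on-simulations idea, with assumptions labeled throughout: the leadfield is random (a real one comes from a head model), the sources are crude single-dipole patterns rather than biophysical simulations, and a ridge-regression inverse stands in for DeepSIF's deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_channels, n_train = 40, 16, 2000

# Assumed forward model: a leadfield matrix mapping source activity to
# scalp EEG (a real leadfield comes from a head model, not random numbers).
leadfield = rng.normal(size=(n_channels, n_sources))

# Synthetic training set: focal, single-source activity patterns, a crude
# stand-in for the biophysical simulations DeepSIF actually trains on.
sources = np.zeros((n_train, n_sources))
active = rng.integers(0, n_sources, size=n_train)
sources[np.arange(n_train), active] = rng.normal(1.0, 0.1, size=n_train)
eeg = sources @ leadfield.T + 0.01 * rng.normal(size=(n_train, n_channels))

# Learn an inverse mapping from the synthetic pairs. DeepSIF fits a deep
# network; ridge regression here shows the same principle in miniature.
lam = 1e-2
W = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_channels), eeg.T @ sources)

# Apply the learned inverse to a new, held-out synthetic recording.
true_src = np.zeros(n_sources)
true_src[7] = 1.0
obs = leadfield @ true_src + 0.01 * rng.normal(size=n_channels)
est = obs @ W  # estimated source distribution, ideally peaking near index 7
```

The appeal of this framing, which the abstract emphasizes, is that the characteristics of plausible sources are baked into the training data, so no explicit regularization term has to be formulated and tuned at inference time.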

AI-powered predictions of the three-dimensional structures of nearly all cataloged proteins known to science have been made by DeepMind and EMBL’s European Bioinformatics Institute (EMBL-EBI). The catalog is freely and openly available to the scientific community, via the AlphaFold Protein Structure Database.

The two organizations hope the expanded database will continue to increase our understanding of biology, helping countless more scientists in their work as they strive to tackle global challenges.

This major milestone marks the database being expanded by approximately 200 times. It has grown from nearly 1 million protein structures to over 200 million, and now covers almost every organism on Earth that has had its genome sequenced. Predicted structures for a wide range of species, including plants, bacteria, animals, and other organisms are now included in the expanded database. This opens up new avenues of research across the life sciences that will have an impact on global challenges, including sustainability, food insecurity, and neglected diseases.
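Because the database is openly available, individual predicted structures can be retrieved per UniProt accession. The snippet below sketches how one might construct a download URL for an entry; the file-naming pattern and model version shown are assumptions based on the public database layout at the time of writing and may change.

```python
def alphafold_pdb_url(uniprot_accession: str, version: int = 4) -> str:
    """Build the download URL for an AlphaFold DB predicted structure.

    The URL pattern and model version are assumptions based on the public
    AlphaFold Protein Structure Database layout and may change over time.
    """
    return (
        "https://alphafold.ebi.ac.uk/files/"
        f"AF-{uniprot_accession}-F1-model_v{version}.pdb"
    )

# Example: UniProt accession P68871 (human hemoglobin subunit beta).
url = alphafold_pdb_url("P68871")
```

For bulk work, the organizations also provide downloadable archives of predictions grouped by species, which avoids fetching millions of files one at a time.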

For now, the acrylic table is under construction and open only to the stuffed mouse, originally a cat toy, used to help set up the cameras. The toy squeaks when Kennedy presses it. “Usually, you do a surgery to remove the squeaker” before using them to set up experiments, says Kennedy, assistant professor of neuroscience at Northwestern University in Chicago, Illinois.

The playful squeak is a startling sound in a lab that is otherwise defined by the quiet of computational modeling. Among her projects, Kennedy is expanding her work with an artificial-intelligence-driven tool called the Mouse Action Recognition System (MARS) that can automatically classify mouse social behaviors. She also uses her modeling work to study how different brain areas and cell types interact with one another, and to connect neural activity with behaviors to learn how the brain integrates sensory information. In her office on the fifth floor of Northwestern’s Ward Building in downtown Chicago, most of this work happens on computers with data, code and graphs. Quiet also prevails in a room down the hall, where Kennedy’s small group of postdoctoral researchers and technicians sit at workstations in a lab that she launched less than a year and a half ago.

Kennedy’s ability to talk about abstract concepts, with a little stuffed animal as a prop, sets her apart, her colleagues say. She is a rare theoretical neuroscientist who can translate her mathematical work into real-world experiments. “That is her gift,” says Larry Abbott, a theoretical neuroscientist at Columbia University who was Kennedy’s graduate school advisor. “She’s good at the technical stuff, but if you can’t make that reach across to the data and the experiments, a person is not going to be that effective. She’s really just great at that — finding the right mathematics to apply to the particular problem that she’s looking at.”

By Natasha Vita-More.

As of 2019, has the concept of the technological singularity changed since the late 1990s?

As a theoretical concept, it has become more widely recognized. As a potential threat, it is written and talked about extensively. Because the field of narrow AI is growing, machine learning has found a place in academia, and entrepreneurs are investing in the growth of AI, tech leaders have come to the table and voiced their concerns, most notably Bill Gates, Elon Musk, and the late Stephen Hawking. The concept of existential risk has taken a central position in discussions about AI, and machine ethicists are preparing their arguments toward a consensus that near-future robots will force us to rethink the exponential advances in robotics and computer science. Here it is crucial for leaders in philosophy and ethics to address what an ethical machine means and what the true goal of machine ethics is.