
AI Imagines the Last Selfies on Earth in Grisly Yet Stunningly Delightful Frames

Artificial intelligence predicts that selfies would persist to the very end of the Earth: ghoulish humans clutching their mobiles as an apocalyptic event destroys every sign of life. The scenario is hypothetical and difficult to imagine, but Midjourney, an independent AI image generator, conjured a few such frames, revealing just how scary they can be. Shared by the TikTok account @Robot Overloards, the images are hellish in tone and gory in substance, depicting disfigured human beings with eyes as big as rat holes and fingers long enough to scoop curdled blood from creatures of another world. These AI-generated frames go beyond a mere portrayal of annihilation: for one, they are cut off from reality, and for another, there are very few of them. The actual end of the world is billions of years away, by which time the selfie would be a fossilized concept and humans the biological ancestors of cyborgs.

The pictures are stunning, though, in the sense that one frame includes huge explosions going off in the background while a man stares maniacally into the camera. The imaginative spark of artificial intelligence deserves real appreciation here; perhaps it took a hint or two from images of people snapping selfies against the backdrop of accidents and natural calamities for use as clickbait. Image generators, apparently, give users the power to visualize their imagination, however far removed from reality. Netizens are finding the results pleasantly captivating, so much so that one wondered whether they came out of Nibiru or Planet X theories. That one TikTok video has drawn more than 12.7 million views, and the reply "OK no more sleeping," posted by a TikTok user, summarises, more than anything, the melodramatic superficiality of AI's image-generating capability.

Self-Driving Truck Completes 950-Mile Trip 10 Hours Faster Than Human Driver

TuSimple, a transportation company focusing on driverless tech for trucks, recently transported a load of products with its autonomous truck systems.


The road to fully autonomous trucks is long and winding, but not impossible, and the destination seems within closer reach than fully self-driving cars.

The company in charge of the feat was TuSimple. Eighty percent of the journey, or 950 miles (1,528 km), was driven by the autonomous system, with a human at the wheel for the other 20 percent of the cross-country trip, at the ready to take over if the technology faltered.
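That division of labor amounts to a simple supervisory rule: the human can always claim the wheel, and any detected fault immediately hands control back. Below is a minimal, purely illustrative Python sketch of such a rule; it is not TuSimple's software, and every name in it is hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    MANUAL = auto()

def supervise(system_healthy: bool, driver_requests_control: bool) -> Mode:
    """Decide who drives for the next control cycle: the human can always
    take the wheel, and any fault hands control straight back to them."""
    if driver_requests_control or not system_healthy:
        return Mode.MANUAL
    return Mode.AUTONOMOUS

# Example: a fault detected mid-route forces a handover to the human driver.
assert supervise(system_healthy=False, driver_requests_control=False) is Mode.MANUAL
```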

MailOnline takes a look at technologies to remove plastic from the oceans

These include aquatic drones that can be programmed to scoop up floating debris from the surface of rivers, and beach buggies that use artificial intelligence (AI) to search for and pick up litter.

Scientists are also hoping to scale up the use of magnetic nano-scale springs that hook on to microplastics and break them down.

MailOnline takes a closer look at some of the technologies currently being used to reduce the man-made debris in our oceans, and those that are still in development.
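To make the AI-driven litter buggies concrete: such a system reduces to a perceive-then-collect loop over a detector's output. The sketch below is a guess at the general shape, not any vendor's actual code; the detector and robot interfaces are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "plastic_bottle"
    confidence: float  # detector score in [0, 1]
    x: float           # estimated position on the beach, metres
    y: float

def detect_litter(camera_frame) -> List[Detection]:
    """Stand-in for a trained object detector (e.g. a CNN run on each frame)."""
    return [Detection("plastic_bottle", 0.93, x=2.1, y=-0.4)]

def collection_loop(camera, robot, min_confidence: float = 0.8) -> None:
    """Drive to each confidently detected piece of litter and pick it up."""
    for det in detect_litter(camera.read()):
        if det.confidence >= min_confidence:
            robot.drive_to(det.x, det.y)
            robot.pick_up()
```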

The Omnid Mocobots: New mobile robots for safe and effective collaboration

Teams of mobile robots could be highly effective in helping humans to complete strenuous manual tasks, such as manufacturing processes or the transportation of heavy objects. In recent years, some of these robots have already been tested and introduced in real-world settings, attaining very promising results.

Researchers at Northwestern University’s Center for Robotics and Biosystems have recently developed new collaborative mobile robots, dubbed Omnid Mocobots. These robots, introduced in a paper pre-published on arXiv, are designed to cooperate with each other and with humans to safely pick up, handle, and transport delicate and flexible payloads.

“The Center for Robotics and Biosystems has a long history building robots that collaborate physically with humans,” Matthew Elwin, one of the researchers who carried out the study, told TechXplore. “In fact, the term ‘cobots’ was coined here. The inspiration for the current work was manufacturing, warehouse, and construction tasks involving manipulating large, articulated, or flexible objects, where it is helpful to have several robots supporting the object.”
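The "safe physical collaboration" part is usually some flavor of force control: the robot senses the force a human applies through the payload and renders it as motion, so the object feels nearly weightless. Here is a minimal one-dimensional admittance-control sketch in that spirit; the virtual mass and damping values are arbitrary, and this is not the Omnids' actual controller.

```python
def admittance_step(f_applied: float, v: float, dt: float,
                    m_virtual: float = 5.0, b_virtual: float = 20.0) -> float:
    """One control step of  m*dv/dt + b*v = f : the sensed human force
    f_applied (N) drives a light virtual mass-damper, and the returned
    velocity (m/s) is what the robot commands its platform to follow."""
    dv = (f_applied - b_virtual * v) / m_virtual
    return v + dv * dt

# A steady 10 N push settles the payload toward f/b = 0.5 m/s.
v = 0.0
for _ in range(200):
    v = admittance_step(10.0, v, dt=0.01)
print(round(v, 3))  # ~0.5
```

Tuning m_virtual and b_virtual down makes the payload feel lighter and more responsive; tuning them up makes it steadier, which is the basic safety trade-off in this kind of controller.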

AI’s predictions of what the last selfies on Earth would look like are total nightmare fuel

🤔 I certainly hope not!


An artificial intelligence program asked to predict what “the last selfie ever taken” would look like resulted in several nightmarish images.

TikTok account Robot Overloards, which dedicates its page to providing viewers with “daily disturbing AI generated images,” uploaded a video on Sunday where the AI DALL-E was asked to predict what the last selfies on Earth would look like.

The images produced show bloody, mutilated humans taking selfies amid apocalyptic scenes. One “selfie” shows a skeleton-like man holding the camera while dark hills burn and smoke fills the air behind him.
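For anyone curious how such images get made, prompting an image model is a single API call. A minimal sketch using OpenAI's current Python SDK follows; the prompt mirrors the one described in the video, but the model choice and parameters here are assumptions, not a record of what the account actually ran.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-2",                     # assumed; pick any available image model
    prompt="the last selfie ever taken",  # the prompt described in the video
    n=1,
    size="1024x1024",
)
print(response.data[0].url)  # link to the generated image
```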

A new robotic submersible could unlock the mysteries of Greenland’s underwater glaciers

You’re in for a surprise.

Picture the ocean, impacted by climate change.

Rising sea levels, ocean acidification, melting ice sheets, flooded coastlines, and shrinking fish stocks: the image is largely negative. For the longest time, the ocean has been portrayed as a victim of climate change, and rightly so. Ulf Riebesell, Professor of Biological Oceanography at the GEOMAR Helmholtz Centre for Ocean Research Kiel, has studied the effects of global warming on the ocean for nearly 15 years, warning the scientific community about the impacts of climate change on ocean life and biogeochemical cycles. With countries aiming to achieve a climate-neutral world by mid-century, experts have decided to include the ocean in efforts to tackle climate change.

Deep neural networks constrained by neural mass models improve electrophysiological source imaging of spatiotemporal brain dynamics

Many efforts have been made to image the spatiotemporal electrical activity of the brain with the purpose of mapping its function and dysfunction as well as aiding the management of brain disorders. Here, we propose a non-conventional deep learning–based source imaging framework (DeepSIF) that provides robust and precise spatiotemporal estimates of underlying brain dynamics from noninvasive high-density electroencephalography (EEG) recordings. DeepSIF employs synthetic training data generated by biophysical models capable of modeling mesoscale brain dynamics. The rich characteristics of underlying brain sources are embedded in the realistic training data and implicitly learned by DeepSIF networks, avoiding complications associated with explicitly formulating and tuning priors in an optimization problem, as often is the case in conventional source imaging approaches. The performance of DeepSIF is evaluated by 1) a series of numerical experiments, 2) imaging sensory and cognitive brain responses in a total of 20 healthy subjects from three public datasets, and 3) rigorously validating DeepSIF’s capability in identifying epileptogenic regions in a cohort of 20 drug-resistant epilepsy patients by comparing DeepSIF results with invasive measurements and surgical resection outcomes. DeepSIF demonstrates robust and excellent performance, producing results that are concordant with common neuroscience knowledge about sensory and cognitive information processing as well as clinical findings about the location and extent of the epileptogenic tissue and outperforming conventional source imaging methods. The DeepSIF method, as a data-driven imaging framework, enables efficient and effective high-resolution functional imaging of spatiotemporal brain dynamics, suggesting its wide applicability and value to neuroscience research and clinical applications.
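The core recipe (simulate source activity with a biophysical model, project it to the sensors through a forward head model, and train a network to invert that mapping) can be sketched compactly. The PyTorch snippet below is illustrative only: the shapes, the architecture, and the random stand-in for the neural mass simulator are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

N_SENSORS, N_SOURCES, T = 64, 994, 100  # EEG channels, source regions, time points

class SourceImagingNet(nn.Module):
    """Illustrative DeepSIF-style mapping: EEG time series -> source time series."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(N_SENSORS, 256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, N_SOURCES)

    def forward(self, eeg):      # eeg: (batch, T, N_SENSORS)
        h, _ = self.rnn(eeg)
        return self.head(h)      # (batch, T, N_SOURCES)

def synth_batch(leadfield, batch=8):
    """Stand-in for the biophysical simulator: random source activity projected
    to the sensors through the forward model, plus measurement noise."""
    src = torch.randn(batch, T, N_SOURCES)
    eeg = src @ leadfield.T + 0.1 * torch.randn(batch, T, N_SENSORS)
    return eeg, src

leadfield = torch.randn(N_SENSORS, N_SOURCES)  # forward head model, assumed known
net = SourceImagingNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3):  # tiny demo loop; real training runs far longer
    eeg, src = synth_batch(leadfield)
    loss = nn.functional.mse_loss(net(eeg), src)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point the abstract makes is visible in the sketch: all prior knowledge lives in the simulator that produces the training pairs, so no explicit regularization term has to be formulated or tuned at inference time.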

DeepMind AI Powers Major Scientific Breakthrough: AlphaFold Generates 3D View of the Protein Universe

DeepMind and EMBL’s European Bioinformatics Institute (EMBL-EBI) have released AI-powered predictions of the three-dimensional structures of nearly all cataloged proteins known to science. The catalog is freely and openly available to the scientific community via the AlphaFold Protein Structure Database.

The two organizations hope the expanded database will continue to increase our understanding of biology, helping countless more scientists in their work as they strive to tackle global challenges.

This major milestone marks the database being expanded by approximately 200 times. It has grown from nearly 1 million protein structures to over 200 million, and now covers almost every organism on Earth that has had its genome sequenced. Predicted structures for a wide range of species, including plants, bacteria, animals, and other organisms are now included in the expanded database. This opens up new avenues of research across the life sciences that will have an impact on global challenges, including sustainability, food insecurity, and neglected diseases.
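Individual predictions can also be pulled programmatically. Below is a minimal sketch, assuming the file-naming pattern used by the AlphaFold DB download service; the accession (P69905, human hemoglobin subunit alpha) and model version are just examples.

```python
import requests

def fetch_alphafold_pdb(uniprot_accession: str, version: int = 4) -> str:
    """Download a predicted structure (PDB format) from the AlphaFold DB."""
    url = (f"https://alphafold.ebi.ac.uk/files/"
           f"AF-{uniprot_accession}-F1-model_v{version}.pdb")
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text

pdb_text = fetch_alphafold_pdb("P69905")
print(pdb_text.splitlines()[0])  # first PDB header line
```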

Rising star: Ann Kennedy bridges gap between biology, computational theory

For now, the acrylic table is under construction and open only to the stuffed mouse, originally a cat toy, used to help set up the cameras. The toy squeaks when Kennedy presses it. “Usually, you do a surgery to remove the squeaker” before using them to set up experiments, says Kennedy, assistant professor of neuroscience at Northwestern University in Chicago, Illinois.

The playful squeak is a startling sound in a lab that is otherwise defined by the quiet of computational modeling. Among her projects, Kennedy is expanding her work with an artificial-intelligence-driven tool called the Mouse Action Recognition System (MARS) that can automatically classify mouse social behaviors. She also uses her modeling work to study how different brain areas and cell types interact with one another, and to connect neural activity with behaviors to learn how the brain integrates sensory information. In her office on the fifth floor of Northwestern’s Ward Building in downtown Chicago, most of this work happens on computers with data, code and graphs. Quiet also prevails in a room down the hall, where Kennedy’s small group of postdoctoral researchers and technicians sit at workstations in a lab that she launched less than a year and a half ago.
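Pipelines like MARS typically reduce tracked pose keypoints to per-frame features (distances, speeds, angles between the animals) and hand them to a supervised classifier. The toy sketch below shows that shape of the problem only; the features, labels, and classifier choice are illustrative assumptions, not MARS's actual design.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy stand-in for pose-derived features per video frame,
# e.g. inter-mouse distance, relative speed, head-body angle.
X = rng.normal(size=(1000, 3))
# Toy behavior labels: 0 = other, 1 = attack, 2 = mount, 3 = investigation
y = rng.integers(0, 4, size=1000)

clf = GradientBoostingClassifier().fit(X[:800], y[:800])
pred = clf.predict(X[800:])  # per-frame behavior labels for held-out frames
print(pred[:10])
```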

Kennedy’s ability to talk about abstract concepts, with a little stuffed animal as a prop, sets her apart, her colleagues say. She is a rare theoretical neuroscientist who can translate her mathematical work into real-world experiments. “That is her gift,” says Larry Abbott, a theoretical neuroscientist at Columbia University who was Kennedy’s graduate school advisor. “She’s good at the technical stuff, but if you can’t make that reach across to the data and the experiments, a person is not going to be that effective. She’s really just great at that — finding the right mathematics to apply to the particular problem that she’s looking at.”