In a lab at the University of Washington, robots are playing air hockey.

Or they’re solving Rubik’s Cubes, mastering chess or painting the next Mona Lisa with a single laser beam.

As the robots play, the researchers who built them are learning more about how they work, how they think and where they have room to grow, said Xu Chen, one of those researchers and an associate professor of mechanical engineering at UW.

The first experimental evidence to validate a newly published universal law that provides insights into the complex energy states of liquids has been reported.

Sending miniature robots deep inside the human skull to treat brain disorders has long been the stuff of science fiction—but it could soon become reality, according to a California start-up.

Neuroscientists from St. Petersburg University, led by Professor Allan V. Kalueff, in collaboration with an international team of IT specialists, have become the first in the world to apply artificial intelligence (AI) algorithms to phenotype zebrafish responses to psychoactive drugs. They trained the AI to determine, from the fish's behavioral response, which psychotropic agents were used in the experiment.
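As a purely illustrative sketch of that idea (the behavioral features, drug classes, and numbers below are invented for the example, not taken from the study), classifying drug exposure from behavioral endpoints can be mocked up with a simple nearest-centroid classifier:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical behavioral endpoints per trial (names are illustrative, not
# the study's actual feature set): distance traveled (m), fraction of time
# in the top half of the tank, and number of freezing bouts.
drugs = ["control", "anxiolytic", "stimulant"]
centers = np.array([[5.0, 0.3, 2.0],    # control
                    [4.0, 0.7, 1.0],    # anxiolytic: more time near surface
                    [9.0, 0.4, 0.5]])   # stimulant: much more locomotion

# Simulate labeled trials scattered around each drug's typical profile.
X = np.vstack([c + 0.3 * rng.standard_normal((50, 3)) for c in centers])
y = np.repeat(np.arange(3), 50)

# Nearest-centroid rule: a minimal stand-in for the study's trained AI.
centroids = np.array([X[y == k].mean(axis=0) for k in range(3)])

def predict(features):
    distances = np.linalg.norm(centroids - features, axis=1)
    return drugs[int(np.argmin(distances))]

print(predict([8.8, 0.4, 0.6]))  # profile closest to the stimulant centroid
```

The real system would use far richer behavioral traces and a trained model, but the core task is the same: map measured behavior to the most likely drug class.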

The research findings are published in the journal Progress in Neuro-Psychopharmacology and Biological Psychiatry.

The zebrafish (Danio rerio) is a freshwater bony fish that is presently the second most widely used model organism in biomedical research, after mice. The advantages of using zebrafish as a model biological system are numerous, including low maintenance costs and high genetic and physiological similarity to humans: zebrafish share about 70% of their genes with us. Furthermore, the simplicity of the zebrafish nervous system enables researchers to obtain more explicit and accurate results than studies with more complex organisms.

Can robots adapt their own working methods to solve complex tasks? Researchers at Chalmers University of Technology, Sweden, have developed a new form of AI, which, by observing human behavior, can adapt to perform its tasks in a changeable environment. The hope is that robots that can be flexible in this way will be able to work alongside humans to a much greater degree.

“Robots that work in human environments need to be adaptable to the fact that humans are unique, and that we might all solve the same task in a different way. An important area in development, therefore, is to teach robots how to work alongside humans in dynamic environments,” says Maximilian Diehl, Doctoral Student at the Department of Electrical Engineering at Chalmers University of Technology and main researcher behind the project.

When humans carry out a simple task, such as setting a table, we might approach the challenge in several different ways, depending on the conditions. If a chair unexpectedly stands in the way, we could choose to move it or walk around it. We alternate between using our right and left hands, we take pauses, and perform any number of unplanned actions.

In 2020, OpenAI introduced GPT-3 and, a year later, DALL·E, a 12-billion-parameter model built on GPT-3. DALL·E was trained to generate images from text descriptions, and the latest release, DALL·E 2, generates even more realistic and accurate images with 4x greater resolution. The model takes natural-language captions and uses a dataset of text–image pairings to create realistic images. Additionally, it can take an image and create different variations inspired by the original.

DALL·E leverages a 'diffusion' process to learn the relationship between images and text descriptions. In diffusion, the model starts with a pattern of random dots and gradually alters that pattern towards an image as it recognizes aspects of the description. Diffusion models have emerged as a promising generative modelling framework and push the state of the art in image and video generation. A guidance technique is used during diffusion to improve sample fidelity and photorealism. DALL·E is made up of two major parts: a discrete autoencoder that accurately represents images in a compressed latent space, and a transformer that learns the correlations between language and this discrete image representation. Evaluators were asked to compare 1,000 image generations from each model, and DALL·E 2 was preferred over DALL·E 1 for its caption matching and photorealism.
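The iterative idea can be sketched in a toy form. Here the "denoiser" is a hand-written stand-in that nudges the sample toward a fixed target, whereas in a real diffusion model that role is played by a neural network conditioned on the text caption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "the image the caption describes" (purely illustrative).
target = rng.random((8, 8))

def denoise_step(x, t, total_steps):
    # Toy analogue of predicting and removing a little noise at step t:
    # blend the current noisy sample slightly toward the target.
    alpha = 1.0 / (total_steps - t)
    return (1 - alpha) * x + alpha * target

steps = 50
x = rng.standard_normal((8, 8))      # start from pure random noise
for t in range(steps):
    x = denoise_step(x, t, steps)

print(float(np.abs(x - target).mean()))  # → 0.0 (sample converged to target)
```

The essential point the toy preserves is that generation is many small denoising steps, not a single forward pass from noise to image.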

DALL·E is currently only a research project and is not available in OpenAI's API.

A team of researchers at Stanford University, working with a colleague at the Chinese Academy of Sciences, has built an AI-based filtration system to remove noise from seismic sensor data in urban areas. In their paper published in the journal Science Advances, the group describes training their application and testing it against real data from a prior seismic event.

In order to provide advance warning when an earthquake is detected, scientists have placed seismometers in earthquake-prone areas, including places where quakes do the most damage and harm or kill the most people. But seismologists have found it troublesome to sort data related to natural ground movements from data related to city life. They note that human activities in cities, such as vehicle and train traffic, produce a lot of seismic noise. In this new effort, the researchers developed a deep-learning application that determines which seismic data is natural and which is man-made, and filters out the non-natural signals.
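A minimal sketch of the separate-and-filter idea, assuming (purely for illustration) that urban noise sits at higher frequencies than the earthquake signal; the actual system uses a trained deep network rather than a fixed low-pass filter:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                          # sample rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)

# Toy ground-motion record: a low-frequency "earthquake" buried in
# higher-frequency, traffic-like anthropogenic noise.
quake = np.sin(2 * np.pi * 1.0 * t)
urban = 0.8 * rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 20.0 * t)
record = quake + urban

# Crude stand-in for the learned denoiser: keep only components below 5 Hz.
spectrum = np.fft.rfft(record)
freqs = np.fft.rfftfreq(record.size, d=1 / fs)
spectrum[freqs > 5.0] = 0.0
denoised = np.fft.irfft(spectrum, n=record.size)

err_before = np.sqrt(np.mean((record - quake) ** 2))
err_after = np.sqrt(np.mean((denoised - quake) ** 2))
print(err_after < err_before)  # True: filtered record is closer to the quake
```

Real urban noise overlaps the earthquake band far more than this, which is why a learned denoiser is needed instead of a fixed frequency cutoff.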

The researchers call their new application UrbanDenoiser. It was built using deep learning and trained on 80,000 samples of urban seismic noise along with 33,751 samples of recorded natural seismic activity. The team applied their filtering system to seismic data recorded in Long Beach, California, to see how well it worked. They found it improved the level of desired signals relative to background noise by approximately 15 decibels. Satisfied with the results, they used UrbanDenoiser to analyze data from an earthquake that struck a nearby area in 2014, finding that the application was able to detect four times as much seismic activity as the unfiltered sensor data.
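For context, a 15-decibel gain in signal-to-noise ratio corresponds to roughly a 31.6-fold improvement in signal power relative to noise, since decibels for power quantities use 10·log10. A quick sketch of the conversion (the example powers are invented to illustrate the arithmetic):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels (power quantities use 10*log10)."""
    return 10 * math.log10(signal_power / noise_power)

# Hypothetical before/after powers illustrating a 15 dB improvement:
before = snr_db(signal_power=2.0, noise_power=1.0)            # ~3 dB
after = snr_db(signal_power=2.0, noise_power=1.0 / 10**1.5)   # noise cut ~31.6x

print(round(after - before, 1))   # 15.0 dB improvement
print(round(10 ** (15 / 10), 1))  # 31.6x power ratio
```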