Michael Levin, a developmental biologist at Tufts University, has a knack for taking an unassuming organism and showing it’s capable of the darnedest things. He and his team once extracted skin cells from a frog embryo and cultivated them on their own. With no other cell types around, they were not “bullied,” as he put it, into forming skin tissue. Instead, they reassembled into a new organism of sorts, a “xenobot,” a coinage based on the Latin name of the frog species, Xenopus laevis. It zipped around like a paramecium in pond water. Sometimes it swept up loose skin cells and piled them until they formed their own xenobot—a type of self-replication. For Levin, it demonstrated how all living things have latent abilities. Having evolved to do one thing, they might do something completely different under the right circumstances.

Not long ago I met Levin at a workshop on science, technology, and Buddhism in Kathmandu. He hates flying but said this event was worth it. Even without the backdrop of the Himalayas, his scientific talk was one of the most captivating I’ve ever heard. Every slide introduced some bizarre new experiment. Butterflies retain memories from when they were caterpillars, even though their brains turn to mush in the chrysalis. Cut off the head and tail of a planarian, or flatworm, and it can grow two new heads; if you amputate again, the worm will regrow both heads. Levin argues the worm stores the new shape in its body as an electrical pattern. In fact, he thinks electrical signaling is pervasive in nature; it is not limited to neurons. Recently, Levin and colleagues suggested that some diseases might be treated by retraining gene and protein networks, much as one trains a neural network.

Generative AI art service Midjourney has added two new features that make images “weirder” and more panoramic. Midjourney has been rapidly spitting out features over the past few weeks, with the service expected to make the jump to version 6 before the end of July. In the meantime, customers have been given a selection of creative new toys to play with in version 5.2.

New features tell the Midjourney AI to make images weirder or to pan the camera in different directions.

In this informative and engaging video, we explore the fascinating world of quantum computing and its untapped potential. We delve into the challenges of building quantum computers and how artificial intelligence can help us overcome them.

Whether you’re an AI or quantum computing enthusiast, or simply curious about the future of technology, this video is a must-watch. Join us as we unlock the potential of quantum computing with AI and discover the limitless possibilities that lie ahead.

Priyanjali Gupta, an engineering student at Vellore Institute of Technology (VIT) in Tamil Nadu, has created an AI model that can translate American Sign Language into English in real time. A third-year computer science student specialising in data science, Gupta built the model with the TensorFlow Object Detection API; it recognises signs using transfer learning from a pre-trained model named ssd_mobilenet.

Gupta shared her creation on LinkedIn with a demo video showing the model’s capabilities. According to her GitHub post, “The dataset is made manually by running the Image Collection Python file that collects images from your webcam for all the mentioned below signs in the American Sign Language: Hello, I Love You, Thank you, Please, Yes and No.” She demonstrates the same signs in her demo clip.
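Her GitHub post describes the data-collection step: a Python script grabs webcam frames for each of the six signs. A minimal sketch of what such a collection script typically looks like, using OpenCV (the directory name, image counts, and delays here are illustrative assumptions, not Gupta’s actual code):

```python
# Minimal sketch of a webcam image-collection script for a small
# sign-language dataset, in the spirit of the setup described above.
# Label names, counts, and delays are assumptions, not Gupta's code.
import os
import time
import uuid

import cv2  # pip install opencv-python

LABELS = ["hello", "i_love_you", "thank_you", "please", "yes", "no"]
IMAGES_PER_LABEL = 15
OUTPUT_DIR = "collected_images"  # hypothetical output directory

cap = cv2.VideoCapture(0)  # open the default webcam
for label in LABELS:
    os.makedirs(os.path.join(OUTPUT_DIR, label), exist_ok=True)
    print(f"Collecting images for '{label}' -- get ready...")
    time.sleep(5)  # give the signer time to pose
    for _ in range(IMAGES_PER_LABEL):
        ok, frame = cap.read()
        if not ok:
            continue  # skip frames the camera failed to deliver
        path = os.path.join(OUTPUT_DIR, label, f"{uuid.uuid4()}.jpg")
        cv2.imwrite(path, frame)  # save one frame per capture
        time.sleep(2)  # pause so poses vary between frames
cap.release()
```

Frames gathered this way would then be annotated with bounding boxes and used to fine-tune a pre-trained SSD-MobileNet checkpoint through the TensorFlow Object Detection API, which is the transfer-learning step the blurb describes.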

New work from Carnegie Mellon University has enabled robots to learn household chores by watching videos of people performing everyday tasks in their homes.

The research could help improve the utility of robots in the home, allowing them to assist people with tasks like cooking and cleaning. Two robots successfully learned 12 tasks including opening a drawer, oven door and lid; taking a pot off the stove; and picking up a telephone, vegetable or can of soup.

“The robot can learn where and how humans interact with different objects through watching videos,” said Deepak Pathak, an assistant professor in the Robotics Institute at CMU’s School of Computer Science. “From this knowledge, we can train a model that enables two robots to complete similar tasks in varied environments.”
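As described, the approach mines human videos for affordances: where people make contact with an object and how the hand moves afterward, which a robot can then reproduce in its own workspace. A minimal sketch of that hand-off, where every name and interface is a hypothetical stand-in rather than CMU’s actual code:

```python
# Toy sketch of the affordance idea described above: from human video,
# predict (1) where to make contact with an object and (2) how to move
# after contact, then hand that to a robot controller. All names here
# are hypothetical stand-ins, not CMU's actual code.
from dataclasses import dataclass

@dataclass
class Affordance:
    contact_point: tuple   # (x, y, z) grasp location in the robot's frame
    post_contact_path: list  # waypoints to follow after grasping

def predict_affordance(video_frames) -> Affordance:
    # Stand-in for a learned model that watches human video and outputs
    # contact points and post-contact hand trajectories. Here we return
    # a fixed "pull the drawer handle straight out" motion.
    return Affordance(
        contact_point=(0.50, 0.00, 0.30),
        post_contact_path=[(0.45, 0.00, 0.30), (0.40, 0.00, 0.30)],
    )

def execute(robot, aff: Affordance) -> None:
    # Move to the predicted contact point, grasp, then follow the
    # post-contact trajectory (e.g., the line of a drawer pull).
    robot.move_to(aff.contact_point)
    robot.close_gripper()
    for waypoint in aff.post_contact_path:
        robot.move_to(waypoint)
```

One appeal of this factorization, as the quote suggests, is that contact points and post-contact motions can transfer to a robot in varied environments more readily than full human poses would.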

Artificial intelligence can predict on- and off-target activity of CRISPR tools that target RNA instead of DNA, according to new research published in Nature Biotechnology.

The study by researchers at New York University, Columbia University, and the New York Genome Center combines a deep learning model with CRISPR screens to control the expression of human genes in different ways, such as shutting them off completely, like flicking a light switch, or partially turning down their activity, like using a dimmer knob. These precise gene controls could be used to develop new CRISPR-based therapies.

CRISPR is a gene editing technology with many uses in biomedicine and beyond, from treating sickle cell anemia to engineering tastier mustard greens. It often works by targeting DNA using an enzyme called Cas9. In recent years, scientists discovered another type of CRISPR that instead targets RNA using an enzyme called Cas13.
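Concretely, the prediction problem is: given a guide sequence, how strongly will Cas13 knock down its intended transcript, and which near-matching sequences elsewhere might it also hit? A toy illustration of the off-target candidate search by mismatch counting follows; this is a deliberate simplification for intuition (the study trains a deep learning model on CRISPR screen data rather than using a heuristic, and strand/complementarity details are ignored here):

```python
# Toy illustration of RNA-targeting guide scoring: find transcript
# windows that a Cas13 guide could engage with few mismatches. This is
# a deliberate simplification -- the study uses a trained deep learning
# model, not mismatch counting, and complementarity is ignored here.
def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def candidate_sites(guide: str, transcript: str, max_mismatches: int = 3):
    """Slide the guide along a transcript; keep near-matching windows."""
    k = len(guide)
    hits = []
    for i in range(len(transcript) - k + 1):
        window = transcript[i : i + k]
        mm = hamming(guide, window)
        if mm <= max_mismatches:
            hits.append((i, mm, window))
    return hits

guide = "ACGUACGUACGUACGUACGUACG"  # hypothetical 23-nt Cas13 guide
transcript = "UUACGUACGUACGUACGUACGUACGAAUU"  # hypothetical transcript
for pos, mm, site in candidate_sites(guide, transcript):
    kind = "on-target" if mm == 0 else "possible off-target"
    print(f"pos {pos}: {mm} mismatches -> {kind} ({site})")
```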