
AI Predicts Activity of RNA-Targeting CRISPR Tools

Researchers at New York University (NYU), Columbia University, and the New York Genome Center have developed an artificial intelligence (AI) platform that can predict on- and off-target activity of CRISPR tools that target RNA instead of DNA.

The team paired a deep learning model with CRISPR screens to control the expression of human genes in different ways, akin to either flicking a light switch to shut them off completely or using a dimmer knob to partially turn down their activity. The resulting neural network, called TIGER (targeted inhibition of gene expression via gRNA design), was able to predict efficacy from the guide sequence and its context. The team suggests the new technology could pave the way for the development of precise gene controls for use in CRISPR-based therapies.

“Our deep learning model can tell us not only how to design a guide RNA that knocks down a transcript completely, but can also ‘tune’ it—for instance, having it produce only 70% of the transcript of a specific gene,” said Andrew Stirn, a PhD student at Columbia Engineering and the New York Genome Center. Stirn is co-first author of the researchers’ published paper in Nature Biotechnology, titled “Prediction of on-target and off-target activity of CRISPR-Cas13D guide RNAs using deep learning.” In their paper, the researchers concluded, “We believe that TIGER predictions will enable ranking and ultimately avoidance of undesired off-target binding sites and nuclease activation, and further spur the development of RNA-targeting therapeutics.”
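To make the idea concrete, here is a minimal, hypothetical sketch of how a model in this vein could score guide RNAs: a small convolutional network takes a one-hot encoded guide plus its surrounding target context and outputs a predicted knockdown fraction. The architecture, sequence length, and scoring head below are illustrative assumptions and do not reproduce the published TIGER model.

```python
# Hypothetical sketch, not the published TIGER architecture: a tiny 1D CNN that
# maps a one-hot encoded guide-plus-context RNA sequence to a predicted
# knockdown fraction between 0 (no effect) and 1 (complete knockdown).
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq: str) -> torch.Tensor:
    """Encode an RNA sequence as a (4, length) one-hot tensor."""
    x = torch.zeros(4, len(seq))
    for i, base in enumerate(seq):
        x[BASES.index(base), i] = 1.0
    return x

class GuideEfficacyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # length-agnostic pooling over the sequence
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):  # x: (batch, 4, sequence_length)
        return self.head(self.features(x))

# Untrained weights, so the score is meaningless; this only shows the data flow.
guide_plus_context = "AUGGCUACGUAGCUAGCUAGCUAUGCGAUCGAUCG"  # made-up sequence
model = GuideEfficacyCNN()
score = model(one_hot(guide_plus_context).unsqueeze(0))
print(f"Predicted knockdown fraction: {score.item():.2f}")
```

Ranking candidate guides by such a score, and penalizing predicted off-target activity, is the kind of workflow the paper’s conclusion points toward.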

Could AI-powered robot ‘companions’ combat human loneliness?

Companion robots enhanced with artificial intelligence may one day help alleviate the loneliness epidemic, suggests a new report from researchers at Auckland, Duke, and Cornell Universities.

Their report, appearing in the July 12 issue of Science Robotics, maps some of the ethical considerations for governments, technologists, and clinicians, and urges stakeholders to come together to rapidly develop guidelines for trust, agency, engagement, and real-world efficacy.

It also proposes a new way to measure whether a companion robot is helping someone.

Computer Vision System Marries Image Recognition and Generation

Computers possess two remarkable capabilities with respect to images: They can both identify them and generate them anew. Historically, these functions have stood separate, akin to the disparate skills of a chef who is good at creating dishes (generation) and a connoisseur who is good at tasting them (recognition).

Yet one can’t help but wonder: What would it take to orchestrate a harmonious union between these two distinct capacities? Both chef and connoisseur share a common understanding of the taste of food. Similarly, a unified vision system requires a deep understanding of the visual world.

Now, researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have trained a system to infer the missing parts of an image, a task that requires deep comprehension of the image’s content. In successfully filling in the blanks, the system, known as the Masked Generative Encoder (MAGE), achieves two goals at the same time: accurately identifying images and creating new ones with striking resemblance to reality.
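The core training idea, roughly, is masked-token modeling: hide a variable fraction of an image’s tokens and ask a single network to predict what is missing. The sketch below is a simplified, assumed version of that idea using random integer tokens in place of the learned semantic tokens MAGE reportedly operates on; the model class, dimensions, and mask ratio are illustrative, not the MIT implementation.

```python
# Simplified sketch of masked-token modeling: a transformer encoder is given a
# partially masked token sequence and trained to reconstruct the hidden tokens.
import torch
import torch.nn as nn

class MaskedTokenModel(nn.Module):
    def __init__(self, vocab_size=1024, dim=256, num_tokens=196):
        super().__init__()
        self.embed = nn.Embedding(vocab_size + 1, dim)  # +1 for the [MASK] id
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_logits = nn.Linear(dim, vocab_size)
        self.mask_id = vocab_size

    def forward(self, tokens, mask):
        x = tokens.masked_fill(mask, self.mask_id)   # hide the masked positions
        h = self.encoder(self.embed(x) + self.pos)   # contextual features (recognition)
        return self.to_logits(h), h                  # logits fill in masked tokens (generation)

# Toy training step: mask roughly two-thirds of the tokens and reconstruct them.
model = MaskedTokenModel()
tokens = torch.randint(0, 1024, (2, 196))            # stand-in for image tokens
mask = torch.rand(2, 196) < 0.65
logits, features = model(tokens, mask)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```

The intuition is that a high mask ratio pushes the network toward generation (it must synthesize most of the image), while the same trained encoder’s features can be reused for recognition, which is the unification the CSAIL team is after.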

A connected robot team could improve our space exploration capabilities

A team of robots would still be able to complete a mission if one or two of the machines malfunction.

Swiss researchers led by ETH Zurich are exploring the possibility of sending an interconnected team of walking and flying exploration robots to the Moon, a press statement reveals.

In recent tests, the researchers equipped three ANYmal robots with scientific instruments to test whether they would be suitable for lunar exploration.

Study provides unprecedented insights into the complexity of large-scale neural networks

That experiences leave their trace in the connectivity of the brain has been known for a while, but a pioneering study by researchers at the German Center for Neurodegenerative Diseases (DZNE) and TUD Dresden University of Technology now shows how massive these effects really are. The findings in mice provide unprecedented insights into the complexity of large-scale neural networks and brain plasticity. Moreover, they could pave the way for new brain-inspired artificial intelligence methods. The results, based on an innovative “brain-on-chip” technology, are published in the scientific journal Biosensors and Bioelectronics.

The Dresden researchers explored the question of how an enriched experience affects the brain’s circuitry. For this, they deployed a so-called neurochip with more than 4,000 electrodes to detect the electrical activity of brain cells. This innovative platform made it possible to register the “firing” of thousands of neurons simultaneously. The area examined – much smaller than a human fingernail – covered an entire mouse hippocampus. This brain structure, shared by humans, plays a pivotal role in learning and memory, making it a prime target for the ravages of dementias like Alzheimer’s disease. For their study, the scientists compared brain tissue from mice that were raised differently. While one group of rodents grew up in standard cages that did not offer any special stimuli, the others were housed in an “enriched environment” that included rearrangeable toys and maze-like plastic tubes.

“The results by far exceeded our expectations,” said Dr. Hayder Amin, lead scientist of the study. Amin, an expert in neuroelectronics and computational neuroscience, heads a research group at DZNE. With his team, he developed the technology and analysis tools used in this study. “Simplified, one can say that the neurons of mice from the enriched environment were much more interconnected than those raised in standard housing. No matter which parameter we looked at, a richer experience literally boosted connections in the neuronal networks. These findings suggest that leading an active and varied life shapes the brain on whole new grounds.”
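For a sense of what “more interconnected” can mean computationally, here is an illustrative sketch, not the DZNE team’s actual analysis pipeline: spike counts from each recorded unit are binned over time, all pairwise correlations are computed, and the fraction of strongly correlated pairs serves as a crude connectivity density that can be compared between the enriched and standard groups. The threshold and the synthetic data are assumptions made purely for demonstration.

```python
# Illustrative connectivity proxy: bin each unit's spikes, correlate all pairs,
# and report the fraction of pairs whose correlation exceeds a threshold.
import numpy as np

def connectivity_density(spike_counts: np.ndarray, threshold: float = 0.3) -> float:
    """spike_counts: (n_units, n_time_bins) binned firing.
    Returns the fraction of unit pairs with Pearson correlation above threshold."""
    corr = np.corrcoef(spike_counts)
    n = corr.shape[0]
    upper = corr[np.triu_indices(n, k=1)]   # unique pairs only
    return float(np.mean(upper > threshold))

rng = np.random.default_rng(0)
# Synthetic stand-ins: "enriched" units share a common drive, "standard" units fire independently.
drive = rng.poisson(2.0, size=(1, 1000))
enriched = rng.poisson(1.0, size=(200, 1000)) + drive   # correlated activity
standard = rng.poisson(3.0, size=(200, 1000))           # independent activity
print("enriched density:", connectivity_density(enriched))
print("standard density:", connectivity_density(standard))
```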

Role of AI in Spine Injuries in Sports

Sports teams spend millions of dollars on their players’ health and fitness, and any injury can be detrimental to a player’s career. Artificial intelligence (AI) has the potential to significantly change the way sports-related spine injuries are diagnosed, treated, and managed. Tools such as Spindle and SpindleX make it easier to prevent long-term injuries and spinal issues by detecting even the smallest variations in time. However, AI has only just begun its foray into healthcare, and in particular into radiology and spine imaging.

With AI-assisted radiology imaging, it is becoming easier to prevent and treat injuries we didn’t even know existed. AI-assisted reports are helping physicians and surgeons make better, more accurate decisions and treatment plans, saving millions of dollars across the healthcare industry. Here are a few examples of how AI is improving the treatment of sports-related spine injuries:

This CEO replaced 90% of support staff with an AI chatbot

The chief executive of an Indian startup laid off 90% of his support staff after the firm built a chatbot powered by artificial intelligence that he says can handle customer queries much faster than his employees.

Summit Shah, the founder and CEO of Dukaan, a Bangalore-based e-commerce company, said on Twitter Monday that the chatbot — built by one of the firm’s data scientists in two days — could respond to initial customer queries instantly, whereas his staff’s first responses were sent after an average of 1 minute and 44 seconds.

The average time taken to resolve a customer’s issue also dropped by almost 98% when they interacted with the chatbot, he tweeted.

Slack’s vision for enterprise AI: Empower ‘everybody to automate’


The messaging software company Slack sees massive potential in generative AI and large language models to enable more automation and improve workplace productivity and efficiency, said Steve Wood, Slack’s SVP of product management, at the VentureBeat Transform 2023 conference on Tuesday.

“For me, I think automation, integration and AI are going to have a profound impact on how we experience software going forward,” Wood said in his panel discussion with Brian Evergreen, founder and CEO of the Profitable Good Company, a leadership advisory firm.

How will religions deal with an omnipotent AI?

I’m excited to share my latest article with Aporia Magazine, where I’m writing a series of stories on transhumanism. This one, on AI and religion, is now out.


Written by Zoltan Istvan.

A consensus of 350 top AI experts believes that by 2060 engineers could create a superintelligence to rival the human mind. This machine intelligence might create complex symphonies, direct blockbuster movies and run market-beating companies. But would it be sophisticated enough to understand spirituality, practice a religion or commune with a higher power?

In conferences and forums around the world, theologians and scientists are trying to answer these questions. Some are even debating whether the superintelligence should be converted to a specific religious perspective when it arrives – and then maybe even saved.

Wired Magazine’s executive founding editor Kevin Kelly once said, “The creator made us as beings with free will and consciousness – we are going to do the same thing. We are going to make beings with free will and consciousness, because we are in the image of the creator.”
