
A Stanford Medicine study reveals six subtypes of depression, identified through brain imaging and machine learning. These subtypes exhibit unique brain activity patterns, helping predict which patients will benefit from specific antidepressants or behavioral therapies. This approach aims to personalize and improve depression treatment efficacy.

In the not-too-distant future, a quick brain scan during a screening assessment for depression could identify the best treatment.

According to a new study led by researchers at Stanford Medicine, brain imaging combined with a type of AI called machine learning can reveal subtypes of depression and anxiety. The study, to be published today (June 17) in the journal Nature Medicine, sorts depression into six biological subtypes, or “biotypes,” and identifies treatments that are more likely or less likely to work for three of these subtypes.

From mundane chores to complex interactions, RoboCasa trains robots to navigate the real world:


Researchers have developed a large-scale simulation framework for training general-purpose robots in diverse, everyday settings.

The framework, called RoboCasa, provides access to thousands of 3D assets across more than 150 object categories, as well as dozens of furniture items and appliances that robots can interact with.

A range of generative AI tools are used to increase realism and diversity, including text-to-3D models for object assets and text-to-image models for environmental textures.

Delicious.


Science publisher Springer Nature has developed two new AI tools to detect fake research and duplicate images in scientific papers, helping to protect the integrity of published studies.

The growing number of cases of fake research is already putting a strain on the scientific publishing industry, according to Springer Nature. Following a pilot phase, the publisher is now rolling out two AI tools to identify papers with AI-generated fake content and problematic images — both red flags for research integrity issues.

The first tool, called “Geppetto,” detects AI-generated content, a telltale sign of “paper mills” producing fake research papers. The tool divides the paper into sections and uses its own algorithms to check the consistency of the text in each section.
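Springer Nature's actual algorithms are proprietary and unpublished, so the following is only an illustrative sketch of the general idea: split a paper into sections, compute a crude stylistic fingerprint per section, and flag sections whose statistics deviate sharply from the rest of the manuscript. The type-token ratio used here is a stand-in metric, not what Geppetto does.

```python
# Illustrative sketch of per-section consistency checking.
# Note: the real Geppetto algorithms are proprietary; the type-token
# ratio below is a hypothetical stand-in for a stylistic fingerprint.
from statistics import median

def section_stat(text):
    """Type-token ratio: unique words / total words, a crude style metric."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def flag_outlier_sections(sections, threshold=0.3):
    """Flag sections whose fingerprint deviates from the paper's median."""
    stats = {name: section_stat(body) for name, body in sections.items()}
    mid = median(stats.values())
    return [name for name, s in stats.items() if abs(s - mid) > threshold]

paper = {
    "intro": "we study molecular robots and their controllers in detail",
    "methods": "dna enzymes were mixed and observed under fluorescence microscopy",
    "results": "spam spam spam spam spam spam spam spam spam spam",
}
suspicious = flag_outlier_sections(paper)
```

A production system would of course use far richer signals (language-model perplexity, citation patterns, image forensics), but the section-by-section comparison structure is the point of this sketch.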

Researchers from Tohoku University and Kyoto University have successfully developed a DNA-based molecular controller that autonomously directs the assembly and disassembly of molecular robots. This pioneering technology marks a significant step towards advanced autonomous molecular systems with potential applications in medicine and nanotechnology.

Details of the breakthrough were published in the journal Science Advances (“Autonomous assembly and disassembly of gliding molecular robots regulated by a DNA-based molecular controller”).

“Our newly developed molecular controller, composed of artificially designed DNA molecules and enzymes, coexists with molecular robots and controls them by outputting specific DNA molecules,” points out Shin-ichiro M. Nomura, an associate professor at Tohoku University’s Graduate School of Engineering and co-author of the study. “This allows the molecular robots to self-assemble and disassemble automatically, without the need for external manipulation.”

Internet data scraping is one of the biggest fights in AI right now. Tech companies argue that anything on the public internet is fair game, but they are facing a barrage of lawsuits over their data practices and copyright. It will likely take years until clear rules are in place.

In the meantime, they are running out of training data to build even bigger, more powerful models, and to Meta, your posts are a gold mine.

If you’re uncomfortable with having Meta use your personal information and intellectual property to train its AI models in perpetuity, consider opting out. Although Meta does not guarantee it will allow this, it does say it will “review objection requests in accordance with relevant data protection laws.”

From King’s College London, Carnegie Mellon University, and the University of Birmingham.

LLM-driven robots risk enacting discrimination, violence, and unlawful actions.

Rumaisa Azeem, Andrew Hundt, Masoumeh Mansouri, Martim Brandão. June 2024.
Paper: https://arxiv.org/abs/2406.08824
Code: https://github.com/SepehrDehdashtian/


The data and code for the paper ‘The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models’ — SepehrDehdashtian/the-dark-side-of-dataset-scaling.

This review spotlights the revolutionary role of deep learning (DL) in expanding our understanding of RNA. RNA is a fundamental biomolecule that shapes and regulates diverse phenotypes, including human diseases, and understanding the principles governing its functions is a key objective of current biology. Recently, big data produced via high-throughput experiments have been used to develop DL models for analyzing and predicting RNA-related biological processes. This review emphasizes the role of public databases in supplying the big data used to train DL models, and introduces the core DL concepts needed to train models on biological data. By extensively examining DL studies across various fields of RNA biology, the authors suggest how to better leverage DL to reveal novel biological knowledge and demonstrate its potential for deciphering the complex biology of RNA.

This summary was initially drafted using artificial intelligence, then revised and fact-checked by the author.

Gene expression is inherently dynamic, owing to complex regulation and stochastic biochemical events. Here the authors train a deep neural network to predict and dynamically control gene expression in thousands of individual bacteria in real time, then apply it to control antibiotic resistance and study single-cell survival dynamics.
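The closed-loop structure of such an approach can be sketched in miniature. This toy example stands in for the study's actual setup (which uses a deep neural network and live single-cell microscopy): a hypothetical one-step predictor of expression dynamics is inverted to choose the inducer dose that should drive a cell toward a target level, despite stochastic noise. All model parameters here are made up for illustration.

```python
# Toy model-predictive control of gene expression in one cell.
# The predictor, gains, and noise model are illustrative assumptions,
# not the dynamics or network from the study.
import random

def predict_expression(current, inducer, decay=0.1, gain=0.5):
    """Hypothetical one-step model: expression decays and rises with inducer."""
    return current * (1 - decay) + gain * inducer

def control_step(current, target, decay=0.1, gain=0.5):
    """Invert the predictor: pick the inducer dose whose predicted
    next-step expression equals the target (doses cannot be negative)."""
    return max(0.0, (target - current * (1 - decay)) / gain)

def simulate(target=10.0, steps=50, noise=0.2, seed=0):
    """Run the measure -> predict -> actuate loop under biochemical noise."""
    rng = random.Random(seed)
    x = 0.0  # current expression level
    for _ in range(steps):
        u = control_step(x, target)
        x = predict_expression(x, u) + rng.gauss(0, noise)
    return x

final_level = simulate()
```

In the study, this loop runs in parallel over thousands of cells, with the learned network replacing the hand-written predictor and a microscope supplying the measurements.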