
Just published by my son.

Automatic hippocampus imaging, with about 20 minutes of cloud computing per scan.


Like neocortical structures, the archicortical hippocampus differs in its folding patterns across individuals. Here, we present an automated and robust BIDS-App, HippUnfold, for defining and indexing individual-specific hippocampal folding in MRI, analogous to popular tools used in neocortical reconstruction. Such tailoring is critical for inter-individual alignment, with topology serving as the basis for homology. This topological framework enables qualitatively new analyses of morphological and laminar structure in the hippocampus or its subfields. It is critical for refining current neuroimaging analyses at a meso- as well as micro-scale. HippUnfold uses state-of-the-art deep learning combined with previously developed topological constraints to generate uniquely folded surfaces to fit a given subject’s hippocampal conformation. It is designed to work with commonly employed sub-millimetric MRI acquisitions, with possible extension to microscopic resolution. In this paper, we describe the power of HippUnfold in feature extraction, and highlight its unique value compared to several extant hippocampal subfield analysis methods.
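The abstract leaves the topological machinery implicit, but the "previously developed topological constraints" build on earlier hippocampal unfolding work in which Laplace's equation is solved across the hippocampal gray matter between fixed anatomical boundaries, yielding smooth coordinates that index every voxel regardless of an individual's folding. The snippet below is a minimal illustrative sketch of that idea on a 2D mask; it is not HippUnfold's implementation, and the function and variable names are ours.

```python
# Illustrative sketch only (not HippUnfold code): solve Laplace's equation
# over a tissue mask, with two anatomical boundaries held at fixed values,
# to obtain a smooth 0..1 coordinate spanning the structure.
import numpy as np

def laplace_coordinate(domain, source, sink, n_iter=2000):
    """domain: bool mask of the tissue; source/sink: bool masks of the two
    boundaries, fixed at 0.0 and 1.0 respectively."""
    u = np.zeros(domain.shape, dtype=float)
    u[sink] = 1.0
    interior = domain & ~source & ~sink
    for _ in range(n_iter):
        # Jacobi update: each voxel becomes the mean of its 4 neighbors.
        # (Voxels outside the mask stay at 0; a real implementation would
        # treat the outer tissue boundary more carefully.)
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]
        u[source], u[sink] = 0.0, 1.0  # re-impose boundary conditions
    return u  # iso-contours of u define one axis of the unfolded space
```

Two such orthogonal coordinates (e.g., along the hippocampal long axis and the proximal-distal axis) define an "unfolded" space in which homologous points can be compared across differently folded individuals.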

Keywords: Brain Imaging Data Structure; computational anatomy; deep learning; hippocampal subfields; hippocampus; human; image segmentation; magnetic resonance imaging; neuroscience.

© 2022, DeKraker et al.

Although the clinical efficacy of antibody-based therapeutics is well established, no methods for the de novo design of antibodies with wet-lab validation have been available.

About the study

A recent study, posted on the bioRxiv preprint server, used generative AI models to design de novo antibodies against three distinct targets in a zero-shot fashion. Zero-shot design means an antibody is designed to bind a given antigen without any follow-up optimization. The process is termed de novo because the antibodies were designed from first principles, or from scratch.

On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker’s emotional tone.

Microsoft calls VALL-E a “neural codec language model,” and it builds on a technology called EnCodec, which Meta announced in October 2022. Unlike other text-to-speech methods that typically synthesize speech by manipulating waveforms, VALL-E generates discrete audio codec codes from text and acoustic prompts.
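VALL-E itself has not been released, but EnCodec is open source, so the tokenization step described above can be shown directly. A minimal sketch, following the encodec package's published usage (the audio file name is a placeholder): it converts a waveform into the grid of discrete code indices that a "neural codec language model" would then learn to predict from text and a short acoustic prompt.

```python
# Turn a waveform into discrete EnCodec codes -- the token stream that a
# codec language model like VALL-E models, instead of raw waveforms.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()  # pretrained 24 kHz codec
model.set_target_bandwidth(6.0)             # kbps; sets number of codebooks

wav, sr = torchaudio.load("speaker_sample.wav")  # placeholder file name
wav = convert_audio(wav, sr, model.sample_rate, model.channels)
wav = wav.unsqueeze(0)  # add batch dimension

with torch.no_grad():
    encoded_frames = model.encode(wav)
# Concatenate frames into one [batch, n_codebooks, timesteps] integer tensor.
codes = torch.cat([frame[0] for frame in encoded_frames], dim=-1)
print(codes.shape, codes.dtype)  # e.g. torch.Size([1, 8, T]) torch.int64
```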

On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, The Verge reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.

On Discord, users sign into the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., “What’s the darkest thought you have about this?”). It then shares a person’s concerns—written as a few sentences of text—anonymously with someone else on the server who can reply anonymously with a short message of their own.

This blog post was co-authored with Guy Eyal, an NLP team leader at Gong.

TL;DR: In 2022, large models achieved state-of-the-art results across a range of tasks and domains. A significant breakthrough in natural language processing (NLP) came when models were trained to align with user intent and human preferences, leading to improved generation quality. Looking ahead to 2023, we can expect new methods to improve the alignment process (such as reinforcement learning with AI feedback), automatic metrics for measuring alignment effectiveness, and the emergence of personalized aligned models, possibly updated in an online manner. There may also be a focus on addressing factuality issues, as well as on developing the open-source tools and specialized compute resources needed to train and deploy aligned models at industrial scale. Beyond NLP, there will likely be progress in other modalities such as audio processing, computer vision, and robotics, and in the development of multimodal models.
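To make "aligning with human preferences" concrete: one common ingredient of that training recipe is a reward model fit to pairwise human judgments with a Bradley-Terry style loss, as popularized by the InstructGPT line of work. A minimal sketch, in which the reward tensors are placeholders standing in for scores a hypothetical reward model assigns to preferred and rejected responses:

```python
# Sketch of the pairwise preference loss used to fit reward models for
# RLHF; the reward values below are placeholders for a model's outputs.
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: maximize the margin by which the human-preferred
    response outscores the rejected one, averaged over a batch."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

r_chosen = torch.tensor([1.2, 0.3, 0.8])     # rewards for preferred replies
r_rejected = torch.tensor([0.4, 0.5, -0.1])  # rewards for rejected replies
print(preference_loss(r_chosen, r_rejected))  # shrinks as the margin grows
```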

2022 was an excellent year for machine learning, with numerous large language models (LLMs) published and achieving state-of-the-art results across various benchmarks. These LLMs demonstrated superior performance through few-shot learning, surpassing smaller models that had been fine-tuned on the same tasks [1–3]. This has the potential to reduce the need for specialized, in-domain datasets. Techniques like Chain-of-Thought prompting [4] and Self-Consistency [5] also improved the reasoning capabilities of LLMs, leading to significant gains on reasoning benchmarks.
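As an illustration of how Self-Consistency builds on chain-of-thought prompting: sample several reasoning chains at nonzero temperature, extract each chain's final answer, and take a majority vote. In the sketch below, sample_completion is a toy stand-in for a real LLM call, not any particular model's API:

```python
# Toy sketch of Self-Consistency: majority-vote over the final answers of
# several sampled chain-of-thought completions.
import random
from collections import Counter

def sample_completion(prompt: str, temperature: float = 0.7) -> str:
    # Stand-in for an LLM call: simulates noisy reasoning paths that
    # usually, but not always, reach the same final answer.
    return "...step-by-step reasoning...\nAnswer: " + random.choice(["8", "8", "8", "6"])

def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    answers = [
        sample_completion(prompt).rsplit("Answer:", 1)[-1].strip()
        for _ in range(n_samples)
    ]
    return Counter(answers).most_common(1)[0][0]  # most frequent answer wins

print(self_consistent_answer("A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total?"))
```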

The eerie new capabilities of artificial intelligence are about to show up inside a courtroom — in the form of an AI chatbot lawyer that will soon argue a case in traffic court.

That’s according to Joshua Browder, the founder of a consumer-empowerment startup who conceived of the scheme.

Sometime next month, Browder is planning to send a real defendant into a real court armed with a recording device and a set of earbuds. Browder’s company will feed audio of the proceedings into an AI that will in turn spit out legal arguments; the defendant, he says, has agreed to repeat verbatim the outputs of the chatbot to an unwitting judge.