
Microscopic movies capture brain proteins in action, revealing new insights into their shapes and functions

Our cells rely on microscopic highways and specialized protein vehicles to move everything—from positioning organelles to carting protein instructions to disposing of cellular garbage. These highways (called microtubules) and vehicles (called motor proteins) are indispensable to cellular function and survival.

The dysfunction of motor proteins and their associated proteins can lead to severe neurodevelopmental and neurodegenerative disorders. For example, the dysfunction of Lis1, a partner protein to the motor protein dynein, can lead to the rare fatal birth defect lissencephaly, or “smooth brain,” for which there is no cure. But therapeutics that target and restore dynein or Lis1 function could change those dismal outcomes—and developing those therapeutics depends on thoroughly understanding how dynein and Lis1 interact.

New research from the Salk Institute and UC San Diego captured short movies of Lis1 “turning on” dynein. The movies allowed the team to catalog 16 shapes that the two proteins take as they interact, some of which have never been seen before. These insights will be foundational for designing future therapeutics that restore dynein and Lis1 function, since they shine a light on precise locations where drugs could interact with the proteins.

Bimodal video imaging platform predicts hyperspectral frames from RGB video

Hyperspectral imaging (HSI), or imaging spectroscopy, captures detailed information across the electromagnetic spectrum by acquiring a spectrum for each pixel in an image. This enables precise identification of materials through their spectral signatures.
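
To make the spectrum-per-pixel idea concrete, below is a minimal sketch (not from the article) of identifying materials by comparing each pixel's spectrum against a reference spectral library with the spectral angle mapper; the cube dimensions, band count, and reference spectra are placeholder assumptions.

```python
import numpy as np

# Illustrative hyperspectral cube: height x width x spectral bands.
H, W, B = 64, 64, 128
cube = np.random.rand(H, W, B)                # stand-in for a measured scene

# Hypothetical reference library: one known spectrum per material class.
names = ["vegetation", "water", "soil"]
refs = np.random.rand(len(names), B)          # (classes, B)

# Spectral angle mapper: label each pixel with the reference spectrum whose
# shape (direction in band space) is closest to the pixel's spectrum.
pixels = cube.reshape(-1, B)                                   # (H*W, B)
cos = (pixels @ refs.T) / (
    np.linalg.norm(pixels, axis=1, keepdims=True)
    * np.linalg.norm(refs, axis=1) + 1e-12
)
angles = np.arccos(np.clip(cos, -1.0, 1.0))                    # (H*W, classes)
labels = angles.argmin(axis=1).reshape(H, W)                   # per-pixel class map
print(dict(enumerate(names)), labels.shape)
```

Because the spectral angle compares only the shape of each spectrum rather than its overall brightness, the same material is matched consistently under varying illumination, which is why it is a common baseline for material identification.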

HSI supports Earth remote sensing applications such as automated classification, abundance mapping, and estimation of physical and biological properties like soil moisture, sediment density, biomass, leaf area, and pigment content.

Although HSI offers detailed insight into a remote sensing scene, HSI data may not be readily available for an intended application. Recent studies have attempted to combine HSI with traditional red-green-blue (RGB) video acquisition to lower costs and improve performance. However, this fusion technology still faces technical challenges.
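
As a rough illustration of that fusion idea (not the platform described in the study), the sketch below fits a simple per-pixel linear map from paired RGB and hyperspectral training frames and uses it to predict a hyperspectral frame from a new RGB video frame; the frame sizes, band count, and linear-model choice are assumptions.

```python
import numpy as np

H, W, B = 48, 48, 31        # assumed frame size and number of predicted bands

# Assumed paired training data: RGB frames with matching hyperspectral frames.
rgb_train = np.random.rand(10, H, W, 3)
hsi_train = np.random.rand(10, H, W, B)

# Fit one least-squares map per band: [R, G, B, 1] -> spectral value.
X = rgb_train.reshape(-1, 3)
X = np.hstack([X, np.ones((X.shape[0], 1))])       # add a bias column
Y = hsi_train.reshape(-1, B)
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)     # (4, B)

# Predict a hyperspectral frame for a new RGB video frame.
rgb_new = np.random.rand(H, W, 3)
Xn = np.hstack([rgb_new.reshape(-1, 3), np.ones((H * W, 1))])
hsi_pred = (Xn @ coeffs).reshape(H, W, B)
print(hsi_pred.shape)                              # (48, 48, 31)
```

A learned linear map is only a baseline; because three RGB values cannot uniquely determine a full spectrum, practical systems rely on richer models and on temporal cues from the video, which is part of what makes the fusion technically challenging.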

The Kardashev Scale: Type 1 to Type 7 Civilizations

Make sure to watch this next video about Type 1 to Type 4 Civilizations: https://youtu.be/5fTNGvuPTMU.

This video explores the Kardashev scale and Type 1 to Type 7 civilizations.

Machine learning method generates circuit synthesis for quantum computing

Researchers from the University of Innsbruck have unveiled a novel method for preparing quantum operations on a given quantum computer, using a machine-learning generative model to find the sequence of quantum gates needed to execute a desired operation.

The study, recently published in Nature Machine Intelligence, marks a significant step forward in realizing the full potential of quantum computing.

Generative models like diffusion models are one of the most important recent developments in machine learning (ML), with models such as Stable Diffusion and DALL·E revolutionizing the field of image generation. These models can produce high-quality images from text descriptions.
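
To make the synthesis task concrete, the sketch below (an illustrative brute-force search, not the Innsbruck generative model) scores candidate single-qubit gate sequences by how closely their combined unitary matches a target operation; the gate set, target, and search strategy are assumptions.

```python
import numpy as np
from itertools import product

# Assumed single-qubit gate set for illustration.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
GATES = {"H": H, "T": T, "S": S}

def circuit_unitary(sequence):
    """Left-multiply each gate so earlier gates in the sequence act first."""
    U = np.eye(2, dtype=complex)
    for name in sequence:
        U = GATES[name] @ U
    return U

def closeness(U, V):
    """|tr(U^dagger V)| / d: equals 1.0 when U and V agree up to a global phase."""
    return abs(np.trace(U.conj().T @ V)) / U.shape[0]

# Assumed target operation: a small rotation about the Z axis.
theta = 0.3
target = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Exhaustive search over short sequences; a trained generative model would
# instead propose promising sequences directly rather than enumerating them.
candidates = (seq for length in range(1, 7) for seq in product(GATES, repeat=length))
best = max(candidates, key=lambda seq: closeness(target, circuit_unitary(seq)))
print(best, round(closeness(target, circuit_unitary(best)), 4))
```

The enumeration above grows exponentially with circuit length, which is exactly the bottleneck a generative model sidesteps by learning which gate sequences are likely to realize a given operation.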

Anthropic CEO claims AI models hallucinate less than humans

Speaking at a press briefing at Anthropic’s first developer event, Code with Claude, in San Francisco on Thursday, Anthropic CEO Dario Amodei said he believes today’s AI models hallucinate (make things up and present them as if they’re true) at a lower rate than humans do.

Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI — AI systems with human-level intelligence or better.

“It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to TechCrunch’s question.
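
“How you measure it” matters because a hallucination rate is usually operationalized as the share of checked claims that fail verification. The toy sketch below (not an Anthropic benchmark; all claims and labels are hypothetical) shows one such comparison.

```python
# Toy illustration of one way to measure a "hallucination rate": the fraction
# of extracted factual claims that a fact-checker marks as unsupported.
# All claims and labels here are hypothetical examples.

def hallucination_rate(claims):
    """claims: list of (claim_text, supported) pairs; returns unsupported share."""
    if not claims:
        return 0.0
    return sum(1 for _, supported in claims if not supported) / len(claims)

model_claims = [
    ("Paris is the capital of France", True),
    ("The Eiffel Tower was completed in 1650", False),
    ("Water boils at 100 C at sea level", True),
]
human_claims = [
    ("The Great Wall of China is visible from the Moon", False),
    ("Mount Everest is the highest mountain above sea level", True),
]

print("model:", hallucination_rate(model_claims))   # 0.33...
print("human:", hallucination_rate(human_claims))   # 0.5
```

Different choices of which claims to extract and how to verify them can flip such a comparison, which is why the claim is hard to settle with a single benchmark.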

CNS-CLIP: Transforming a Neurosurgical Journal Into a… : Neurosurgery

Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine marks a shift toward task-agnostic models trained on large-scale, often internet-based, data. Recent research on smaller foundation models trained on specific literature, such as programming textbooks, has demonstrated capabilities similar or superior to those of large generalist models, suggesting a potential middle ground between small task-specific models and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications and leveraging data exclusively from Neurosurgery Publications.

METHODS:

We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction using an artificial intelligence pipeline for quality control. Our final data set included 24 021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification.
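
As a rough sketch of what the fine-tuning step could look like (the study's actual protocol, data pipeline, and hyperparameters are not reproduced here), the snippet below runs one contrastive fine-tuning update of the OpenAI CLIP checkpoint on a figure-caption batch using the Hugging Face transformers implementation; the file names and captions are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the pretrained OpenAI CLIP checkpoint to be fine-tuned.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

# Placeholder batch; in the study these would be figure-caption pairs
# extracted from Neurosurgery Publications articles.
images = [Image.open(p).convert("RGB") for p in ["fig1.png", "fig2.png"]]
captions = [
    "Axial CT demonstrating an epidural hematoma",
    "Intraoperative photograph of a craniotomy",
]

model.train()
inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)
# return_loss=True makes CLIP compute its symmetric image-text contrastive loss.
outputs = model(**inputs, return_loss=True)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(outputs.loss))
```

After fine-tuning, the same text and image encoders can be reused for the evaluation tasks the abstract lists, such as retrieving the most relevant figure for a text query or zero-shot classification by comparing an image against a set of candidate captions.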
