
Ultra-Bright and -Stable Red and Near-Infrared Squaraine Fluorophores for In Vivo Two-Photon Imaging

Fluorescent dyes that are bright, stable, small, and biocompatible are needed for high-sensitivity two-photon imaging, but the combination of these traits has been elusive. We identified a class of squaraine derivatives with large two-photon action cross-sections (up to 10,000 GM) at near-infrared wavelengths critical for in vivo imaging. We demonstrate the biocompatibility and stability of a red-emitting squaraine-rotaxane (SeTau-647) by imaging dye-filled neurons in vivo over 5 days, and its utility for sensitive subcellular imaging by synthesizing a specific peptide-conjugate label for the synaptic protein PSD-95.

Elon Musk on DOGE, Optimus, Starlink Smartphones, Evolving with AI, Why the West is Imploding

Questions to inspire discussion.

🧠 Q: What improvements does Tesla’s AI5 chip offer over AI4? A: AI5 provides a 40x improvement in silicon, addressing core limitations of AI4, with 8x more compute, 9x more memory, 5x more memory bandwidth, and the ability to easily handle mixed precision models.

📱 Q: How will Starlink-enabled smartphones revolutionize connectivity? A: Starlink-enabled smartphones will allow direct high bandwidth connectivity from satellites to phones, requiring hardware changes in phones and collaboration between satellite providers and handset makers.

🌐 Q: What is Elon Musk’s vision for Starlink as a global carrier? A: Musk envisions Starlink as a global carrier working worldwide, offering users a comprehensive solution for high bandwidth at home and direct to cell through one direct deal.

🚀 Q: What are the expected capabilities of SpaceX’s Starship? A: Starship is projected to demonstrate full reusability next year, carrying over 100 tons to orbit, being five times bigger than Falcon Heavy, and capable of catching both the booster and ship.

AI and Compute.

How scientists got a glimpse of the inner workings of protein language models

Now, a team of researchers at the Massachusetts Institute of Technology (MIT) in the United States has used an innovative technique to shed light on the inner workings of the language models that predict the structure and function of proteins. They described their findings in the study, ‘Sparse autoencoders uncover biologically interpretable features in protein language model representations’, published last month in the journal Proceedings of the National Academy of Sciences. The team included Onkar Gujral, Mihir Bafna, Eric Alm, and Bonnie Berger.


Berger, the senior author of the study, told The Indian Express over email, “This is the first work that allows us to look inside the ‘black box’ of protein language models to gain insights into why they function as they do.”

Scientists Turned Our Cells Into Quantum Computers—Sort Of

For the protein qubit to “encode” more information about what is going on inside a cell, the fluorescent protein needs to be genetically engineered to match the protein scientists want to observe in a given cell. The glowing protein is then attached to the target protein and zapped with a laser so it reaches a state of superposition, turning it into a nano-probe that picks up what is happening in the cell. From there, scientists can infer how a certain biological process happens, what the beginnings of a genetic disease look like, or how cells respond to certain treatments.

And eventually, this kind of sensing could be used in non-biological applications as well.

“Directed evolution on our EYFP qubit could be used to optimize its optical and spin properties and even reveal unexpected insights into qubit physics,” the researchers said. “Protein-based qubits are positioned to take advantage of techniques from both quantum information sciences and bioengineering, with potentially transformative possibilities in both fields.”

How early brain structure primes itself to learn efficiently

Vision happens when patterns of light entering the eye are converted into reliable patterns of brain activity. This reliability allows the brain to recognize the same object each time it is seen. Our brains, however, are not born with this ability; instead, we develop it through visual experience. Collaborating scientists at MPFI and the Frankfurt Institute for Advanced Studies have recently discovered key circuit changes that lead to the maturation of reliable brain activity patterns.

Their findings, published in Neuron this week, are likely generalizable beyond vision, providing a framework to understand the brain’s unique ability to adapt and learn quickly during the earliest stages of development.

The brain is a highly organized structure. Like other brain regions, visual areas have structure to them, which scientists call modules. This modular organization consists of patches of neurons that activate together in response to specific information. For example, some patches of neurons activate together in response to seeing vertical stripes, while other patches activate when horizontal stripes are seen.
