
This is interesting. 😃


A new discovery in rats shows that the brain responds differently in immersive virtual reality environments than in the real world. The finding could help scientists understand how the brain brings together sensory information from different sources to create a cohesive picture of the world around us. It could also pave the way for “virtual reality therapy” for learning- and memory-related disorders, including ADHD, autism, Alzheimer’s disease, epilepsy, and depression.

Mayank Mehta, PhD, heads the W. M. Keck Center for Neurophysics and is a professor in the departments of physics, neurology, and electrical and computer engineering at UCLA. His laboratory studies a brain region called the hippocampus, which is a primary driver of learning and memory, including spatial navigation. To understand that role, researchers have studied the hippocampus extensively in rats as they perform spatial navigation tasks.

A team of researchers at Washington University School of Medicine has developed a deep learning model capable of classifying a brain tumor as one of six common types from a single 3D MRI scan, according to a study published in Radiology: Artificial Intelligence.

“This is the first study to address the most common intracranial tumors and to directly determine the tumor class or the absence of tumor from a 3D MRI volume,” said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, Ph.D., and Daniel Marcus, Ph.D., in Mallinckrodt Institute of Radiology’s Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri.

The six most common intracranial tumor types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each was documented through histopathology, which requires surgically removing tissue from the site of a suspected cancer and examining it under a microscope.
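For readers curious how such a classifier is typically put together, below is a minimal, hypothetical sketch of a 3D convolutional network in PyTorch. It is not the published model; the architecture, input size, and class ordering are illustrative assumptions, with a seventh "no tumor" class reflecting the study's ability to report the absence of tumor.

```python
# Minimal sketch (not the authors' model): a small 3D CNN that maps a single
# preprocessed 3D MRI volume to one of six tumor classes or "no tumor".
# Input shape, layer sizes, and class ordering are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = [
    "high-grade glioma", "low-grade glioma", "brain metastasis",
    "meningioma", "pituitary adenoma", "acoustic neuroma", "no tumor",
]

class Tumor3DCNN(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),           # global pooling -> (B, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width), e.g. a skull-stripped MRI volume
        h = self.features(x).flatten(1)
        return self.classifier(h)              # unnormalized class scores

if __name__ == "__main__":
    model = Tumor3DCNN()
    volume = torch.randn(1, 1, 96, 96, 96)     # placeholder MRI volume
    probs = torch.softmax(model(volume), dim=1)
    print(CLASSES[int(probs.argmax())])
```

In practice, the training data for a model like this come from volumes whose labels were established by histopathology, as described above.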

Widespread human SARS-CoV-2 infections combined with human-wildlife interactions create the potential for reverse zoonosis from humans to wildlife. We targeted white-tailed deer (Odocoileus virginianus) for serosurveillance based on evidence that these deer have ACE2 receptors with high affinity for SARS-CoV-2, are permissive to infection, exhibit sustained viral shedding, can transmit to conspecifics, and can be abundant near urban centers. We evaluated 624 pre- and post-pandemic serum samples from wild deer in four U.S. states for SARS-CoV-2 exposure. Antibodies were detected in 152 samples (40%) from 2021 using a surrogate virus neutralization test. A subset of samples was tested using a SARS-CoV-2 virus neutralization test, with high concordance between tests. These data suggest white-tailed deer in the populations assessed have been exposed to SARS-CoV-2.

One-Sentence Summary Antibodies to SARS-CoV-2 were detected in 40% of wild white-tailed deer sampled from four U.S. states in 2021.

SARS-CoV-2, the virus that causes COVID-19 in humans, can infect multiple domestic and wild animal species (1–7). Thus, the possibility exists for the emergence of new animal reservoirs of SARS-CoV-2, each with unique potential to maintain, disseminate, and drive novel evolution of this virus. Of particular concern are wildlife species that are abundant and live in close association with human populations (5).

Multisystem inflammatory syndrome in children (MIS-C) develops in a subset of COVID-19 patients, with manifestations similar to Kawasaki disease (KD). Here the authors probe the peripheral blood transcriptome of MIS-C patients to find signatures related to natural killer (NK) cell activation and CD8+ T cell exhaustion that are shared with KD patients.

Materials that change their properties in response to certain stimuli could come to occupy a valuable space in many fields, ranging from robotics, to medical care, to advanced aircraft. A new example of this type of shape-shifting technology is modeled on ancient chain mail armor, enabling it to swiftly switch from flexible to stiff thanks to carefully arranged interlocking particles.


Yes, this says a 3-year epigenetic clock reversal in just 8 weeks thanks to diet and lifestyle changes. There is a list of supplements too:

Alpha-ketoglutarate, vitamin C and vitamin A, curcumin, epigallocatechin gallate (EGCG), rosmarinic acid, quercetin, luteolin.


Manipulations to slow biological aging and extend healthspan are of interest given the societal and healthcare costs of our aging population. Herein we report on a randomized controlled clinical trial conducted among 43 healthy adult males between the ages of 50 and 72. The 8-week treatment program included diet, sleep, exercise and relaxation guidance, and supplemental probiotics and phytonutrients. The control group received no intervention. Genome-wide DNA methylation analysis was conducted on saliva samples using the Illumina Methylation Epic Array, and DNAmAge was calculated using the online Horvath DNAmAge clock (2013). The diet and lifestyle treatment was associated with a 3.23-year decrease in DNAmAge compared with controls (p=0.018). DNAmAge of those in the treatment group decreased by an average of 1.96 years by the end of the program compared to the same individuals at the beginning, with a strong trend towards significance (p=0.066). Changes in blood biomarkers were significant for mean serum 5-methyltetrahydrofolate (+15%, p=0.004) and mean triglycerides (−25%, p=0.009). To our knowledge, this is the first randomized controlled study to suggest that specific diet and lifestyle interventions may reverse Horvath DNAmAge (2013) epigenetic aging in healthy adult males. Larger-scale and longer-duration clinical trials are needed to confirm these findings, as well as investigation in other human populations.

Keywords: DNA methylation, epigenetic, aging, lifestyle, biological clock.
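For context on what the DNAmAge figure above represents: the Horvath (2013) clock is, at its core, an elastic-net regression, i.e. a weighted sum of methylation beta values at 353 CpG sites that is mapped back to years through an inverse age transform with an adult-age cutoff of 20. The Python sketch below illustrates that calculation; the coefficient file layout and column names are assumptions for illustration, not the published distribution format.

```python
# Minimal sketch of how an elastic-net epigenetic clock like Horvath (2013)
# turns methylation data into a DNAmAge estimate. The coefficient file and
# its column names are hypothetical; the real clock uses a published set of
# 353 CpG sites and weights.
import csv
import math

ADULT_AGE = 20  # Horvath's cutoff for the age transform


def inverse_age_transform(x: float) -> float:
    """Map the linear predictor back to years (inverse of Horvath's transform)."""
    if x <= 0:
        return (ADULT_AGE + 1) * math.exp(x) - 1
    return (ADULT_AGE + 1) * x + ADULT_AGE


def dnam_age(betas: dict[str, float], coefs: dict[str, float], intercept: float) -> float:
    """betas: CpG id -> methylation beta value (0..1); coefs: CpG id -> weight."""
    linear = intercept + sum(w * betas.get(cpg, 0.0) for cpg, w in coefs.items())
    return inverse_age_transform(linear)


def load_coefficients(path: str) -> tuple[dict[str, float], float]:
    """Expects a CSV with columns 'CpG' and 'Coefficient', with the intercept
    row labelled '(Intercept)'. This layout is an assumption for illustration."""
    coefs, intercept = {}, 0.0
    with open(path) as f:
        for row in csv.DictReader(f):
            if row["CpG"] == "(Intercept)":
                intercept = float(row["Coefficient"])
            else:
                coefs[row["CpG"]] = float(row["Coefficient"])
    return coefs, intercept
```

The trial's headline numbers (the 3.23-year between-group difference and the 1.96-year within-group decrease) are differences in this DNAmAge estimate measured before and after the 8-week program.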

đŸ¶ Lifelike & Realistic Pets on Amazon: https://amzn.to/3uegCXk đŸ±
đŸ”„ Robot Dog Kit on Amazon: https://amzn.to/3jirOw9
đŸ€– Robot Dogs that are just like Real Dogs: https://youtu.be/pifs1DE-Ys4

▶ Subscribe: https://bit.ly/3eLtWLS

✹ Instagram: https://www.instagram.com/Robotix_with_Sina.
📌My Amazon Pick: https://www.amazon.com/shop/iloverobotics?tag=lifeboatfound-20.
————————
A company called Tombot thinks it’s come up with a way to improve the quality of life for seniors facing challenges when it comes to being social: a robotic companion dog that behaves and responds like a real pup, but without all the responsibilities of maintaining a living, breathing animal. The company even enlisted the talented folks at Jim Henson’s Creature Shop to help make the robo-dog look as lifelike as possible. It’s a noble effort, but it also raises lots of questions.

For starters, can robots actually be a good substitute for an animal companion? Replacing people with robots is a massive technological challenge—and one we’re not even close to accomplishing. Every time a multi-million dollar humanoid robot like Boston Dynamics’ ATLAS takes a nasty spill, we’re reminded that they’re nowhere near ready to interact with the average consumer. But robotic animals are a different story. It’s hard not to draw comparisons to a well-trained dog when seeing Boston Dynamics’ SpotMini in action. And even though it still comes with a price tag that soars to hundreds of thousands of dollars, there are robotic pets available on the other end of the affordability spectrum.

I think SENS did this last year, but now AlphaFold2 will make it easier and faster.


Hey, it’s Han from WrySci HX, discussing how breakthroughs in the protein folding problem by AlphaFold 2 from DeepMind could combine with the SENS Research Foundation’s approach of allotopic mitochondrial gene expression to fight aging damage. More below ↓↓↓


It’s no secret that AI is everywhere, yet it’s not always clear when we’re interacting with it, let alone which specific techniques are at play. But one subset is easy to recognize: If the experience is intelligent and involves photos or videos, or is visual in any way, computer vision is likely working behind the scenes.

Computer vision is a subfield of AI, specifically of machine learning. If AI allows machines to “think,” then computer vision is what allows them to “see.” More technically, it enables machines to recognize, make sense of, and respond to visual information like photos, videos, and other visual inputs.
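To make that concrete, here is a minimal sketch of a machine “seeing” a photo with an off-the-shelf image classifier. It assumes a recent torchvision release for the pretrained-weights API, and the image path is a placeholder; this illustrates the general idea rather than any particular product’s pipeline.

```python
# Minimal sketch: a pretrained ImageNet classifier labeling a single photo.
# Assumes torchvision >= 0.13 is installed; "photo.jpg" is a placeholder path.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT              # ImageNet-pretrained weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()               # resize, crop, normalize

image = Image.open("photo.jpg").convert("RGB")  # placeholder image
batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, H, W)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

top = int(probs.argmax())
print(weights.meta["categories"][top], float(probs[top]))
```

Real-world systems layer task-specific models (detection, segmentation, tracking) on top of this basic recognize-and-respond loop, but the core pattern of turning pixels into predictions is the same.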

Over the last few years, computer vision has become a major driver of AI. The technique is used widely in industries like manufacturing, ecommerce, agriculture, automotive, and medicine, to name a few. It powers everything from interactive Snapchat lenses to sports broadcasts, AR-powered shopping, medical analysis, and autonomous driving capabilities. And by 2,022 the global market for the subfield is projected to reach $48.6 billion annually, up from just $6.6 billion in 2015.