
A causal relationship exists among the aging process, organ decay and dysfunction, and the occurrence of various diseases, including cancer. A genetically engineered mouse model, termed EklfK74R/K74R or Eklf (K74R), carrying a mutation at the well-conserved sumoylation site of the hematopoietic transcription factor KLF1/EKLF, has been generated that possesses an extended lifespan and healthy characteristics, including cancer resistance. We show that the high anti-cancer capability of the Eklf (K74R) mice is gender-, age- and genetic background-independent. Significantly, the anti-cancer capability and extended-lifespan characteristics of Eklf (K74R) mice could be transferred to wild-type mice via transplantation of their bone marrow mononuclear cells. Targeted/global gene expression profiling analysis has identified changes in the expression of specific proteins and cellular pathways in the leukocytes of the Eklf (K74R) mice that are in the directions of anti-cancer and/or anti-aging. This study demonstrates the feasibility of developing a novel hematopoietic/blood system for long-term anti-cancer and, potentially, anti-aging applications.

The authors have declared no competing interest.


If I have a visual experience that I describe as a red tomato a meter away, then I am inclined to believe that there is, in fact, a red tomato a meter away, even if I close my eyes. I believe that my perceptions are, in the normal case, veridical—that they accurately depict aspects of the real world. But is my belief supported by our best science? In particular: Does evolution by natural selection favor veridical perceptions? Many scientists and philosophers claim that it does. But this claim, though plausible, has not been properly tested. In this talk, I present a new theorem: Veridical perceptions are never more fit than non-veridical perceptions which are simply tuned to the relevant fitness functions. This entails that perception is not a window on reality; it is more like a desktop interface on your laptop. I discuss this interface theory of perception and its implications for one of the most puzzling unsolved problems in science: the relationship between brain activity and conscious experiences.

Prof. Donald Hoffman, PhD received his PhD from MIT and joined the faculty of the University of California, Irvine in 1983, where he is a Professor Emeritus of Cognitive Sciences. He is an author of over 100 scientific papers and three books, including Visual Intelligence and The Case Against Reality. He received a Distinguished Scientific Award from the American Psychological Association for early career research, the Rustum Roy Award of the Chopra Foundation, and the Troland Research Award of the US National Academy of Sciences. His writing has appeared in Edge, New Scientist, LA Review of Books, and Scientific American, and his work has been featured in Wired, Quanta, The Atlantic, and Through the Wormhole with Morgan Freeman. You can watch his TED Talk titled “Do we see reality as it is?” and follow him on Twitter @donalddhoffman.


ChatGPT’s immense popularity and power make it eye-wateringly expensive to maintain, The Information reports, with OpenAI paying up to $700,000 a day to keep its beefy infrastructure running, based on figures from the research firm SemiAnalysis.

“Most of this cost is based around the expensive servers they require,” Dylan Patel, chief analyst at the firm, told the publication.

The costs could be even higher now, Patel told Insider in a follow-up interview, because these estimates were based on GPT-3, the previous model that powers the older and now free version of ChatGPT.

The hidden Markov model (HMM) [1, 2] is a powerful model for describing sequential data and has been widely used in speech signal processing [3-5], computer vision [6-8], longitudinal data analysis [9], social networks [10-12] and so on. An HMM typically assumes the system has K internal states, and the transition of states forms a Markov chain. The system state cannot be observed directly, so the hidden states and system parameters must be inferred from the observations. Due to the existence of latent variables, the Expectation-Maximisation (EM) algorithm [13, 14] is often used to learn an HMM. The main difficulty is to calculate the site marginal distributions and pairwise marginal distributions under the posterior distribution of the latent variables. The forward-backward algorithm was specifically designed to tackle this problem. Its derivation relies heavily on the HMM assumptions and on probabilistic relationships between quantities, and therefore requires the parameters appearing in the posterior distribution to have explicit probabilistic meanings.
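To make the recursions concrete, here is a minimal sketch of a scaled forward-backward pass for a discrete-emission HMM (the function name and array layout are illustrative, not taken from any of the cited works). It returns the site marginals p(z_t | x_1:T) and the pairwise marginals p(z_t, z_t+1 | x_1:T) needed in the E-step.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Scaled forward-backward pass for a discrete-emission HMM.

    pi  : (K,)   initial state weights
    A   : (K, K) transition matrix, A[i, j] ~ p(z_t = j | z_{t-1} = i)
    B   : (K, M) emission matrix,   B[k, m] ~ p(x_t = m | z_t = k)
    obs : (T,)   observed symbol indices

    Returns site marginals gamma (T, K) and pairwise marginals xi (T-1, K, K).
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))   # scaled forward messages
    beta = np.zeros((T, K))    # scaled backward messages
    c = np.zeros(T)            # per-step scaling factors

    # Forward pass.
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    # Backward pass.
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]

    # Site marginals p(z_t | x_1:T) and pairwise marginals p(z_t, z_{t+1} | x_1:T).
    gamma = alpha * beta
    xi = np.zeros((T - 1, K, K))
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi[t] /= xi[t].sum()
    return gamma, xi
```

The scaling factors keep the messages numerically stable for long sequences; their product equals the likelihood of the observation sequence.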

The Bayesian HMM [15-22] further imposes priors on the parameters of the HMM, and the resulting model is more robust. It has been demonstrated that the Bayesian HMM often outperforms the plain HMM in applications. However, learning a Bayesian HMM is more challenging because the posterior distribution of the latent variables is intractable. Mean-field variational inference is therefore often used in the E-step of the EM algorithm; it seeks an optimal approximation of the posterior distribution within a factorised family. The variational inference iteration also involves computing site marginal distributions and pairwise marginal distributions given the joint distribution of the state indicator variables. Existing works [15-23] directly apply the forward-backward algorithm to obtain these values without justification. This is not theoretically sound a priori, and the result is not guaranteed to be correct, since the requirements of the forward-backward algorithm are not met in this setting.
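In the standard Dirichlet-prior setup, the quantities that the variational E-step feeds into the forward-backward recursions are exponentiated expected log-parameters under the variational posteriors; these are non-negative but do not sum to one, which is exactly why the probabilistic requirements of the classical derivation are not met. The sketch below illustrates this (the Dirichlet counts are hypothetical placeholders, not values from the cited works).

```python
import numpy as np
from scipy.special import digamma

# Hypothetical Dirichlet variational posteriors over the initial distribution,
# each row of the transition matrix, and each row of the emission matrix.
W_pi = np.array([1.5, 2.0, 0.5])
W_A = np.array([[3.0, 1.0, 1.0],
                [1.0, 4.0, 2.0],
                [2.0, 1.0, 5.0]])
W_B = np.array([[2.0, 1.0, 1.0, 3.0],
                [1.0, 5.0, 1.0, 1.0],
                [1.0, 1.0, 4.0, 2.0]])

# exp(E[log theta]) under a Dirichlet posterior is exp(psi(alpha_k) - psi(sum(alpha))).
pi_tilde = np.exp(digamma(W_pi) - digamma(W_pi.sum()))
A_tilde = np.exp(digamma(W_A) - digamma(W_A.sum(axis=1, keepdims=True)))
B_tilde = np.exp(digamma(W_B) - digamma(W_B.sum(axis=1, keepdims=True)))

# Each row sums to strictly less than 1, so these are not transition or
# emission probabilities, yet the E-step treats them as forward-backward inputs.
print(A_tilde.sum(axis=1))
```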

In this paper, we prove that the forward-backward algorithm can be applied in more general cases where the parameters have no probabilistic meaning. The first proof converts the general case to an HMM and uses the correctness of the forward-backward algorithm on HMMs to establish the claim. The second proof is model-free: it derives the forward-backward algorithm in an entirely different way, relying not on HMM assumptions but merely on matrix techniques to rewrite the desired quantities. This derivation therefore shows that no probabilistic requirements need to be placed on the parameters of the forward-backward algorithm. In particular, it justifies that heuristically applying the forward-backward algorithm in the variational learning of a Bayesian HMM is theoretically sound and guaranteed to return the correct result.
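As a quick numerical sanity check consistent with this claim (a sketch, not the paper's proof), one can run the forward-backward sketch above on arbitrary non-negative, unnormalised inputs and compare its output against marginals obtained by brute-force enumeration of the chain-structured distribution that those inputs define.

```python
import itertools
import numpy as np

def brute_force_marginals(pi, A, B, obs):
    """Enumerate every state path of the chain-structured distribution
    q(z) proportional to pi[z_1] B[z_1, x_1] * prod_t A[z_{t-1}, z_t] B[z_t, x_t]
    and compute its site and pairwise marginals directly."""
    T, K = len(obs), len(pi)
    gamma = np.zeros((T, K))
    xi = np.zeros((T - 1, K, K))
    Z = 0.0
    for path in itertools.product(range(K), repeat=T):
        w = pi[path[0]] * B[path[0], obs[0]]
        for t in range(1, T):
            w *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
        Z += w
        for t in range(T):
            gamma[t, path[t]] += w
        for t in range(T - 1):
            xi[t, path[t], path[t + 1]] += w
    return gamma / Z, xi / Z

rng = np.random.default_rng(0)
K, M, T = 3, 4, 6
# Unnormalised, non-negative inputs with no probabilistic meaning.
pi = rng.random(K) * 5.0
A = rng.random((K, K)) * 2.0
B = rng.random((K, M)) * 3.0
obs = rng.integers(M, size=T)

# Reuses the forward_backward sketch defined earlier in this section.
gamma_fb, xi_fb = forward_backward(pi, A, B, obs)
gamma_bf, xi_bf = brute_force_marginals(pi, A, B, obs)
print(np.allclose(gamma_fb, gamma_bf), np.allclose(xi_fb, xi_bf))  # both should print True
```

For small K and T the enumeration is feasible, and the two computations should agree to machine precision, matching the paper's assertion that the recursions do not require normalised parameters.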


It’s a step that could one day lead to advances that boost quality of life for many people: giving amputees and those with spinal injuries control of advanced prosthetics, stimulating the sacral nerve to restore bladder control, stimulating the cervical vagus nerve to treat epilepsy, and providing deep brain stimulation as a possible treatment for Parkinson’s disease.

How far would you go to keep your mind from failing? Would you go so far as to let a doctor drill a hole in your skull and stick a microchip in your brain?

It’s not an idle question. In recent years neuroscientists have made major advances in cracking the code of memory, figuring out exactly how the human brain stores information and learning to reverse-engineer the process. Now they’ve reached the stage where they’re starting to put all of that theory into practice.

Last month two research teams reported success at using electrical signals, carried into the brain via implanted wires, to boost memory in small groups of test patients. “It’s a major milestone in demonstrating the ability to restore memory function in humans,” says Dr. Robert Hampson, a neuroscientist at Wake Forest School of Medicine and the leader of one of the teams.