
Using the James Webb Space Telescope (JWST) and the Hubble Space Telescope (HST), astronomers from the University of Padua, Italy, and elsewhere have observed a metal-poor globular cluster known as Messier 92. The observations deliver crucial information regarding multiple stellar populations in this cluster. Results were published April 12 on the arXiv pre-print server.

Studies show that almost all globular clusters (GCs) exhibit star-to-star abundance variations of light elements such as helium (He), oxygen (O), nitrogen (N), carbon (C) and sodium (Na). This indicates self-enrichment in GCs and suggests that they are composed of at least two stellar populations.

Located some 26,700 light years away in the constellation of Hercules, Messier 92 (or M92 for short) is a GC with a metallicity ([Fe/H]) of just −2.31 and a mass of about 200,000 solar masses. The cluster, estimated to be 11.5 billion years old, is known to host at least two generations of stars, named 1G and 2G. Previous studies have found that Messier 92 has an extended 1G sequence, which hosts about 30.4% of cluster stars, and two distinct groups of 2G stars (2GA and 2GB).

A team of physicists has illuminated certain properties of quantum systems by observing how their fluctuations spread over time. The research offers a detailed understanding of a complex phenomenon that is foundational to quantum computing, a method that can perform certain calculations significantly more efficiently than conventional computing.

“In an era of [quantum technology] it’s vital to generate a precise characterization of the systems we are building,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and an author of the paper, which is published in the journal Nature Physics. “This work reconstructs the full state of a quantum liquid, consistent with the predictions of a quantum field theory—similar to those that describe the fundamental particles in our universe.”

Sels adds that the breakthrough offers promise for technological advancement.

In 2020, scientists were able to pick up distinct brain signals that had never been observed before. Such findings hint that the brain is a more powerful computational device than previously thought.

Distinct Brain Signals

According to Science Alert, researchers from German and Greek institutes reported a previously unobserved signalling mechanism in the brain's outer cortical cells. They published their discovery in the journal Science.

Moiré patterns occur everywhere. They are created by layering two similar but not identical geometric designs. A common example is the pattern that sometimes emerges when viewing a chain-link fence through a second chain-link fence.

For more than 10 years, scientists have been experimenting with the moiré pattern that emerges when a sheet of graphene is placed between two sheets of hexagonal boron nitride. The resulting moiré pattern has shown tantalizing effects that could vastly improve the electronic devices used to power everything from computers to cars.
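The geometry behind such patterns is compact enough to sketch in a few lines. For two identical lattices with spacing a rotated by a small twist angle θ, the moiré superlattice period is λ = a / (2·sin(θ/2)); the numbers below (graphene's 0.246 nm lattice constant and the ~1.1° "magic angle") are standard values used purely for illustration.

```python
import math

def moire_period(lattice_nm, twist_deg):
    """Moiré superlattice period for two identical lattices rotated
    by a small angle: lambda = a / (2 * sin(theta / 2))."""
    theta = math.radians(twist_deg)
    return lattice_nm / (2 * math.sin(theta / 2))

# Graphene (a ≈ 0.246 nm) twisted by ~1.1° (the "magic angle")
# yields a moiré cell roughly 13 nm across, about 50x the atomic spacing.
print(round(moire_period(0.246, 1.1), 1))
```

The divergence of λ as θ → 0 is why tiny twist angles produce patterns visible at length scales far larger than the atomic lattice.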

A new study led by University at Buffalo researchers, and published in Nature Communications, demonstrated that graphene can live up to its promise in this context.


If I have a visual experience that I describe as a red tomato a meter away, then I am inclined to believe that there is, in fact, a red tomato a meter away, even if I close my eyes. I believe that my perceptions are, in the normal case, veridical—that they accurately depict aspects of the real world. But is my belief supported by our best science? In particular: Does evolution by natural selection favor veridical perceptions? Many scientists and philosophers claim that it does. But this claim, though plausible, has not been properly tested. In this talk, I present a new theorem: Veridical perceptions are never more fit than non-veridical perceptions which are simply tuned to the relevant fitness functions. This entails that perception is not a window on reality; it is more like a desktop interface on your laptop. I discuss this interface theory of perception and its implications for one of the most puzzling unsolved problems in science: the relationship between brain activity and conscious experiences.
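The core claim (that perceptions tuned to fitness can outcompete veridical ones) can be illustrated with a toy simulation. This is not Hoffman's actual theorem or his simulations; the fitness function and payoff numbers below are invented for illustration, under the common assumption that fitness is non-monotonic in a true resource quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitness function: payoff is non-monotonic in the true
# resource quantity (too little or too much is bad, mid-range is best).
def fitness(quantity):
    return np.exp(-((quantity - 50.0) ** 2) / (2 * 15.0 ** 2))

trials = 10_000
truth_payoff = 0.0   # perceives true quantities, prefers "more"
tuned_payoff = 0.0   # perceives only fitness payoffs, prefers higher payoff
for _ in range(trials):
    a, b = rng.uniform(0.0, 100.0, size=2)   # two candidate resources
    truth_payoff += fitness(max(a, b))        # veridical strategy
    tuned_payoff += max(fitness(a), fitness(b))  # fitness-tuned strategy

# Per encounter the tuned perceiver's payoff is >= the veridical one's,
# so it accumulates at least as much total payoff.
print(tuned_payoff > truth_payoff)
```

The point of the toy model is that seeing the truth (the quantity) and seeing what matters for survival (the payoff) come apart as soon as fitness is not a monotonic function of the truth.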

Prof. Donald Hoffman, PhD received his PhD from MIT, and joined the faculty of the University of California, Irvine in 1983, where he is a Professor Emeritus of Cognitive Sciences. He is an author of over 100 scientific papers and three books, including Visual Intelligence, and The Case Against Reality. He received a Distinguished Scientific Award from the American Psychological Association for early career research, the Rustum Roy Award of the Chopra Foundation, and the Troland Research Award of the US National Academy of Sciences. His writing has appeared in Edge, New Scientist, LA Review of Books, and Scientific American and his work has been featured in Wired, Quanta, The Atlantic, and Through the Wormhole with Morgan Freeman. You can watch his TED Talk titled “Do we see reality as it is?” and you can follow him on Twitter @donalddhoffman.


Hidden Markov model (HMM) [1, 2] is a powerful model to describe sequential data and has been widely used in speech signal processing [3-5], computer vision [6-8], longitudinal data analysis [9], social networks [10-12] and so on. An HMM typically assumes the system has K internal states, and the transition of states forms a Markov chain. The system state cannot be observed directly, thus we need to infer the hidden states and system parameters based on observations. Due to the existence of latent variables, the Expectation-Maximisation (EM) algorithm [13, 14] is often used to learn an HMM. The main difficulty is to calculate site marginal distributions and pairwise marginal distributions based on the posterior distribution of latent variables. The forward-backward algorithm was specifically designed to tackle this problem. The derivation of the forward-backward algorithm heavily relies on HMM assumptions and probabilistic relationships between quantities, thus requiring the parameters in the posterior distribution to have explicit probabilistic meanings.
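The quantities in question can be made concrete with a minimal sketch of the scaled forward-backward recursion (this is the textbook algorithm, not the paper's generalisation; the toy matrices at the bottom are invented for illustration):

```python
import numpy as np

def forward_backward(pi, A, B):
    """Scaled forward-backward pass for an HMM.

    pi : (K,) initial state distribution
    A  : (K, K) transition matrix, A[j, k] = p(z_{t+1}=k | z_t=j)
    B  : (T, K) emission likelihoods, B[t, k] = p(x_t | z_t=k)

    Returns site marginals gamma[t, k] = p(z_t=k | x_{1:T}) and
    pairwise marginals xi[t, j, k] = p(z_t=j, z_{t+1}=k | x_{1:T}).
    """
    T, K = B.shape
    alpha = np.zeros((T, K))   # scaled forward messages
    beta = np.zeros((T, K))    # scaled backward messages
    c = np.zeros(T)            # per-step scaling factors (avoid underflow)

    alpha[0] = pi * B[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1]) / c[t + 1]

    gamma = alpha * beta                 # site marginals
    xi = np.empty((T - 1, K, K))         # pairwise marginals
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[t + 1] * beta[t + 1])[None, :] / c[t + 1]
    return gamma, xi

# Toy two-state example with made-up numbers.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])  # likelihoods for 3 observations
gamma, xi = forward_backward(pi, A, B)
```

Note how the derivation leans on the probabilistic reading of pi, A and B; whether the same recursion remains valid when those arrays lose their probabilistic meaning is exactly the question the paper addresses.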

Bayesian HMM [15-22] further imposes priors on the parameters of HMM, and the resulting model is more robust. It has been demonstrated that Bayesian HMM often outperforms HMM in applications. However, the learning process of a Bayesian HMM is more challenging since the posterior distribution of latent variables is intractable. Mean-field theory-based variational inference is often utilised in the E-step of the EM algorithm, which tries to find an optimal approximation of the posterior distribution in a factorised family. The variational inference iteration also involves computing site marginal distributions and pairwise marginal distributions given the joint distribution of system state indicator variables. Existing works [15-23] directly apply the forward-backward algorithm to obtain these values without justification. This is not theoretically sound, and the result is not guaranteed to be correct, since the requirements of the forward-backward algorithm are not met in this case.

In this paper, we prove that the forward-backward algorithm can be applied in more general cases where the parameters have no probabilistic meanings. The first proof converts the general case to an HMM and uses the correctness of the forward-backward algorithm on HMM to prove the claim. The second proof is model-free, which derives the forward-backward algorithm in a totally different way. The new derivation does not rely on HMM assumptions and merely utilises matrix techniques to rewrite the desired quantities. Therefore, this derivation naturally proves that it is unnecessary to make probabilistic requirements on the parameters of the forward-backward algorithm. Specifically, we justify that heuristically applying the forward-backward algorithm in the variational learning of Bayesian HMM is theoretically sound and guaranteed to return the correct result.

How far would you go to keep your mind from failing? Would you go so far as to let a doctor drill a hole in your skull and stick a microchip in your brain?

It’s not an idle question. In recent years neuroscientists have made major advances in cracking the code of memory, figuring out exactly how the human brain stores information and learning to reverse-engineer the process. Now they’ve reached the stage where they’re starting to put all of that theory into practice.

Last month two research teams reported success at using electrical signals, carried into the brain via implanted wires, to boost memory in small groups of test patients. “It’s a major milestone in demonstrating the ability to restore memory function in humans,” says Dr. Robert Hampson, a neuroscientist at Wake Forest School of Medicine and the leader of one of the teams.

As fun as brain-computer interfaces (BCIs) are, the best results tend to come with the major asterisk of requiring the cutting and lifting of a section of the skull in order to implant a Utah array or similar electrode system. A non-invasive alternative consists of electrodes placed on the skin, albeit at reduced resolution. These electrodes are the subject of a recent experiment by [Shaikh Nayeem Faisal] and colleagues in ACS Applied Nano Materials, employing graphene-coated electrodes in an attempt to optimize their performance.

Although external electrodes can be acceptable for basic tasks, such as registering a response to a specific (visual) stimulus or recording an EEG, they can be impractical in general use. Much of this is due to the disadvantages of the ‘wet’ and ‘dry’ varieties: as the name suggests, the former involves an electrically conductive gel.

This gel ensures solid contact, with an impedance of no more than 5–30 kΩ at 50 Hz, whereas dry sensors perform rather poorly at around 200 kΩ at 50 Hz with worse signal-to-noise characteristics, even before accounting for issues such as using the sensor on a hairy scalp, as tends to be the case for most human subjects.

Neuralace™ is a glimpse of what’s possible in the future of BCI.

This patent-pending concept technology is the start of Blackrock’s journey toward whole-brain data capture, with transformative potential for the way neurological disorders are treated. With over 10,000 channels and a flexible lace structure that seamlessly conforms to the brain, Neuralace has potential applications in vision and memory restoration, performance prediction, and the treatment of mental health disorders like depression.

Neuralace is:
Ultra-High Channel Count | Wireless | Customizable | Flexible | Thinner than an eyelash.

The possibilities are endless… Whole-brain data capture | Seamless connectivity | Improved biocompatibility

About Blackrock Neurotech: Blackrock Neurotech is a team of the world’s leading engineers, neuroscientists, and visionaries. Our mission is simple: we want people with neurological disorders to walk, talk, see, hear, and feel again. We’re engineering the next generation of neural implants, including implantable brain-computer interface technology that restores function and independence to individuals with neurological disorders. Our site | https://blackrockneurotech.com

Blackrock’s long-tested NeuroPort® Array, widely considered the gold standard of high-channel neural interfacing, has been used in human BCIs since 2004 and powered many of the field’s most significant milestones. In clinical trials, patients using Blackrock’s BCI have regained tactile function, movement of their own limbs and prosthetics, and the ability to control digital devices, despite diagnoses of paralysis and other neurological disorders.

While Blackrock’s BCI enables patients to execute sophisticated functions without reliance on assistive technologies, next-generation BCIs for areas such as vision and memory restoration, performance prediction, and treatment of mental health disorders like depression will need to interface with more neurons.

Neuralace is designed to capitalize on this need; with 10,000+ channels and the entire scalable system integrated on an extremely flexible lace-structured chip, it could capture orders of magnitude more data than existing electrodes, allowing for an exponential increase in capability and intuitiveness.