
Hi all.


Until now, chip-makers have been piggybacking on the renowned Moore’s Law to deliver successive generations of chips that pack more compute capability while consuming less power. Those advances are now slowly grinding to a halt. Researchers around the world are proposing alternative architectures to keep producing systems that are faster and more energy efficient. This article discusses those alternatives and the reasons why one of them may have an edge over the others in keeping the chip design industry from stalling.

Moore’s law, or to put it differently, the savior of chip-makers worldwide, was coined by Dr. Gordon Moore, co-founder of Intel Corp., in 1965. The law states that the number of transistors on a chip doubles roughly every two years. But why the savior of chip-makers? The law was so powerful during the semiconductor boom that “people would auto-buy the next latest and greatest computer chip, with full confidence that it would be better than what they’ve got,” said former Intel engineer Robert P. Colwell. Back in the day, writing a program with poor performance was not a serious problem, because the programmer knew that Moore’s law would ultimately save them.
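To make the doubling concrete, here is a minimal arithmetic sketch of the law. Only the 1971 Intel 4004 baseline (roughly 2,300 transistors) is a real data point; the projected counts are the idealized two-year doubling itself, not measured industry figures.

```python
# Minimal sketch: Moore's law as a doubling every two years.
# The 4004 baseline is real; the projections are the idealized law.

def projected_transistors(base_count: int, base_year: int, year: int) -> float:
    """Project a transistor count assuming a doubling every two years."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        count = projected_transistors(2_300, 1971, year)
        print(f"{year}: ~{count:,.0f} transistors")
```

Fifty years of unbroken doubling turns a few thousand transistors into tens of billions, which is why even a gradual slowdown of the law is so consequential.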

The problem we are facing today is that the law is nearly dead! Or, so as not to offend Moore’s fans, it is rapidly slowing down, as Henry Samueli, chief technology officer of Broadcom, puts it.

A new mouse study highlights the proteins responsible for LC3-associated endocytosis (LANDO), an autophagy-related process involved in degrading β-amyloid, the protein principally associated with Alzheimer’s disease.

Proteostasis

Proteins in the human brain can form misfolded, non-functional, and toxic clumps known as aggregates. Preventing these aggregates from forming, and removing them when they do, is a natural function of the human body, and it is known as proteostasis. However, as we age, this function degrades, and loss of proteostasis is one of the hallmarks of aging. The resulting accumulation of aggregates leads to several deadly diseases, one of which is Alzheimer’s.

Flashback to 2 years ago…


Scientists from Maastricht University have developed a method to look into a person’s brain and read out who has spoken to that person and what was said. With the help of neuroimaging and data-mining techniques, the researchers mapped the brain activity associated with the recognition of speech sounds and voices.

In their Science article “‘Who’ is Saying ‘What’? Brain-Based Decoding of Human Voice and Speech,” the four authors demonstrate that speech sounds and voices can be identified by means of a unique ‘neural fingerprint’ in the listener’s brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging (fMRI). With the help of data-mining methods, the researchers developed an algorithm to translate this brain activity into unique patterns that identify a speech sound or a voice. The acoustic characteristics of the vocal cord vibrations were found to shape the corresponding neural activity patterns.
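The authors’ actual decoding pipeline is not reproduced here; purely as an illustration of the idea, the sketch below trains a linear classifier to recover vowel identity from simulated voxel patterns. The data, dimensions, and noise levels are all invented assumptions, not values from the study.

```python
# Illustrative sketch (not the study's pipeline): decode which vowel
# (/a/, /i/, /u/) was heard from a pattern of fMRI voxel responses.
# All voxel data below is synthetic.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N_TRIALS_PER_VOWEL, N_VOXELS = 30, 200  # assumed sizes, for illustration only

# Each vowel gets its own mean voxel pattern (a "neural fingerprint"),
# and each trial is that fingerprint plus measurement noise.
fingerprints = rng.normal(size=(3, N_VOXELS))
X = np.vstack([
    fingerprints[v] + rng.normal(scale=2.0, size=(N_TRIALS_PER_VOWEL, N_VOXELS))
    for v in range(3)
])
y = np.repeat(["a", "i", "u"], N_TRIALS_PER_VOWEL)

# Cross-validated accuracy of decoding vowel identity from the patterns.
clf = LinearSVC(max_iter=10_000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

The same setup, with speaker labels in place of vowel labels, sketches the “who is saying what” half of the experiment.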

The story behind the writing of Frankenstein is famous. In 1816, Mary Shelley and Percy Bysshe Shelley, summering near Lake Geneva in Switzerland, were challenged by Lord Byron to take part in a competition to write a frightening tale. Mary, only 18 years old, later had a waking dream of sorts where she imagined the premise of her book:

When I placed my head on my pillow, I did not sleep, nor could I be said to think. My imagination, unbidden, possessed and guided me, gifting the successive images that arose in my mind with a vividness far beyond the usual bounds of reverie. I saw — with shut eyes, but acute mental vision, — I saw the pale student of unhallowed arts kneeling beside the thing he had put together. I saw the hideous phantasm of a man stretched out, and then, on the working of some powerful engine, show signs of life, and stir with an uneasy, half vital motion.

This became the kernel of Frankenstein; or, The Modern Prometheus, the novel first published in London in 1818, with only 500 copies put in circulation.

Researchers from Lund University, together with the pharmaceutical company Roche, have developed a method based on a new blood marker capable of detecting whether or not a person has Alzheimer’s disease. If the method is approved for clinical use, the researchers hope eventually to see it used as a diagnostic tool in primary healthcare. This autumn, they will start a trial in primary healthcare to test the technique.

Currently, a mainstay of Alzheimer’s diagnostics is the identification of abnormal accumulations of the substance beta-amyloid, which can be detected either in a spinal fluid sample or through brain imaging using a PET scanner.

“These are expensive methods that are only available in specialist healthcare. In research, we have therefore long been searching for simpler diagnostic tools,” says Sebastian Palmqvist, associate professor at the unit for clinical memory research at Lund University, physician at Skåne University Hospital and lead author of the study.

All human experience is rooted in the brain, but we just barely understand how it works. That’s partially because it’s hard to study: Scientists can’t just run experiments on living brains, and experiments on animal brains don’t always translate to humans. That’s why researchers developed the brain organoid, an artificially grown, three-dimensional cluster of human neurons that faithfully mimics brain development — and, as Japanese scientists reported Wednesday in Cell Stem Cell, the neural activity of a living brain as well.

Neurons in a living brain respond to stimuli by “firing” off electrical impulses, which they use to communicate with one another and with other parts of the body. The scientists behind the new paper discovered that the brain organoids they grew from scratch in their lab also started to exhibit synchronized activity, just like neurons in an actual brain. That team included first and co-corresponding author Hideya Sakaguchi, Ph.D., a postdoctoral fellow at Kyoto University currently at the Salk Institute.

“I was very excited to see some of the neurons activated at the same time robustly at first,” Sakaguchi, who did the first of his experiments in December 2016, tells Inverse. “Neurons first show individual activities, but as they form networks and connections between other neurons, they start to show synchronized activities.”
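To give “synchronized activities” a quantitative reading, here is a conceptual sketch under assumed parameters, not the analysis from the paper: it simulates calcium-imaging-style traces in which neurons share occasional population-wide bursts, then scores synchrony as the mean pairwise correlation across neurons.

```python
# Conceptual sketch (not the paper's analysis): measure synchrony in
# simulated activity traces as mean pairwise correlation.

import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_frames = 20, 1000  # assumed sizes, for illustration

# Shared network bursts: population-wide events plus per-neuron noise.
burst = (rng.random(n_frames) < 0.02).astype(float)  # shared burst times
traces = (burst[None, :] * rng.uniform(0.5, 1.5, (n_neurons, 1))
          + 0.3 * rng.normal(size=(n_neurons, n_frames)))

# Mean pairwise correlation as a crude synchrony index: near 0 for
# independent neurons, rising as shared bursts dominate the traces.
corr = np.corrcoef(traces)
sync_index = corr[np.triu_indices(n_neurons, k=1)].mean()
print(f"mean pairwise correlation: {sync_index:.2f}")
```

Neurons firing only individually would keep this index near zero; as networks form and bursts become shared, it rises, matching Sakaguchi’s description of the transition.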