
Chinese lab crafts mutant COVID-19 strain with 100% kill rate in ‘humanized’ mice: ‘Surprisingly’ rapid death

Factor this in along with the weird stories of secret labs in places like California.


GX_P2V had infected the lungs, bones, eyes, tracheas and brains of the dead mice; the brain infection was severe enough to ultimately cause the animals' deaths.

In the days before their deaths, the mice had quickly lost weight, exhibited a hunched posture, and moved extremely sluggishly.

Most eerie of all, their eyes turned completely white the day before they died.

Minds in Machines: Comparing Biological and Synthetic Intelligence

The incredible explosion in the power of artificial intelligence is evident in daily headlines proclaiming big breakthroughs. What are the remaining differences between machine and human intelligence? Could we simulate a brain on current computer hardware if we could write the software? What are the latest advancements in the world’s largest brain model? Participate in the discussion about what AI has done and how far it has yet to go, while discovering new technologies that might allow it to get there.

ABOUT THE SPEAKERS

CHRIS ELIASMITH is the Director of the Centre for Theoretical Neuroscience (CTN) at the University of Waterloo. The CTN brings together researchers across many faculties who are interested in computational and theoretical models of neural systems. Dr. Eliasmith was recently elected to the new Royal Society of Canada College of New Scholars, Artists and Scientists, one of only 90 Canadian academics to receive this honour. He is also a Canada Research Chair in Theoretical Neuroscience. His book, ‘How to Build a Brain’ (Oxford, 2013), describes the Semantic Pointer Architecture for constructing large-scale brain models. His team built what is currently the world’s largest functional brain model, ‘Spaun’, the first to demonstrate realistic behaviour under biological constraints. This ground-breaking work was published in Science (November, 2012), has been featured by CNN, BBC, Der Spiegel, Popular Science, National Geographic and CBC among many other media outlets, and was awarded the NSERC Polanyi Prize for 2015.

PAUL THAGARD is a philosopher, cognitive scientist, and author of many interdisciplinary books. He is Distinguished Professor Emeritus of Philosophy at the University of Waterloo, where he founded and directed the Cognitive Science Program. He is a graduate of the Universities of Saskatchewan, Cambridge, Toronto (PhD in philosophy) and Michigan (MS in computer science). He is a Fellow of the Royal Society of Canada, the Cognitive Science Society, and the Association for Psychological Science. The Canada Council has awarded him a Molson Prize (2007) and a Killam Prize (2013). His books include: The Cognitive Science of Science: Explanation, Discovery, and Conceptual Change (MIT Press, 2012); The Brain and the Meaning of Life (Princeton University Press, 2010); Hot Thought: Mechanisms and Applications of Emotional Cognition (MIT Press, 2006); and Mind: Introduction to Cognitive Science (MIT Press, 1996; second edition, 2005). Oxford University Press will publish his 3-book Treatise on Mind and Society in early 2019.

Date/Time:
Wednesday, October 17, 2018 — 7:30pm.
Location:
Vanstone Lecture Hall, St. Jerome’s University Academic Centre.

Re-frame of mind: Do our brains have a built-in sense of ‘grammar’?

In a new paper published in Nature Neuroscience, Yale Department of Psychiatry’s George Dragoi, MD, PhD, describes how the brain forms a cellular framework early in development which helps to define who we are and how we process experiences.


Based on years of research, Yale’s George Dragoi argues that our brains develop a cellular template soon after birth that defines how we perceive the world.

Redefining Brain Function: Physicists Overturn Long-Standing Assumptions

Recent research suggests that a number of neuronal characteristics, traditionally believed to stem from the cell body or soma, may actually originate from processes in the dendrites. This discovery has significant implications for the study of degenerative diseases and for understanding the different states of brain activity during sleep and wakefulness.

The brain is an intricate network comprising billions of neurons. Each neuron’s cell body, or soma, engages in simultaneous communication with thousands of other neurons through its synapses. These synapses act as links, facilitating the exchange of information. Additionally, each neuron receives incoming signals through its dendritic trees, which are highly branched and extend for great lengths, resembling the structure of a complex and vast arboreal network.

For the last 75 years, a core hypothesis of neuroscience has been that the basic computational element of the brain is the neuronal soma, with the long and ramified dendritic trees serving merely as cables that enable the soma to collect incoming signals from its thousands of connecting neurons. This long-standing hypothesis has now been called into question.
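The soma-centric view described above is often formalized as a leaky integrate-and-fire model, in which the dendrites simply deliver current and all the computation happens at the soma: voltage leaks toward rest, input drives it up, and a spike fires when a threshold is crossed. Here is a minimal illustrative sketch of that classical picture; the parameter values are arbitrary, chosen only for demonstration, and are not from the research discussed here:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the classical picture in
# which dendrites merely deliver current and all computation occurs at the soma.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Integrate a list of input currents (one per time step);
    return the membrane-voltage trace and the spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        v += (dt / tau) * ((v_rest - v) + i_in)
        if v >= v_thresh:      # threshold crossed -> emit a spike
            spikes.append(step)
            v = v_reset        # reset the membrane after the spike
        trace.append(v)
    return trace, spikes

# Constant suprathreshold drive for 100 steps: the neuron fires periodically.
trace, spikes = simulate_lif([20.0] * 100)
```

The finding reported above suggests this abstraction is too coarse: some of the computation attributed to the soma in models like this may actually take place in the dendritic trees themselves.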

I’ve Researched Time for 15 Years—Here’s How My Perception of It Has Changed

Time is one of those things that most of us take for granted. We spend our lives portioning it into work-time, family-time, and me-time. Rarely do we sit and think about how and why we choreograph our lives through this strange medium. A lot of people only appreciate time when they have an experience that makes them realize how limited it is.

My own interest in time grew from one of those “time is running out” experiences. Eighteen years ago, while at university, I was driving down a country lane when another vehicle strayed onto my side of the road and collided with my car. I can still vividly remember the way in which time slowed down, grinding to a near halt, in the moments before my car impacted with the oncoming vehicle. Time literally seemed to stand still. The elasticity of time and its ability to wax and wane in different situations shone out like never before. From that moment I was hooked.

I have spent the last 15 years trying to answer questions such as: Why does time slow down in near-death situations? Does time really pass more quickly as you get older? How do our brains process time?

This Graphene-Based Brain Implant Can Peer Deep Into the Brain From Its Surface

Finding ways to reduce the invasiveness of brain implants could greatly expand their potential applications. A new device tested in mice that sits on the brain’s surface—but can still read activity deep within—could lead to safer and more effective ways to read neural activity.

There are already a variety of technologies that allow us to peer into the inner workings of the brain, but they all come with limitations. Minimally invasive approaches include functional MRI, where an MRI scanner is used to image changes in blood flow in the brain, and EEG, where electrodes placed on the scalp are used to pick up the brain’s electrical signals.

The former requires the patient to sit in an MRI machine though, and the latter is too imprecise for most applications. The gold standard approach involves inserting electrodes deep into brain tissue to obtain the highest quality readouts. But this requires a risky surgical procedure, and scarring and the inevitable shifting of the electrodes can lead to the signal degrading over time.

Top 10 Neuroscience News Articles of 2023

Summary: The year 2023 witnessed groundbreaking discoveries in neuroscience, offering unprecedented insights into the human brain.

From animal-free brain organoids to the effects of optimism on cognitive skills, these top 10 articles have unveiled the mysteries of the mind.

Research revealed the risks of dietary trends like “dry scooping” and the impact of caffeine on brain plasticity. Additionally, the year showcased the potential of mushroom compounds for memory enhancement and the unexpected influence of virtual communication on brain activity.

Architecture All Access: Neuromorphic Computing Part 2

In Neuromorphic Computing Part 2, we dive deeper into mapping neuromorphic concepts into chips built from silicon. With the state of modern neuroscience and chip design, the tools the industry is working with are simply too different from biology. Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab, explains the process and challenge of creating a chip that can replicate some of the form and function of biological neural networks.

Mike’s leadership in this specialized field allows him to share the latest insights into the promising future of neuromorphic computing here at Intel. Let’s explore nature’s circuit design, refined over a billion years of evolution, alongside today’s CMOS semiconductor manufacturing technology supporting incredible computing efficiency, speed and intelligence.

Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.

Jump to Chapters:
0:00 Welcome to Neuromorphic Computing.
0:30 How to architect a chip that behaves like a brain.
1:29 Advantages of CMOS semiconductor manufacturing technology.
2:18 Objectives in our design toolbox.
2:36 Sparse distributed asynchronous communication.
4:51 Reaching the level of efficiency and density of the brain.
6:34 Loihi 2, a fully digital chip implemented in a standard CMOS process.
6:57 Asynchronous vs Synchronous.
7:54 Function of the core’s memory.
8:13 Spikes and Table Lookups.
9:24 Loihi learning process.
9:45 Learning rules, input and the network.
10:12 The challenge of architecture and programming today.
10:45 Recent publications to read.
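The “sparse distributed asynchronous communication” and “spikes and table lookups” ideas in the chapters above can be sketched in a few lines. This is an illustrative toy, not Intel’s actual Loihi implementation: the synapse table and weights below are hypothetical. The key point is that spikes are processed as discrete events, so only the neurons actually touched by a spike do any work:

```python
# Event-driven spike delivery: each incoming spike ID is looked up in a
# synapse table and its weight accumulated onto target neurons. Neurons
# that receive no spikes cost nothing -- the "sparse, asynchronous" idea.
from collections import defaultdict

# Hypothetical synapse table: spike source id -> list of (target id, weight).
synapse_table = {
    0: [(10, 0.5), (11, -0.2)],
    1: [(10, 0.3)],
    2: [(12, 0.9)],
}

def deliver_spikes(spike_events, table):
    """Accumulate weighted input for each target neuron hit by an event."""
    accumulated = defaultdict(float)
    for src in spike_events:              # process only the events that occur
        for target, weight in table.get(src, []):
            accumulated[target] += weight
    return dict(accumulated)

inputs = deliver_spikes([0, 1, 0], synapse_table)
# Neuron 10 receives 0.5 + 0.3 + 0.5 = 1.3; neuron 12 receives nothing.
```

Contrast this with a dense matrix-vector multiply, which spends the same effort on every connection at every time step whether or not anything fired.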

Architecture all access season 2 playlist — • architecture all access season 2

Intel Wireless Technology — https://intel.com/wireless.

Karlheinz Meier — Neuromorphic Computing — Extreme Approaches to weak and strong scaling

Computer simulations of complex systems provide an opportunity to study their time evolution under user control. Simulations of neural circuits are an established tool in computational neuroscience. Through systematic simplification on spatial and temporal scales they provide important insights into the time evolution of networks, which in turn leads to an improved understanding of brain functions like learning, memory or behavior. Simulations of large networks exploit the concept of weak scaling, where the massively parallel biological network structure is naturally mapped onto computers with very large numbers of compute nodes. However, this approach suffers from fundamental limitations. The power consumption is approaching prohibitive levels and, more seriously, the bridging of time scales from milliseconds to years, present in the neurobiology of plasticity, learning and development, is inaccessible to classical computers. In the keynote I will argue that these limitations can be overcome by extreme approaches to weak and strong scaling based on brain-inspired computing architectures.
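The weak/strong scaling distinction in the abstract is commonly formalized by Gustafson’s and Amdahl’s laws respectively. The sketch below uses an assumed 5% serial fraction purely for illustration; it is not a claim about any particular simulator:

```python
# Toy comparison of strong vs weak scaling for a parallel simulation
# with serial fraction s and parallel fraction (1 - s).
def strong_scaling_speedup(s, n):
    """Amdahl's law: fixed problem size spread over n compute nodes.
    Speedup saturates at 1/s no matter how many nodes are added."""
    return 1.0 / (s + (1.0 - s) / n)

def weak_scaling_speedup(s, n):
    """Gustafson's law: problem size grows with n (e.g. more neurons
    per added node), so useful work keeps growing with node count."""
    return s + (1.0 - s) * n

# With a 5% serial fraction, strong scaling can never exceed 20x,
# while weak scaling on 1024 nodes yields roughly 973x more work done.
strong = strong_scaling_speedup(0.05, 1024)
weak = weak_scaling_speedup(0.05, 1024)
```

This is why large neural simulations lean on weak scaling, and also why it cannot help with the time-scale problem the abstract raises: adding nodes buys a bigger network, not a faster march through milliseconds-to-years of simulated plasticity.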

Bio: Karlheinz Meier received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25 years of experience in experimental particle physics with contributions to 4 major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. For the ATLAS experiment at the Large Hadron Collider (LHC) he led a 15-year effort to design, build and operate an electronic data processing system providing on-the-fly data reduction by 3 orders of magnitude, enabling among other achievements the discovery of the Higgs boson. Following scientific staff positions at DESY and CERN he was appointed full professor of physics at Heidelberg University in 1992. In Heidelberg he co-founded the Kirchhoff-Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA). Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field, including the conception of a description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems, and wafer-scale integration for their implementation. He led 2 major European initiatives, FACETS and BrainScaleS, that both demonstrated the rewarding interdisciplinary collaboration of neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP) that was approved in 2013. In the HBP he leads the subproject on neuromorphic computing with the goal of establishing brain-inspired computing paradigms as tools for neuroscience and generic methods for inference from large data volumes.
