
The Internal Ramp Theory for the Great Pyramid of Egypt is one of the most interesting ideas ever proposed for its construction. French architect Jean-Pierre Houdin has spent more than 20 years developing and refining this idea.

In October 2022, Houdin published an update to his theory that incorporates the ScanPyramids findings from the past six years. The ScanPyramids ‘Big Void’ is an intriguing clue that Houdin may be right that the Grand Gallery was used as a counterweight ramp for hauling the largest pyramid stones.

The ‘Big Void’ may be another Grand Gallery-like space that could have been used for the same purpose. Institutional Egyptology remains unreceptive both to Houdin’s publications and to the highly confident results of the ScanPyramids mission.

Memories are stored across many different areas of the brain as networks of neurons called engrams. In addition to collecting information about incoming stimuli, these engrams capture emotional information. In a new study, Steve Ramirez, a neuroscientist at Boston University, discovered where the brain stores positive and negative memories and uncovered hundreds of markers that differentiate positive-memory neurons from negative-memory neurons.

In 2019, Ramirez found evidence that good and bad memories are stored in different regions of the hippocampus, a cashew-shaped structure that holds sensory and emotional information necessary for forming and retrieving memories. The top part of the hippocampus activated when mice underwent enjoyable experiences, but the bottom region activated when they had negative experiences.

His team also found that they could manipulate memories by activating these regions. When he and his team activated the top area of the hippocampus, bad memories became less traumatic. Conversely, when they activated the bottom part, mice exhibited signs of long-lasting anxiety-related behavioral changes. Ramirez suspected this difference in effect arose because the neurons that store good and bad memories have different functions beyond simply encoding positive and negative emotions. However, before he could unravel this difference, he needed to identify which cells were storing good and bad memories. The results were published in the journal Communications Biology.

Researchers at the University of Colorado Anschutz have found that overexpression of a gene improves learning and memory in Alzheimer’s disease.

Alzheimer’s disease attacks the brain, causing a decline in mental ability that worsens over time. It is the most common form of dementia, accounting for 60 to 80 percent of dementia cases. There is currently no cure for Alzheimer’s disease, but medications can help ease its symptoms.

Nothing in biology makes sense except in the light of evolution. The gradualism of evolution has explained and dissolved life’s mysteries—life’s seemingly irreducible complexity and the illusion that living things possess some sort of mysterious vitalizing essence. So, too, evolution is likely to be key to demystifying the seemingly inexplicable, ethereal nature of consciousness.

First, what does it even mean to say that “Nothing in biology makes sense except in the light of evolution”? If the chosen topic is human consciousness, Martin Luther King and Mother Teresa come quickly to mind. But then what does the term “evolution” contribute to the discussion of the origin of human consciousness? Is it something useful or something theorists are stuck with, come what may?

Science theories should make predictions. Who predicted either King or Mother Teresa?

Russian scientists from the University of Science and Technology MISIS and Bauman Moscow State Technical University were among the first in the world to implement a two-qubit operation using superconducting fluxonium qubits. Fluxoniums have longer lifetimes and allow higher-precision operations, so they can run longer algorithms. The research, which brings the creation of a quantum computer closer to reality, has been published in npj Quantum Information.

One of the main questions in the development of a universal quantum computer is the choice of physical platform, namely which quantum objects are best suited for building quantum-computer processors: electrons, photons, ions, superconductors, or other “quantum transistors.” Superconducting qubits have become one of the most successful platforms for quantum computing during the past decade. To date, the most commercially successful superconducting qubits are transmons, which are actively investigated and used in the quantum developments of Google, IBM and other world-leading laboratories.

The main task of a qubit is to store and process information without errors. Accidental noise and even mere observation can lead to the loss or alteration of data. The stable operation of superconducting qubits often requires extremely low ambient temperatures, close to zero kelvin, hundreds of times colder than the temperature of open space.
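Hardware details of fluxonium circuits aside, the abstract state-vector sketch below (Python/NumPy, purely illustrative and not a model of the MISIS experiment) shows what a two-qubit operation does: a controlled-Z gate flips the sign of the |11⟩ amplitude of two superposed qubits, which is exactly the kind of entangling step that single-qubit gates alone cannot provide.

```python
# Hardware-agnostic illustration of a two-qubit operation:
# prepare both qubits in the |+> superposition, apply a controlled-Z gate,
# and inspect the resulting state vector.
import numpy as np

# Single-qubit basis state |0> and the Hadamard gate
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Two-qubit state |+>|+> built as a tensor product
plus = H @ ket0
state = np.kron(plus, plus)

# Controlled-Z gate: flips the phase of the |11> component only
CZ = np.diag([1, 1, 1, -1]).astype(complex)
entangled = CZ @ state

print(np.round(entangled, 3))
# Amplitudes 0.5, 0.5, 0.5, -0.5: the sign flip on |11> entangles the
# two qubits, which is the essence of a two-qubit operation.
```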

The methods currently used to correct systematic issues in NLP models are either fragile or time-consuming and prone to shortcuts. Humans, on the other hand, frequently correct one another using natural language. This inspired recent research on natural language patches: declarative statements that enable developers to deliver corrective feedback at the appropriate level of abstraction, either by overriding the model’s behavior or by supplying information the model may be missing.

Rather than relying solely on labeled examples, a growing body of research uses language to provide instructions, supervision, and even inductive biases to models, such as building neural representations from language descriptions (Andreas et al., 2018; Murty et al., 2020; Mu et al., 2020), or language-based zero-shot learning (Brown et al., 2020; Hanjie et al., 2022; Chen et al., 2021). However, language has yet to be properly utilized for corrective purposes, where a user interacts with an existing model in order to improve it.

The neural language patching model has two heads: a gating head that determines whether a patch should be applied and an interpreter head that predicts outcomes based on the information in the patch. The model is trained in two stages: first on a labeled dataset, then through task-specific fine-tuning in which a set of patch templates is used to create patches and synthetic labeled examples.
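To make the two-head design concrete, here is a minimal PyTorch sketch of the idea. It is not the authors’ implementation: the stand-in encoder, the additive fusion of input and patch embeddings, and all names and dimensions are hypothetical placeholders. The gating head estimates whether the patch is relevant to the input, the interpreter head predicts labels conditioned on the patch, and the two predictions are softly mixed by the gate.

```python
import torch
import torch.nn as nn

class LanguagePatchModel(nn.Module):
    """Illustrative sketch: a shared encoder with a gating head (does this
    patch apply?) and an interpreter head (predict the label given the patch)."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # Stand-in encoder; in practice this would be a pretrained text encoder.
        self.encoder = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.base_head = nn.Linear(hidden_dim, num_labels)         # prediction without the patch
        self.gate_head = nn.Linear(hidden_dim, 1)                  # probability the patch applies
        self.interpreter_head = nn.Linear(hidden_dim, num_labels)  # prediction given the patch

    def forward(self, input_emb, patch_emb):
        h_input = self.encoder(input_emb)              # task input alone
        h_joint = self.encoder(input_emb + patch_emb)  # crude fusion of input and patch

        gate = torch.sigmoid(self.gate_head(h_joint))  # how relevant is the patch?
        base_logits = self.base_head(h_input)
        patched_logits = self.interpreter_head(h_joint)

        # Softly interpolate between patched and unpatched predictions.
        return gate * patched_logits + (1 - gate) * base_logits

# Toy usage with random vectors standing in for encoded text
model = LanguagePatchModel(hidden_dim=64, num_labels=2)
x, p = torch.randn(4, 64), torch.randn(4, 64)
print(model(x, p).shape)  # torch.Size([4, 2])
```

In the two-stage training described above, both heads would then be fine-tuned on the template-generated patches and synthetic labeled examples.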

This video is about “Boost Your Brain 150% With An AI Chip From Elon Musk! In 2023.”

Remember the movie Limitless? Now you can do it with a computer chip.

How often have you thought of eating all your notes so you can memorize them? Or how many times have you wished you could read other people’s minds or take a photo with your eyes? Thanks to Elon Musk, some of these childhood wishes might come to life.

For this, you will only need a small brain chip!

Get a Wonderful Person Tee: https://teespring.com/stores/whatdamath.
More cool designs are on Amazon: https://amzn.to/3wDGy2i.
Alternatively, PayPal donations can be sent here: http://paypal.me/whatdamath.

Hello and welcome! My name is Anton and in this video, we will talk about incredible discoveries about the human brain.
Links:
https://www.pnas.org/doi/full/10.1073/pnas.2204900119
https://www.nature.com/articles/s41586-022-05277-w.
https://en.wikipedia.org/wiki/ARHGAP11B
https://www.scienceinpublic.com.au/corticallabs.
https://www.nature.com/articles/s41586-019-1654-9
Synthetic cells: https://youtu.be/OxVZPKmm58M
#brain #biology #neuroscience.

Support this channel on Patreon to help me make this a full time job:
https://www.patreon.com/whatdamath.

Bitcoin/Ethereum to spare? Donate them here to help this channel grow!

Thank you to Wondrium for sponsoring today’s video! Sign up for your FREE trial to Wondrium here: http://ow.ly/3bA050L1hTL

Researched and Written by Jon Farrow.
Narrated and Edited by David Kelly.
Animations by Jero Squartini https://www.fiverr.com/share/0v7Kjv.
Laniakea animation by Alperaym.
Incredible thumbnail art by Ettore Mazza, the GOAT: https://www.instagram.com/ettore.mazza/?hl=en.

Huge thanks to Daniel Pomarède for the use of his images of Laniakea and our local cosmological neighborhood: https://twitter.com/danielpomarede.

Thanks to Pablo Carlos Budassi for his wonderful images of the KBC Void, Shapley Supercluster and Bootes Void.