
TWITTER https://twitter.com/Transhumanian.
PATREON https://www.patreon.com/transhumania.
BITCOIN 14ZMLNppEdZCN4bu8FB1BwDaxbWteQKs8i.
BITCOIN CASH 1LhXJjN4FrfJh8LywR3dLG2uGXSaZjey9f.
ETHEREUM 0x1f89b261562C8D4C14aA01590EB42b2378572164
LITECOIN LdB94n8sTUXBto5ZKt82YhEsEmxomFGz3j.
CHAINLINK 0xDF560E12fF416eC2D4BAECC66E323C56af2f6666.

KEYWORDS:
Science, Technology, Philosophy, Futurism, Simulation, Simulationism, Ockham’s Razor, Argument, Hypothesis, Anthropic Principle, Holographic Principle, Holographic Universe theory, Brain in a Vat, Brain in a Jar, Matrix, Inception, Hologram, Artificial Intelligence, Vocaloid Hologram, Reality, Ontology, Epistemology, Elon Musk, Solipsism, Illusion, René Descartes, George Berkeley, Materialism, Idealism, Cogito Ergo Sum, Esse est Percipi, Gilbert Harman, Hilary Putnam, Robert Nozick, The Experience Machine, Multiverse, Omniverse, Black Hole Hologram, Event Horizon, Singularity, Moore’s Law, Doubly-Even Self-Dual Linear Error-Correcting Code, Sylvester James Gates, Neil deGrasse Tyson, Plato’s Cave, Quantum Mechanics, Schrödinger’s Cat, Observer Effect, Double-Slit Experiment, Heisenberg Uncertainty Principle, Quantum Superdeterminism, Free Will, Albert Einstein, Stephen Hawking, Quantum Gravity Research, Eugene Wigner, Consciousness, Fermi Paradox, SETI (search for extraterrestrial intelligence), Drake Equation, Alpha Centauri, Wave Function Collapse, Video Games, VR, Virtual Reality, God, Theology, Fine-Tuning Argument, Teleological Argument, Mormonism, LDS, Heavenly Father, Glitch, Dyson Sphere.

Deep dive into the nature of consciousness and reality.

Sponsors: https://brilliant.org/TOE for 20% off. For Algo’s podcast https://www.youtube.com/channel/UC9IfRw1QaTglRoX0sN11AQQ and website https://www.algo.com/.

Patreon: https://patreon.com/curtjaimungal.
Crypto: https://tinyurl.com/cryptoTOE
PayPal: https://tinyurl.com/paypalTOE
Twitter: https://twitter.com/TOEwithCurt.
Discord Invite: https://discord.com/invite/kBcnfNVwqs.
iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-wit…1521758802
Pandora: https://pdora.co/33b9lfP
Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e.
Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything.
Merch: https://tinyurl.com/TOEmerch.

LINKS MENTIONED:
–QBism Paper: https://arxiv.org/abs/1003.5209
–Donald Hoffman’s book The Case Against Reality (affiliate): https://amzn.to/34eWmxz.
–Plato and the Nerd (affiliate): https://amzn.to/34GMexr.

TIMESTAMPS:
00:00:00 Introduction.
00:03:34 Is a Theory of Everything possible? / Definition of Consciousness.
00:08:32 Spacetime’s fundamental nature (or not).
00:14:27 Joscha Bach on mysterianism, telepathy, and consciousness.
00:34:40 Joscha has a way of interpreting the Bible literally.
00:42:01 Physical world vs Computational world.
00:57:57 On Gödel and changing the definition of truth to provable / computable.
01:12:33 What parts of the mind make statements beyond computation?
01:13:57 Real numbers don’t exist?
01:15:23 [Prof. Edward Lee] Reality is not necessarily algorithmic.
01:34:02 Donald Hoffman on Free Will.
01:44:03 Joscha Bach on Free Will and whether a TOE exists.
01:57:10 What would change in Bach’s model if classical logic were correct?
02:07:42 Penrose and Lucas argument regarding Gödel and the mind.
02:13:55 Closing thoughts from Bach and Hoffman on each other’s work.

Just wrapped (April 2021) a documentary called Better Left Unsaid http://betterleftunsaidfilm.com on the topic of “when does the left go too far?” Visit that site if you’d like to watch it.

Over the past decade, digital cameras have been widely adopted in various aspects of our society, and are being massively used in mobile phones, security surveillance, autonomous vehicles, and facial recognition. Through these cameras, enormous amounts of image data are being generated, which raises growing concerns about privacy protection.

Some existing methods address these concerns by applying algorithms to conceal sensitive information in the acquired images, such as image blurring or encryption. However, such methods still risk exposure of sensitive data because the raw images are already captured before they undergo the digital processing that hides or encrypts the sensitive information. The computation of these algorithms also requires additional power. Other efforts sought to solve this problem by using customized cameras that downgrade image quality so that identifiable information is concealed. However, these approaches sacrifice the overall image quality for all the objects of interest, which is undesired, and they remain vulnerable to adversarial attacks that retrieve the sensitive information that is recorded.
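To make the critique concrete, here is a minimal sketch of post-capture blurring; the 8×8 frame, the region coordinates, and the box-blur routine are all invented for illustration, not taken from the paper. The point it demonstrates is that the raw frame exists in memory before the blur ever runs, which is exactly the exposure risk described above.

```python
# Hypothetical 8x8 frame: the raw pixels exist BEFORE any redaction runs,
# which is the exposure risk of post-capture privacy methods.
raw = [[(x * x + y * y) % 256 for x in range(8)] for y in range(8)]

def box_blur_region(img, x0, y0, x1, y1, k=1):
    """Replace each pixel in [x0, x1) x [y0, y1) with the integer mean of
    its (2k+1) x (2k+1) neighborhood, clipped at the image border."""
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            nbrs = [img[j][i]
                    for j in range(max(0, y - k), min(len(img), y + k + 1))
                    for i in range(max(0, x - k), min(len(img[0]), x + k + 1))]
            out[y][x] = sum(nbrs) // len(nbrs)
    return out

# Blur only the "sensitive" region; pixels outside it are left untouched.
redacted = box_blur_region(raw, 2, 2, 6, 6)
```

The hardware approach the paper proposes differs precisely in that the erased objects never reach `raw` in the first place.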

A new research paper published in eLight demonstrated a new paradigm to achieve privacy-preserving imaging by building a fundamentally new type of imager designed by AI. In their paper, UCLA researchers, led by Professor Aydogan Ozcan, presented a smart design that images only certain types of desired objects, while instantaneously erasing other types of objects from its images without requiring any digital processing.

This week our guest is NBC technology correspondent, Jacob Ward, who recently released his book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back. In this episode we focus broadly on the ways in which technology and AI are learning from the worst instincts of human beings, and then using those bad behaviors to shape our future choices. As a result, Jacob suggests this creates feedback loops of increasingly limited and increasingly short-sighted behavior. This conversation includes exploring topics such as big data, bad incentives for programmers, profit motives, historical bias reflected in data, system 1 vs system 2 thinking, and much more.

Find out more about Jacob at jacobward.com or follow him on Twitter at twitter.com/byjacobward. Host: Steven Parton (LinkedIn / Twitter). Music by: Amine el Filali.


A group of scientists from the University College London has developed an artificial intelligence (AI) algorithm that can detect drug-resistant focal cortical dysplasia (FCD), a subtle anomaly in the brain that leads to epileptic seizures. This is a promising step for scientists toward detecting and curing epilepsy in its early stages.

To develop the algorithm, the Multicentre Epilepsy Lesion Detection project (MELD) gathered more than 1,000 patients’ MRI scans from 22 international epilepsy centers, together with reports of where the anomalies lie, in cases of drug-resistant FCD, a major cause of epilepsy.

Around the same time, neuroscientists developed the first computational models of the primate visual system, using neural networks like AlexNet and its successors. The union looked promising: When monkeys and artificial neural nets were shown the same images, for example, the activity of the real neurons and the artificial neurons showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

But as the field progressed, researchers realized the limitations of supervised training. For instance, in 2017, Leon Gatys, a computer scientist then at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T, then overlaid a leopard skin pattern across the photo, generating a bizarre but easily recognizable image. A leading artificial neural network correctly classified the original image as a Model T, but considered the modified image a leopard. It had fixated on the texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans don’t label the data. Rather, “the labels come from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill in the blanks. In a so-called large language model, for instance, the training algorithm will show the neural network the first few words of a sentence and ask it to predict the next word. When trained with a massive corpus of text gleaned from the internet, the model appears to learn the syntactic structure of the language, demonstrating impressive linguistic ability — all without external labels or supervision.
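The self-supervised recipe (derive the label from the data itself, then predict the held-out piece) can be sketched with a toy next-word predictor; the corpus and the bigram "model" here are invented stand-ins, vastly simpler than a real large language model, but the training signal works the same way:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus. In self-supervised learning the "labels" (here,
# the next words) come from the data itself, not from human annotators.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: a bigram model, the simplest possible
# "show the first words, predict the next" learner.
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(prev):
    """Return the continuation seen most often after `prev`."""
    return next_word[prev].most_common(1)[0][0]

print(predict("sat"))  # the corpus makes "on" the most frequent continuation
```

A large language model replaces the count table with a neural network and the toy corpus with internet-scale text, but no external label ever enters the loop.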

Vanderbilt researchers have developed an active machine learning approach to predict the effects of tumor variants of unknown significance, or VUS, on sensitivity to chemotherapy. VUS, mutated bits of DNA with unknown impacts on cancer risk, are constantly being identified. The growing number of rare VUS makes it imperative for scientists to analyze them and determine the kind of cancer risk they impart.

Traditional prediction methods display limited power and accuracy for rare VUS. Even machine learning, an artificial intelligence tool that leverages data to “learn” and boost performance, falls short when classifying some VUS. Recent work by the lab of Walter Chazin, Chancellor’s Chair in Medicine and professor of biochemistry and chemistry, led by co-first authors and postdoctoral fellows Alexandra Blee and Bian Li, featured an active machine learning technique.

Active machine learning relies on training an algorithm with existing data, as with machine learning, and feeding it new information between rounds of training. Chazin and his lab identified VUS for which predictions were least certain, performed biochemical experiments on those VUS and incorporated the resulting data into subsequent rounds of algorithm training. This allowed the model to continuously improve its VUS classification.
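The loop described above (train, find the least certain cases, run experiments on those, retrain) can be sketched abstractly. Everything in this snippet, from the one-dimensional "variant score" to the threshold model and the oracle standing in for a biochemical experiment, is an invented toy, not the Chazin lab's actual method; it only illustrates uncertainty sampling:

```python
import random

random.seed(0)

# Toy setup: each "variant" is a 1-D score, and a hidden rule (standing in
# for a biochemical experiment) calls a variant sensitizing above 0.6.
def run_experiment(x):
    return x > 0.6

pool = [random.random() for _ in range(200)]   # unlabeled variants (VUS)
labeled = [(0.0, False), (1.0, True)]          # initial training data

def fit_threshold(data):
    """Toy model: decision boundary midway between the highest negative
    and the lowest positive example seen so far."""
    neg = max(x for x, y in data if not y)
    pos = min(x for x, y in data if y)
    return (neg + pos) / 2

for _ in range(10):                            # rounds of training + experiments
    boundary = fit_threshold(labeled)
    # Uncertainty sampling: query the variant closest to the boundary,
    # i.e. the one whose prediction is least certain.
    query = min(pool, key=lambda x: abs(x - boundary))
    pool.remove(query)
    labeled.append((query, run_experiment(query)))

# After a few rounds the learned boundary closes in on the true 0.6.
```

The payoff is the same as in the paper: experiments are spent on the most informative cases, so the model improves far faster than if new labels were chosen at random.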

In Einstein’s theory of general relativity, gravity arises when a massive object distorts the fabric of spacetime the way a ball sinks into a piece of stretched cloth. Solving Einstein’s equations by using quantities that apply across all space and time coordinates could enable physicists to eventually find their “white whale”: a quantum theory of gravity.

In a new article in The European Physical Journal H, Donald Salisbury from Austin College in Sherman, USA, explains how Peter Bergmann and Arthur Komar first proposed a way to get one step closer to this goal by using Hamilton-Jacobi techniques. These arose in the study of particle motion as a way to obtain the complete set of solutions from a single function of particle position and constants of the motion.
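For reference, the classical Hamilton-Jacobi equation determines that single generating function, conventionally written $S(q, t)$, from the Hamiltonian $H$:

```latex
H\!\left(q, \frac{\partial S}{\partial q}, t\right) + \frac{\partial S}{\partial t} = 0
```

A complete solution $S(q, t; \alpha)$, depending on as many constants of the motion $\alpha$ as there are coordinates, yields every trajectory at once through $\partial S / \partial \alpha = \text{const}$, which is the sense in which one function encodes the full set of solutions.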

Three of the four fundamental forces (strong, weak, and electromagnetic) hold in both the ordinary world of our everyday experience, modeled by classical physics, and the spooky world of quantum physics. Problems arise, though, when trying to apply the fourth force, gravity, to the quantum world. In the 1960s and 1970s, Peter Bergmann of Syracuse University, New York, and his associates recognized that in order to someday reconcile Einstein’s theory of general relativity with the quantum world, they needed to find quantities for determining events in space and time that applied across all frames of reference. They succeeded in doing this by using Hamilton-Jacobi techniques.