
This week our guest is Meghan O’Gieblyn, who has written regularly for publications such as Wired, The New York Times, and The Guardian, in addition to authoring books such as Interior States and her latest: God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning.

Interestingly, much of Meghan’s work draws on her experience of losing her religious faith while simultaneously being drawn into transhumanism after reading The Age of Spiritual Machines by Singularity’s very own Ray Kurzweil. This exploration of Meghan’s background and her latest book takes us on a journey through the ways in which technology and spirituality have historically woven together, the ways in which they currently conflict, and the philosophical questions we’re going to be forced to reconcile in the future. For those of you interested in this subject, I highly recommend listening to episode 52 with Micah Redding, which lays a lot of the foundation that we build on here in this episode.

Find out more about Meghan through her website meghanogieblyn.com, or find her book on Amazon.

Host: Steven Parton — LinkedIn / Twitter.

00:00 Trailer.
05:54 Tertiary brain layer.
19:49 Curing paralysis.
23:09 How Neuralink works.
33:34 Showing probes.
44:15 Neuralink will be wayyy better than prior devices.
1:01:20 Communication is lossy.
1:14:27 Hearing Bluetooth, WiFi, Starlink.
1:22:50 Animal testing & brain proxies.
1:29:57 Controlling muscle units w/ Neuralink.

I had the privilege of speaking with James Douma, a self-described deep learning dork. James’ experience and technical understanding are not easily found. I think you’ll find his words to be intriguing and insightful. This is one of several conversations James and I plan to have.

We discuss:
1. Elon’s motivations for starting Neuralink.
2. How Neuralinks will be implanted.
3. Things Neuralink will be able to do.
4. Important takeaways from the latest Show and Tell event.

In future episodes, we’ll dive more into:

Samsung Electronics has had an eye on the consumer-grade robotics niche for a while now, and during CES 2023, the company said it views robots as “a new growth engine.” But beyond releasing smart vacuum cleaners, Samsung’s more ambitious AI-powered prototype robots haven’t truly materialized. The tech giant plans to change this before the end of the year.

“We plan to release a human assistant robot called EX1 within this year,” said Han Jong-hee, vice chairman and CEO of Samsung Electronics, at a press conference in Las Vegas. (via Pulse)

The company already has a device under its belt called “EX1,” which is a decade-old digital camera. Evidently, the new EX1 coming this year will be a completely different kind of product, namely a “human assistant robot,” though its capabilities remain unknown. However, past concept robots presented by Samsung at CES may hold some clues.

Portable, low-field strength MRI systems have the potential to transform neuroimaging – provided that their low spatial resolution and low signal-to-noise ratio (SNR) can be overcome. Researchers at Harvard Medical School are harnessing artificial intelligence (AI) to achieve this goal. They have developed a machine learning super-resolution algorithm that generates synthetic images with high spatial resolution from lower resolution brain MRI scans.

The convolutional neural network (CNN) algorithm, known as LF-SynthSR, converts low-field strength (0.064 T) T1- and T2-weighted brain MRI sequences into isotropic images with 1 mm spatial resolution and the appearance of a T1-weighted magnetization-prepared rapid gradient-echo (MP-RAGE) acquisition. Describing their proof-of-concept study in Radiology, the researchers report that the synthetic images exhibited high correlation with images acquired by 1.5 T and 3.0 T MRI scanners.
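To make the idea concrete, here is a minimal, illustrative PyTorch sketch of a 3D convolutional network that maps a two-channel (T1- and T2-weighted) low-field volume to a single synthetic, MP-RAGE-like volume. This is not the published LF-SynthSR architecture; the layer count, feature widths, and two-channel input are assumptions for illustration only.

```python
# Illustrative sketch only: a small 3D CNN for MRI-to-MRI synthesis.
# Not the published LF-SynthSR network; sizes are placeholder assumptions.
import torch
import torch.nn as nn

class SynthSRLikeNet(nn.Module):
    def __init__(self, in_channels=2, features=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, 1, kernel_size=3, padding=1),  # synthetic T1w-like output
        )

    def forward(self, x):
        return self.body(x)

# Example: a two-channel (T1w + T2w) low-field volume, already resampled
# onto a 1 mm grid of 96^3 voxels, mapped to one synthetic volume.
model = SynthSRLikeNet()
low_field = torch.randn(1, 2, 96, 96, 96)   # (batch, channels, D, H, W)
synthetic = model(low_field)
print(synthetic.shape)                       # torch.Size([1, 1, 96, 96, 96])
```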

Morphometry, the quantitative size and shape analysis of structures in an image, is central to many neuroimaging studies. Unfortunately, most MRI analysis tools are designed for near-isotropic, high-resolution acquisitions and typically require T1-weighted images such as MP-RAGE. Their performance often drops rapidly as voxel size and anisotropy increase. As the vast majority of existing clinical MRI scans are highly anisotropic, they cannot be reliably analysed with existing tools.
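For context, “near-isotropic” simply means the voxel dimensions are roughly equal along all three axes. A quick way to check this for a given scan is sketched below using nibabel; the file path is a placeholder.

```python
# Check how anisotropic a scan's voxels are before running morphometry tools.
import nibabel as nib

img = nib.load("clinical_scan.nii.gz")      # hypothetical input file
dx, dy, dz = img.header.get_zooms()[:3]     # voxel dimensions in mm
anisotropy = max(dx, dy, dz) / min(dx, dy, dz)
print(f"voxel size: {dx:.2f} x {dy:.2f} x {dz:.2f} mm, anisotropy ratio {anisotropy:.1f}")
# A typical clinical axial acquisition (e.g. 0.5 x 0.5 x 5 mm) has a ratio
# near 10, far from the near-isotropic inputs most analysis tools assume.
```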

Just published by my son.

Automatic hippocampus imaging, with about 20 minutes of cloud computing per scan.


Like neocortical structures, the archicortical hippocampus differs in its folding patterns across individuals. Here, we present an automated and robust BIDS-App, HippUnfold, for defining and indexing individual-specific hippocampal folding in MRI, analogous to popular tools used in neocortical reconstruction. Such tailoring is critical for inter-individual alignment, with topology serving as the basis for homology. This topological framework enables qualitatively new analyses of morphological and laminar structure in the hippocampus or its subfields. It is critical for refining current neuroimaging analyses at a meso- as well as micro-scale. HippUnfold uses state-of-the-art deep learning combined with previously developed topological constraints to generate uniquely folded surfaces to fit a given subject’s hippocampal conformation. It is designed to work with commonly employed sub-millimetric MRI acquisitions, with possible extension to microscopic resolution. In this paper, we describe the power of HippUnfold in feature extraction, and highlight its unique value compared to several extant hippocampal subfield analysis methods.

Keywords: Brain Imaging Data Standards; computational anatomy; deep learning; hippocampal subfields; hippocampus; human; image segmentation; magnetic resonance imaging; neuroscience.
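Since HippUnfold is distributed as a BIDS-App, running it amounts to pointing the tool at a BIDS-formatted dataset. The sketch below shows one way to drive such a run from Python; the paths are placeholders and the exact flags (e.g. --modality) are assumptions based on common BIDS-App conventions, so check the HippUnfold documentation for the real interface.

```python
# Minimal sketch of invoking a BIDS-App such as HippUnfold from Python.
# Paths are placeholders; flags are assumed, not confirmed from the paper.
import subprocess
from pathlib import Path

bids_dir = Path("/data/bids")                    # hypothetical BIDS dataset root
out_dir = Path("/data/derivatives/hippunfold")   # hypothetical output location

cmd = [
    "hippunfold",            # assumes the package is installed and on PATH
    str(bids_dir),
    str(out_dir),
    "participant",           # standard BIDS-App analysis level
    "--modality", "T1w",     # assumed flag: which MRI contrast to process
]
subprocess.run(cmd, check=True)
```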