
In the last week, I’ve been experimenting with the hot new version of ChatGPT to discover how it might conserve a leader’s scarcest resource: time. When OpenAI launched the AI chatbot at the end of November, it instantly attracted millions of users, with breathless predictions of its potential to disrupt business models and jobs.

It certainly promises to deliver on a prediction I made in 2019 in my book The Human Edge, which explores the skills needed in a world of artificial intelligence and digitization. I forecasted: “…AI can offer us more free time by automating the stupid stuff we currently have to do, thereby reducing our cognitive burden.”


This new chatbot can help time-poor managers by writing emails and talking points, but also by handling complex tasks such as HR performance reviews.

Artificial intelligence is not the future. It is here today, and depending on who you ask, it has been for a long time. As we enter 2023, it is not enough to say that 2023 is the “year of AI”: the past few years have all been the “year of AI”. I believe 2023 is the year of AI Education.

What is AI Education? I have previously written articles about AI-Literacy, and the need for everyone in the world to understand AI at some level. AI Education is the process of becoming AI Literate.


Why is 2023 the year of AI Education? This post shows why it should be and why it can be.

Fake scientific abstracts and research papers generated using OpenAI’s highly advanced chatbot ChatGPT fooled scientists into thinking they were real reports nearly one-third of the time, according to a new study, as the eerily human-like program raises questions about the future of artificial intelligence.

Researchers at Northwestern University and the University of Chicago instructed ChatGPT to generate fake research abstracts based on 10 real ones published in medical journals, and fed the fakes through two detection programs that attempted to distinguish them from real reports.


ChatGPT created completely original scientific abstracts based on fake numbers, and stumped reviewers nearly one-third of the time.
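The headline statistic boils down to a simple rate: of the generated abstracts put in front of reviewers or detectors, what fraction were taken for real? A minimal sketch of that calculation, with made-up verdicts that are not the study’s data:

```python
# Hypothetical sketch of the study's evaluation: count how often generated
# abstracts slip past review. The verdicts below are invented for
# illustration only -- they are not the study's actual data.
fake_taken_as_real = [True, False, True, False, False,
                      False, True, False, False, False]

fooled_rate = sum(fake_taken_as_real) / len(fake_taken_as_real)
print(f"{fooled_rate:.0%} of fakes passed as real")  # 30% here, echoing
                                                     # the ~one-third figure
```
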

OpenAI this week signaled it’ll soon begin charging for ChatGPT, its viral AI-powered chatbot that can write essays, emails, poems and even computer code. In an announcement on the company’s official Discord server, OpenAI said that it’s “starting to think about how to monetize ChatGPT” as one of the ways to “ensure [the tool’s] long-term viability.”

The monetized version of ChatGPT will be called ChatGPT Professional, apparently. That’s according to a waitlist link OpenAI posted in the Discord server, which asks a range of questions about payment preferences including “At what price (per month) would you consider ChatGPT to be so expensive that you would not consider buying it?”

The waitlist also outlines ChatGPT Professional’s benefits, which include no “blackout” (i.e. unavailability) windows, no throttling and an unlimited number of messages with ChatGPT (“at least 2x the regular daily limit”). OpenAI says that those who fill out the waitlist form may be selected to pilot ChatGPT Professional, but that the program is in the experimental stages and won’t be made widely available “at this time.”

00:00 Trailer.
05:54 Tertiary brain layer.
19:49 Curing paralysis.
23:09 How Neuralink works.
33:34 Showing probes.
44:15 Neuralink will be wayyy better than prior devices.
1:01:20 Communication is lossy.
1:14:27 Hearing Bluetooth, WiFi, Starlink.
1:22:50 Animal testing & brain proxies.
1:29:57 Controlling muscle units w/ Neuralink.

I had the privilege of speaking with James Douma, a self-described deep learning dork. James’ experience and technical understanding are not easily found. I think you’ll find his words intriguing and insightful. This is one of several conversations James and I plan to have.

We discuss:
1. Elon’s motivations for starting Neuralink.
2. How Neuralinks will be implanted.
3. Things Neuralink will be able to do.
4. Important takeaways from the latest Show and Tell event.

In future episodes, we’ll dive more into:
- Neuralink’s architectural decisions and plans to scale.
- The spike detection, decoding algorithms, and differences among brain regions.
- Robotics, hardware, and manufacturing.
- The neural shunt concept and future projects.

Hope you enjoy it as much as I did.

Neura Pod is a series covering topics related to Neuralink, Inc., such as brain-machine interfaces, brain injuries, and artificial intelligence. Host Ryan Tanaka synthesizes information and opinions, and conducts interviews to make it easy to learn about Neuralink and its future.

Samsung Electronics has had an eye on the consumer-grade robotics niche for a while now, and during CES 2023, the company said it views robots as “a new growth engine.” But beyond releasing smart vacuum cleaners, Samsung’s more ambitious AI-powered prototype robots haven’t truly materialized. The tech giant plans to change this before the end of the year.

“We plan to release a human assistant robot called EX1 within this year,” vice chairman and CEO of Samsung Electronics, Han Jong-hee, said at a press conference in Las Vegas. (via Pulse)

The company already has a device under its belt called “EX1”: a decade-old digital camera. Evidently, the new EX1 coming this year will be a completely different kind of product, i.e., a “human assistant robot,” though its capabilities remain unknown. However, past concept robots presented by Samsung at CES may hold some clues.

Portable, low-field-strength MRI systems have the potential to transform neuroimaging, provided that their low spatial resolution and low signal-to-noise ratio (SNR) can be overcome. Researchers at Harvard Medical School are harnessing artificial intelligence (AI) to achieve this goal. They have developed a machine learning super-resolution algorithm that generates synthetic images with high spatial resolution from lower-resolution brain MRI scans.

The convolutional neural network (CNN) algorithm, known as LF-SynthSR, converts low-field-strength (0.064 T) T1- and T2-weighted brain MRI sequences into isotropic images with 1 mm spatial resolution and the appearance of a T1-weighted magnetization-prepared rapid gradient-echo (MP-RAGE) acquisition. Describing their proof-of-concept study in Radiology, the researchers report that the synthetic images exhibited high correlation with images acquired by 1.5 T and 3.0 T MRI scanners.
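As a rough intuition for what such a network does (a toy illustration, not LF-SynthSR itself): a low-field scan with thick slices is first brought onto a 1 mm isotropic grid, and a trained model then refines the crude upsampling. A minimal sketch, assuming hypothetical 1 × 1 × 5 mm voxels:

```python
import numpy as np

# Toy low-field volume: 64 x 64 in-plane, 13 thick slices, with a
# hypothetical 1 x 1 x 5 mm voxel size (not the paper's geometry).
low_res = np.random.default_rng(0).random((64, 64, 13))

# Step 1: resample to a 1 mm isotropic grid. Nearest-neighbour repetition
# stands in here for proper trilinear interpolation.
isotropic = np.repeat(low_res, 5, axis=2)  # now 64 x 64 x 65

# Step 2: a trained CNN would refine this crude upsampling; a simple
# 3-voxel mean filter along z is only a placeholder for that refinement.
refined = (np.roll(isotropic, -1, axis=2)
           + isotropic
           + np.roll(isotropic, 1, axis=2)) / 3

print(isotropic.shape)  # (64, 64, 65)
```
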

Morphometry, the quantitative size and shape analysis of structures in an image, is central to many neuroimaging studies. Unfortunately, most MRI analysis tools are designed for near-isotropic, high-resolution acquisitions and typically require T1-weighted images such as MP-RAGE. Their performance often drops rapidly as voxel size and anisotropy increase. As the vast majority of existing clinical MRI scans are highly anisotropic, they cannot be reliably analysed with existing tools.
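Why anisotropy hurts morphometry can be seen with a toy volume estimate: count the voxels inside a segmentation mask and multiply by the voxel volume. The geometry below is invented for illustration and is not from any study.

```python
import numpy as np

def sphere_volume_estimate(spacing, extent=30.0, radius=10.0):
    """Estimate a sphere's volume (mm^3) from a voxel mask on a given grid."""
    # Voxel-centre coordinates along each axis, for voxel sizes `spacing`.
    axes = [np.arange(-extent, extent, s) + s / 2 for s in spacing]
    x, y, z = np.meshgrid(*axes, indexing="ij")
    mask = x**2 + y**2 + z**2 <= radius**2          # binary segmentation
    voxel_volume = spacing[0] * spacing[1] * spacing[2]
    return mask.sum() * voxel_volume

true_vol = 4 / 3 * np.pi * 10**3                    # ~4188.8 mm^3
aniso = sphere_volume_estimate((1.0, 1.0, 5.0))     # clinical-style 5 mm slices
iso = sphere_volume_estimate((1.0, 1.0, 1.0))       # research-style 1 mm voxels
print(f"true {true_vol:.0f}, isotropic {iso:.0f}, anisotropic {aniso:.0f}")
```

The isotropic grid recovers the true volume closely, while the thick-slice grid can be off by several percent, which is exactly the kind of bias that degrades morphometry on anisotropic clinical scans.
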

Just published by my son.

Automatic hippocampus imaging, with about 20 minutes of cloud computing per scan.


Like neocortical structures, the archicortical hippocampus differs in its folding patterns across individuals. Here, we present an automated and robust BIDS-App, HippUnfold, for defining and indexing individual-specific hippocampal folding in MRI, analogous to popular tools used in neocortical reconstruction. Such tailoring is critical for inter-individual alignment, with topology serving as the basis for homology. This topological framework enables qualitatively new analyses of morphological and laminar structure in the hippocampus or its subfields. It is critical for refining current neuroimaging analyses at a meso- as well as micro-scale. HippUnfold uses state-of-the-art deep learning combined with previously developed topological constraints to generate uniquely folded surfaces to fit a given subject’s hippocampal conformation. It is designed to work with commonly employed sub-millimetric MRI acquisitions, with possible extension to microscopic resolution. In this paper, we describe the power of HippUnfold in feature extraction, and highlight its unique value compared to several extant hippocampal subfield analysis methods.

Keywords: Brain Imaging Data Standards; computational anatomy; deep learning; hippocampal subfields; hippocampus; human; image segmentation; magnetic resonance imaging; neuroscience.

© 2022, DeKraker et al.