
In 15 years we’ll be able to upload education to our brains. So can I stop saving for my kids’ college?

I’m super excited to share this new Quartz article of mine, part of an ongoing personal debate about #transhumanism, #kids, and #education in my family:


But the age of downloading experience and expertise directly into our brain mainframe is coming. So is downloading professional training, including everything from becoming a police officer to practicing medicine or investigative journalism.

For many in the audience, I think it was the first time they had considered that this could become a reality in our lifetime.

But in plenty of instances, brainwave tech is already here. People fly drones using mind-reading headsets. Parkinson’s disease patients can use brain chips to calm shaking attacks. Machine interfaces let people silently communicate mind-to-mind with one another, or with devices.

Brainwave technology works by recording the brain’s thought patterns—configurations of neurons that fire in distinct ways for different thoughts—and replicating those patterns back into the brain via electrical stimulation from a nonbiological device.
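As a toy illustration only (not real neuroscience, and not any actual brain-computer interface API), the "recording and matching thought patterns" step can be sketched as comparing a recorded signal against stored templates. All names and data below are invented:

```python
# Toy sketch: classify a recorded "firing pattern" by finding the
# stored template it correlates with most strongly, analogous to how
# brainwave systems decide which thought a signal corresponds to.

def correlation(a, b):
    """Pearson correlation of two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def classify(signal, templates):
    """Return the label of the template that best matches the signal."""
    return max(templates, key=lambda label: correlation(signal, templates[label]))

# Hypothetical stored "thought patterns" (invented numbers).
templates = {
    "move_left":  [0.9, 0.1, 0.8, 0.2, 0.9, 0.1],
    "move_right": [0.1, 0.9, 0.2, 0.8, 0.1, 0.9],
}
reading = [0.85, 0.15, 0.75, 0.25, 0.8, 0.2]
print(classify(reading, templates))  # → move_left
```

Real systems use far richer signal processing, but the principle is the same: distinct thoughts produce distinct, repeatable patterns that can be matched.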

‘Near’ Infinite Data Compression Possible

A student from Canberra, Australia claims that ‘near’ infinite data compression is possible: terabytes of data can supposedly be shrunk to under 1440 KB. On that basis, the known, or ‘explored’, universe could technically be stored in an object smaller than a grapefruit.

  • (Smaller than a grapefruit seed, in fact!)
  • How close to ‘zero’ (infinite) can you get? Much smaller than 1440 KB, he’ll say that much.
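For context, how far lossless compression can go depends entirely on how much structure the input contains. A quick experiment with Python's standard-library zlib makes the dependence visible:

```python
# How much a lossless compressor shrinks data depends on redundancy:
# a megabyte of a single repeated byte collapses dramatically, while
# a megabyte of random noise barely changes size.
import os
import zlib

redundant = b"A" * 1_000_000          # 1 MB of one repeated byte
noise = os.urandom(1_000_000)         # 1 MB of incompressible noise

for name, data in [("redundant", redundant), ("random", noise)]:
    packed = zlib.compress(data, 9)   # level 9 = maximum compression
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```

The redundant megabyte shrinks to around a kilobyte; the random megabyte does not shrink at all.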

How quantum brain biology can rescue conscious free will

Conscious “free will” is problematic because brain mechanisms causing consciousness are unknown, measurable brain activity correlating with conscious perception apparently occurs too late for real-time conscious response, consciousness thus being considered “epiphenomenal illusion,” and determinism, i.e., our actions and the world around us seem algorithmic and inevitable. The Penrose–Hameroff theory of “orchestrated objective reduction (Orch OR)” identifies discrete conscious moments with quantum computations in microtubules inside brain neurons, e.g., 40/s in concert with gamma synchrony EEG. Microtubules organize neuronal interiors and regulate synapses. In Orch OR, microtubule quantum computations occur in integration phases in dendrites and cell bodies of integrate-and-fire brain neurons connected and synchronized by gap junctions, allowing entanglement of microtubules among many neurons. Quantum computations in entangled microtubules terminate by Penrose “objective reduction (OR),” a proposal for quantum state reduction and conscious moments linked to fundamental spacetime geometry. Each OR reduction selects microtubule states which can trigger axonal firings, and control behavior. The quantum computations are “orchestrated” by synaptic inputs and memory (thus “Orch OR”). If correct, Orch OR can account for conscious causal agency, resolving problem 1. Regarding problem 2, Orch OR can cause temporal non-locality, sending quantum information backward in classical time, enabling conscious control of behavior. Three lines of evidence for brain backward time effects are presented. Regarding problem 3, Penrose OR (and Orch OR) invokes non-computable influences from information embedded in spacetime geometry, potentially avoiding algorithmic determinism. In summary, Orch OR can account for real-time conscious causal agency, avoiding the need for consciousness to be seen as epiphenomenal illusion. Orch OR can rescue conscious free will.
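The objective-reduction timescale the abstract refers to can be written compactly. In Penrose's proposal, a quantum superposition with gravitational self-energy $E_G$ self-reduces after roughly:

```latex
% Penrose objective reduction (OR) timescale, as used in Orch OR:
\tau \approx \frac{\hbar}{E_G}
% The abstract's 40 conscious moments per second, in concert with
% gamma synchrony EEG, corresponds to intervals of
\tau \approx \frac{1}{40}\,\mathrm{s} = 25\,\mathrm{ms}
```

Larger superpositions (greater $E_G$) reduce faster; in Orch OR, microtubule entanglement across many neurons is what brings $\tau$ down into this tens-of-milliseconds range.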

Keywords: microtubules, free will, consciousness, Penrose-Hameroff Orch OR, volition, quantum computing, gap junctions, gamma synchrony.

We have the sense of conscious control of our voluntary behaviors, of free will, of our mental processes exerting causal actions in the physical world. But such control is difficult to scientifically explain for three reasons: (1) the brain mechanisms causing consciousness are unknown; (2) measurable brain activity correlating with conscious perception apparently occurs too late for real-time conscious response; and (3) determinism, i.e., our actions and the world around us seem algorithmic and inevitable.

Neuromorphic Hardware: Trying to Put Brain Into Chips

Hi all.


Until now, chip-makers have been piggybacking on the renowned Moore’s Law, delivering successive generations of chips with more compute capability and lower power consumption. These advances are now slowly coming to a halt. Researchers around the world are proposing alternative architectures to continue producing systems that are faster and more energy efficient. This article discusses those alternatives, and why one of them may have the edge in keeping the chip-design industry from being stymied.

Moore’s law, or, to put it differently, the savior of chip-makers worldwide, was coined by Dr. Gordon Moore, co-founder of Intel Corp., in 1965. The law states that the number of transistors on a chip doubles roughly every two years. But why the savior of chip-makers? The law was so powerful during the semiconductor boom that “people would auto-buy the next latest and greatest computer chip, with full confidence that it would be better than what they’ve got,” said former Intel engineer Robert P. Colwell. Back in the day, writing a program with poor performance was not an issue, because the programmer knew that Moore’s law would ultimately save them.
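The doubling rule is simple enough to check with a few lines of arithmetic. This sketch just applies the law as stated above; the 1971 starting point is the real transistor count of Intel's 4004:

```python
# Moore's law as stated above: transistor count doubles every 2 years.
def transistors(n0, years, doubling_period=2):
    """Projected transistor count after `years`, starting from n0."""
    return n0 * 2 ** (years / doubling_period)

# Intel 4004 (1971) had ~2,300 transistors; project 40 years forward.
print(f"{transistors(2300, 40):,.0f}")  # → 2,411,724,800
```

Forty years is twenty doublings, a factor of about a million, which is roughly what real chips delivered: ~2,300 transistors in 1971 versus billions by the early 2010s.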

The problem we are facing today is that the law is nearly dead! Or, to avoid offending Moore fans: not dead, merely slowing down, as Henry Samueli, chief technology officer for Broadcom, puts it.

Neuroimaging Of Brain Shows Who Spoke To A Person And What Was Said

Flashback to 2 years ago…


Scientists from Maastricht University have developed a method to look into the brain of a person and read out who has spoken to him or her and what was said. With the help of neuroimaging and data mining techniques the researchers mapped the brain activity associated with the recognition of speech sounds and voices.

In their Science article “‘Who’ is Saying ‘What’? Brain-Based Decoding of Human Voice and Speech,” the four authors demonstrate that speech sounds and voices can be identified by means of a unique ‘neural fingerprint’ in the listener’s brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging (fMRI). With the help of data mining methods, the researchers developed an algorithm to translate this brain activity into unique patterns that determine the identity of a speech sound or a voice. The various acoustic characteristics of vocal cord vibrations were found to determine these neural patterns of brain activity.
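The study's actual decoding algorithm is not reproduced here, but the general idea of mapping fMRI activity patterns to the identity of a speech sound can be sketched with a simple nearest-centroid classifier. The voxel vectors below are invented for illustration:

```python
# Illustrative sketch: average the voxel activity vectors for each
# vowel into a centroid ("neural fingerprint"), then label a new
# pattern by its nearest centroid.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def distance(a, b):
    """Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labelled_trials):
    """labelled_trials: {label: [voxel_vector, ...]} -> centroid per label."""
    return {label: centroid(trials) for label, trials in labelled_trials.items()}

def predict(model, voxel_vector):
    """Label of the centroid closest to the new activity pattern."""
    return min(model, key=lambda label: distance(model[label], voxel_vector))

# Invented toy data: two fMRI trials per vowel, three voxels each.
trials = {
    "/a/": [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2]],
    "/i/": [[0.1, 0.9, 0.3], [0.2, 0.8, 0.2]],
    "/u/": [[0.2, 0.1, 0.9], [0.3, 0.2, 0.8]],
}
model = train(trials)
print(predict(model, [0.85, 0.25, 0.15]))  # → /a/
```

Real fMRI decoding works on thousands of voxels with cross-validated machine-learning models, but the core step is the same: each stimulus leaves a repeatable spatial pattern that can be matched against learned templates.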

Diamond-on-silicon chips run at 100 GHz and can also make power chips for handling 10,000 volts

Circa 2016


Diamond computer chips running at 100 GHz have been demonstrated by Akhan Semiconductor. They are currently using design rules in the hundreds of nanometers.

Developers are focusing on power applications on 12-inch wafers, hoping to drive down production costs with higher volumes. Power devices are moving into pilot production at a fab. They are using the fab-lite model: producing small- to medium-sized runs themselves, then transferring their process to foundries when they ramp up into volume production.

They have some customers for diamond MEMS devices—specifically for capacitive switching arrays used to dynamically tune antennas in high-end smartphones.
