
Go to: https://brilliant.org/arvinash — you can sign up for free! And the first 200 people will get 20% off their annual membership. Enjoy!

Two recently published, peer-reviewed scientific papers show that warp drive designs grounded in real physics may be possible. Unlike earlier proposals, these designs are realistic and physical. In a paper published in 1994, Mexican physicist Miguel Alcubierre showed theoretically that a faster-than-light (FTL) warp drive could work within the laws of physics, but it would require huge amounts of negative mass or energy, and such a thing is not known to exist.
0:00 Problem with c
2:21 General relativity
3:15 Alcubierre warp
4:40 Bobrick & Martire solution
7:15 Types of warp drives
8:42 Spherical warp drive
11:32 FTL using positive energy
13:14 Next steps
14:08 Further education with Brilliant

In a recent paper published by Applied Physics, authors Alexey Bobrick and Gianni Martire outline how a physically feasible warp drive could, in principle, work without the need for negative energy. I spoke to them, and they provided technical input on this video.

In his paper, Alcubierre worked out the shape he believed spacetime needed to have for a ship to travel faster than light. He then solved Einstein’s field equations of general relativity to determine the matter and energy required to generate that curvature. The solution works only with negative energy. It is mathematically consistent, but physically meaningless, because negative mass is not known to exist. Negative mass is not the same as antimatter: antimatter has positive energy and mass.
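
For reference, the geometry Alcubierre wrote down is usually quoted as the line element below (the standard textbook form, not something taken from the video): \(v_{s}\) is the speed of the bubble’s centre along the \(x\)-axis and \(f(r_{s})\) is a smooth shape function that equals 1 inside the bubble and falls to 0 far away from it.

\[
ds^{2} = -c^{2}\,dt^{2} + \bigl(dx - v_{s}\,f(r_{s})\,dt\bigr)^{2} + dy^{2} + dz^{2}
\]

Plugging this metric into Einstein’s field equations yields a negative energy density concentrated around the bubble walls, which is exactly the requirement the newer papers aim to remove.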

To design their improved materials, Serles and Filleter worked with Professor Seunghwa Ryu and PhD student Jinwook Yeo at the Korea Advanced Institute of Science & Technology (KAIST) in Daejeon, South Korea. This partnership was initiated through U of T’s International Doctoral Clusters program, which supports doctoral training through research engagement with international collaborators.

The KAIST team employed a multi-objective Bayesian optimization machine-learning algorithm, which learned from simulated geometries to predict the best candidate geometries for distributing stress evenly and improving the strength-to-weight ratio of nano-architected designs.
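
The team’s actual optimization code is not reproduced here; as a rough, single-objective sketch of the kind of loop involved (strength and weight collapsed into one toy score, scikit-learn’s Gaussian process standing in for the surrogate model, and simulate_lattice a hypothetical placeholder for an expensive simulation), it might look like this in Python:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulate_lattice(params):
    """Hypothetical stand-in for an expensive simulation of a lattice geometry.

    Takes design parameters (e.g. strut diameter, node radius) and returns a
    scalarized score combining strength and weight (higher is better).
    """
    strength = -np.sum((params - 0.6) ** 2)   # toy surrogate for strength
    weight = np.sum(params)                    # toy surrogate for mass
    return strength - 0.1 * weight             # scalarized objective

rng = np.random.default_rng(0)
dim = 2                                        # two toy design parameters in [0, 1]
X = rng.uniform(0, 1, size=(5, dim))           # initial random designs
y = np.array([simulate_lattice(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                            # Bayesian-optimization loop
    gp.fit(X, y)                               # refit the surrogate to all data so far
    candidates = rng.uniform(0, 1, size=(512, dim))
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + 1.5 * sigma                     # upper-confidence-bound acquisition
    x_next = candidates[np.argmax(ucb)]        # most promising untested design
    X = np.vstack([X, x_next])
    y = np.append(y, simulate_lattice(x_next))

print("best toy design parameters:", X[np.argmax(y)])
```

A genuine multi-objective workflow would keep strength and weight as separate objectives and maintain a Pareto front of designs rather than a single scalarized score.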

Serles then used a two-photon polymerization 3D printer housed in the Centre for Research and Application in Fluidic Technologies (CRAFT) to create prototypes for experimental validation. This additive manufacturing technology enables 3D printing at the micro and nano scale, creating optimized carbon nanolattices.

For example, to compute the magnetic susceptibility, we simply select the operator \(A = \beta (S^{z})^{2}\), where \(\beta = 1/T\) is the inverse temperature. Interestingly, this method of estimating thermal expectation values is insensitive to uniform spectral broadening of each peak, due to a cancellation between the numerator and denominator (see the discussion resulting in equation (S69) in the Supplementary Information). However, it is highly sensitive to noise at low \(\omega\), which is exponentially amplified by \(e^{\beta\omega}\). To address this, we estimate the SNR for each \(D^{A}(\omega)\) independently and zero out all points with SNR below three times the average SNR. This potentially introduces some bias by eliminating peaks with low signal, but it ensures that the effects of shot noise are well controlled.
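
As a minimal sketch of the thresholding step described above (not the authors’ code; the factor-of-three cutoff follows the text, while the function and argument names are assumptions):

```python
import numpy as np

def filter_low_snr(spectrum, noise_std, factor=3.0):
    """Zero out spectral points whose SNR falls below `factor` times the average SNR.

    spectrum  : estimated spectral function D^A(omega) on a frequency grid
    noise_std : per-point noise standard deviation, e.g. from shot-noise estimates
    """
    snr = np.abs(spectrum) / noise_std      # per-point signal-to-noise ratio
    threshold = factor * np.mean(snr)       # cutoff relative to the average SNR
    return np.where(snr >= threshold, spectrum, 0.0)
```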

To quantify the effect of noise on the engineered time dynamics, we simulate a microscopic error model by applying a local depolarizing channel with an error probability \(p\) at each gate. This results in a decay of the obtained signals for the correlator \(D_{R}^{A}(t)\). The rate of the exponential decay grows roughly linearly with the weight of the measured operators (Extended Data Fig. 2). This scaling with operator weight can be captured by instead applying a single depolarizing channel at the end of the time evolution, with a per-site error probability of \(\gamma t\) for an effective noise rate \(\gamma\). This effective \(\gamma\) also scales roughly linearly with the single-qubit error rate per gate \(p\) (Extended Data Fig. 2).
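
Schematically, the effective model described above amounts to multiplying the ideal correlator by an exponential envelope whose rate is set by the operator weight \(w\) and the effective per-site noise rate \(\gamma\) (an illustrative restatement of the text, with \(\alpha\) a fit constant rather than a quantity quoted in the paper):

\[
D_{R}^{A}(t)\Big|_{\text{noisy}} \;\approx\; e^{-\gamma\, w\, t}\, D_{R}^{A}(t)\Big|_{\text{ideal}},
\qquad \gamma \approx \alpha\, p .
\]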

Quantum simulations are constrained by the number of samples and the simulation time required to reach a given target accuracy. These factors are crucial for determining the size of the Hamiltonians that can be accessed on particular quantum hardware.

Mathematics and physics have long been regarded as the ultimate languages of the universe, but what if their structure resembles something much closer to home: our spoken and written languages? A recent study suggests that the mathematical equations used to describe physical laws follow a surprising pattern—a pattern that aligns with Zipf’s law, a principle from linguistics.

This discovery could reshape our understanding of how we conceptualize the universe and even how our brains work. Let’s explore the intriguing connection between the language of mathematics and the physical world.

What Is Zipf’s Law?
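
In its simplest form, Zipf’s law says that if you rank the words of a large text by how often they occur, the frequency of the word at rank \(r\) falls off roughly as \(1/r\): the second most common word appears about half as often as the first, the third about a third as often, and so on. A minimal Python sketch of this rank-frequency check (purely illustrative, not taken from the study):

```python
from collections import Counter

def rank_frequency(tokens):
    """Return (rank, count) pairs sorted from the most to the least frequent token."""
    ordered = Counter(tokens).most_common()
    return [(rank, count) for rank, (_, count) in enumerate(ordered, start=1)]

# Toy usage: on a real corpus, a log-log plot of rank against count comes out
# roughly as a straight line of slope -1 when the data follow Zipf's law.
tokens = "the cat sat on the mat and the dog sat on the log".split()
for rank, count in rank_frequency(tokens):
    print(rank, count)
```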

The OS axiom posits that reality operates like a computational construct. Think of it as an evolving cosmic master algorithm—a fractal code that is both our origin and our ultimate destiny. This axiom doesn’t diminish the beauty or mystery of existence; on the contrary, it elevates it. When we think of the universe as a computation, we realize that the laws of physics, the flow of time, and even the emergence of consciousness are not random accidents but inevitable outcomes of this higher-order system.

This concept naturally leads us to the Omega Singularity, a term I use to describe the ultimate point of universal complexity and consciousness. Inspired by Pierre Teilhard de Chardin’s Omega Point, this cosmological singularity is where all timelines of evolution, computation, and consciousness converge into a state of absolute unity—a state where the boundaries between the observer and the observed dissolve entirely. In The Omega Singularity, I elaborate on how this transcendent endpoint represents not just the culmination of physical reality but the quintessence of the “Universal Mind” capable of creating infinite simulations, much like we create virtual worlds today.

But let’s take a step back. How does this all relate to the OS axiom? If the universe is computational, it means that all processes—be they physical, biological, or cognitive—are governed by fundamental rules, much like a computer program. From the fractal geometry of snowflakes to the self-organizing principles of life and intelligence, we see the OS postulate at work everywhere. The question then becomes: Who or what wrote the code? Here, we enter the realm of metaphysics and theology, as explored in Theogenesis and The Syntellect Hypothesis. Could it be that we, as conscious agents, are co-authors of this universal script, operating within the nested layers of the Omega-God itself?

Once the detection mechanism is refined, the next milestone would be to interface that optical signal with a small experimental crystal. The choice of crystal is not arbitrary. Labs might experiment with rare-earth-ion-doped crystals like praseodymium-doped yttrium silicate, known for their capacity to store quantum information for microseconds to milliseconds, or possibly even seconds, under specialized conditions. At an early stage, the device would not store large swaths of complex data but might capture discrete bursts of neural activity corresponding to short-term memory formation. By demonstrating that these bursts can be reliably “written” into the crystal and subsequently “read” out at a later time, researchers would confirm the fundamental principle behind Hippocampus Sync-Banks: that ephemeral neural codes can be transcribed into a stable external medium.

Of course, storing a fleeting pattern is just one half of the puzzle. To realize the Sync-Bank concept fully, the same pattern must be reintroduced into the brain in a way that the hippocampus recognizes. Here, scientists would leverage neural stimulation techniques. In theory, the crystal would “release” the stored patterns in the form of carefully modulated optical or electrical signals. Specialized interfaces near or within the hippocampus—perhaps using microLED arrays or sophisticated electrode grids—would then convert those signals back into the language of the neurons. If the signals are replayed with the correct timing and intensity, the hippocampus might treat them as though they are its own native memory patterns, thereby reactivating the memory. Experimental validation could involve training an animal to associate a particular stimulus with a reward, capturing the neural trace, and then seeing if artificially stimulating that trace at a later time recalls the memory even in the absence of the original stimulus.

Such experiments would inevitably confront thorny technical issues. Neurons and synapses adapt or “rewire” themselves as learning progresses, and the hippocampus is far from static. Overlapping memory traces often share neurons, meaning that reintroducing one memory trace might partially interfere with or activate another. To address this, scientists would need real-time feedback loops that track how the hippocampus responds to artificial signals. Machine learning algorithms might adjust the reintroduced signal to better fit the updated neural state, ensuring that the stored pattern does not clash with changes in the memory landscape. In other words, a second or third generation of prototypes could incorporate adaptive feedback, not just a one-way feed of recorded data. This type of refinement would be crucial to the user’s experience, because we do not simply recall memories as static snapshots; each time we remember something, our brains incorporate subtle new contexts and associations.
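
No such device exists today, but the adaptive-feedback idea can be sketched in the abstract: a closed loop compares the measured hippocampal response with the stored target pattern and nudges the stimulation toward it. In the purely illustrative Python sketch below, every name (adaptive_replay, read_response, apply_stimulus, the gain and tolerance values) is hypothetical:

```python
import numpy as np

def adaptive_replay(stored_pattern, read_response, apply_stimulus,
                    gain=0.1, steps=50, tolerance=0.05):
    """Iteratively adjust a stimulation vector until the measured neural
    response approximates the stored target pattern (illustrative only).

    stored_pattern : target activity vector recovered from the external medium
    read_response  : callable returning the currently measured response vector
    apply_stimulus : callable delivering a stimulation vector to the interface
    """
    stimulus = stored_pattern.copy()        # start from the stored trace itself
    for _ in range(steps):
        apply_stimulus(stimulus)
        response = read_response()
        error = stored_pattern - response   # mismatch between target and brain state
        if np.linalg.norm(error) < tolerance:
            break                           # close enough: stop adjusting
        stimulus = stimulus + gain * error  # nudge the stimulus toward the target
    return stimulus
```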