
Ready for some mind-blowing information…

“The past and the future exist together simultaneously in one geometric object.”

All time exists, all the time.

“Everything everywhere in one frozen moment of time and the past influences the future and the future influences the past in an endless feedback loop. Time is affecting all time, all the time. Every moment is co-creating every other moment both forward and backward in time.”


This is a well-done video that offers a theory of everything and a model that explains how our simulated reality is constructed and how it works. In this article, I’ve summarized the amazing ideas in this video with my own comments. Let’s get into some of the things discussed in “We Are Living In A Simulation – New Evidence!” from Real Spirit Dynamics.

The Future Creates the Past, then the Past Creates the Future

A higher-dimensional quasicrystal creates a 4D quasicrystal, which then projects a 3D quasicrystal that is the fundamental substructure of all reality. Quasicrystals, angles and light form these dimensional projections. Read More →
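The “projection” language above does have a concrete mathematical counterpart: quasicrystals are conventionally built by the cut-and-project method, in which points of a higher-dimensional periodic lattice are sliced by a strip and projected onto a lower-dimensional line or plane. The sketch below is not taken from the video; it is a minimal, standard cut-and-project construction of a 1D quasicrystal (a Fibonacci-like chain) from the 2D integer lattice, and the parameters (the golden-ratio slope, the window width, the scan range N) are illustrative choices of mine.

```python
import numpy as np

# Cut-and-project sketch: build a 1D quasicrystal (a Fibonacci-like chain)
# by projecting points of the 2D integer lattice onto a line of irrational
# slope. All parameters here are illustrative, not taken from the video.

phi = (1 + np.sqrt(5)) / 2          # golden ratio; slope 1/phi gives a Fibonacci chain
theta = np.arctan(1 / phi)          # angle of the "physical" 1D line inside the 2D lattice
e_par = np.array([np.cos(theta), np.sin(theta)])    # direction we project onto
e_perp = np.array([-np.sin(theta), np.cos(theta)])  # internal ("perpendicular") direction

# Acceptance window: the shadow of the lattice unit square on the perpendicular axis.
window = np.sin(theta) + np.cos(theta)

# Scan a block of 2D lattice points, keep those whose perpendicular coordinate
# falls inside the window (the "cut"), then project them onto the line (the "project").
N = 30
points_1d = []
for m in range(-N, N + 1):
    for n in range(-N, N + 1):
        p = np.array([m, n], dtype=float)
        if 0.0 <= p @ e_perp < window:
            points_1d.append(p @ e_par)

points_1d.sort()
spacings = np.round(np.diff(points_1d), 6)

# The chain has exactly two tile lengths (cos(theta) and sin(theta)),
# arranged aperiodically: the hallmark of a quasicrystal.
print("distinct spacings:", sorted(set(spacings)))
print("first 20 tiles  :", ["L" if s > 0.7 else "S" for s in spacings[:20]])
```

Running this prints exactly two tile lengths, whose ratio is the golden ratio, arranged in an aperiodic Fibonacci pattern; that aperiodic order under projection is what distinguishes a quasicrystal from an ordinary periodic crystal.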

Almost two years after the acquisition by Intel, the deep learning chip architecture from startup Nervana Systems will finally be moving from its codenamed “Lake Crest” status to an actual product.

In that time, Nvidia, which owns the deep learning training market by a long shot, has had time to firm up its commitment to this expanding (if not overhyped in terms of overall industry dollar figures) market, with new deep learning-tuned GPUs and appliances on the horizon as well as software tweaks to make training at scale more robust. In other words, even with solid technology at a reasonable price point, it will take a herculean effort for Intel to bring Nervana to the fore of the training market and to push its other products for inference at scale along with that current, an effort Intel seems willing to invest in given its aggressive roadmap for the Nervana-based lineup.

The difference now is that at least we have some insight into how (and by how much) this architecture differs from GPUs, and where it might carve out a performance advantage and, more certainly, a power efficiency one.

Read more

Lecture by Professor Oussama Khatib for Introduction to Robotics (CS223A) in the Stanford Computer Science Department.

Lecture 1 | Introduction to Robotics

In the first lecture of the quarter, Professor Khatib provides an overview of the course. CS223A is an introduction to robotics, which covers topics such as Spatial Descriptions, Forward Kinematics, Inverse Kinematics, Jacobians, Dynamics, Motion Planning and Trajectory Generation, Position and Force Control, and Manipulator Design.
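To make one of those topics concrete, here is a minimal sketch of forward kinematics and the Jacobian for a hypothetical 2-link planar arm; the link lengths and joint angles below are illustrative values, not taken from the lecture.

```python
import numpy as np

# Forward kinematics for a hypothetical 2-link planar arm -- a minimal
# illustration of one CS223A topic. Link lengths l1, l2 and joint angles
# q1, q2 are illustrative values, not from the course material.

def forward_kinematics(q1, q2, l1=1.0, l2=0.7):
    """Return the (x, y) position of the end effector for joint angles q1, q2 (radians)."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y

def jacobian(q1, q2, l1=1.0, l2=0.7):
    """2x2 Jacobian mapping joint velocities (dq1, dq2) to end-effector velocity (dx, dy)."""
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

if __name__ == "__main__":
    q1, q2 = np.deg2rad(30), np.deg2rad(45)
    print("end effector:", forward_kinematics(q1, q2))
    print("Jacobian:\n", jacobian(q1, q2))
```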

Read more