
Yuri Milner is spending $100 million on a probe that could travel to Alpha Centauri within a generation—and he’s recruited Mark Zuckerberg and Stephen Hawking to help. In an interview with The Atlantic, Milner makes his case for star travel.

In the Southern Hemisphere’s sky, there is a constellation, a centaur holding a spear, its legs raised in mid-gallop. The creature’s front hoof is marked by a star that has long hypnotized humanity, with its brightness, and more recently, its proximity.

Since the dawn of written culture, at least, humans have dreamt of star travel. As the nearest star system to Earth, Alpha Centauri is the most natural subject of these dreams. To a certain cast of mind, the star seems destined to figure prominently in our future.

Read more

Hmm… that would explain Alzheimer's disease: it'd be like some unabashedly evil version of a smartphone data cap!

Or not.

wink


NEW YORK — Is the universe just an enormous, fantastically complex simulation? If so, how could we find out, and what would that knowledge mean for humanity?

These were the big questions that a group of scientists, as well as one philosopher, tackled on April 5 during the 17th annual Isaac Asimov Debate here at the American Museum of Natural History. The event honors Asimov, the visionary science-fiction writer, by inviting experts in diverse fields to discuss pressing questions on the scientific frontiers.

Older, but interesting…


Under the hypothesis that ordinary matter is ultimately made of subelementary constitutive primary charged entities or "partons" bound in the manner of traditional elementary Planck oscillators (a time-honored classical technique), it is shown that a heretofore uninvestigated Lorentz force (specifically, the magnetic component of the Lorentz force) arises in any accelerated reference frame from the interaction of the partons with the vacuum electromagnetic zero-point field (ZPF). Partons, though asymptotically free at the highest frequencies, are endowed with a sufficiently large "bare mass" to allow interactions with the ZPF at very high frequencies up to the Planck frequencies. This Lorentz force, though originating at the subelementary parton level, appears to produce an opposition to the acceleration of material objects at a macroscopic level having the correct characteristics to account for the property of inertia. We thus propose the interpretation that inertia is an electromagnetic resistance arising from the known spectral distortion of the ZPF in accelerated frames. The proposed concept also suggests a physically rigorous version of Mach's principle. Moreover, some preliminary independent corroboration is suggested for ideas proposed by Sakharov (Dokl. Akad. Nauk SSSR 177, 70 (1968) [Sov. Phys. Dokl. 12, 1040 (1968)]) and further explored by one of us [H. E. Puthoff, Phys. Rev. A 39, 2333 (1989)] concerning a ZPF-based model of Newtonian gravity, and for the equivalence of inertial and gravitational mass as dictated by the principle of equivalence.

Read more

Another pre-quantum-computing interim solution for supercomputing. So we have this as well as Nvidia's GPUs. I wonder who else?


In summer 2015, US president Barack Obama signed an order intended to provide the country with an exascale supercomputer by 2025. The machine would be 30 times more powerful than today's leading system: China's Tianhe-2. Based on extrapolations of existing electronic technology, such a machine would draw close to 0.5 GW – the entire output of a typical nuclear plant. This calls into question the sustainability of continuing down the same path for gains in computing.
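The 0.5 GW figure follows from a simple linear extrapolation. A minimal sketch, using published TOP500 figures for Tianhe-2 (roughly 33.9 petaflops peak at about 17.8 MW — these numbers are assumptions, not from the article):

```python
# Back-of-envelope check of the ~0.5 GW exascale projection, assuming
# flops-per-watt efficiency stays fixed at Tianhe-2's level.
tianhe2_pflops = 33.9      # peak performance, petaflops (TOP500 figure)
tianhe2_mw = 17.8          # power draw, megawatts (TOP500 figure)

exascale_pflops = 1000.0   # 1 exaflops = 1000 petaflops

# Naive linear scaling: ~30x the performance needs ~30x the power.
scale = exascale_pflops / tianhe2_pflops
exascale_mw = tianhe2_mw * scale

print(f"scale factor: {scale:.1f}x")                       # ~29.5x
print(f"projected draw: {exascale_mw:.0f} MW (~{exascale_mw/1000:.1f} GW)")
```

The point of the exercise is that the projection only holds if efficiency does not improve, which is exactly why the article turns to optical interconnect next.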

One way to reduce the energy cost would be to move to optical interconnect. In his keynote at OFC in March 2016, Professor Yasuhiko Arakawa of the University of Tokyo said high performance computing (HPC) will need optical chip-to-chip communication to provide the data bandwidth for future supercomputers. But digital processing itself presents a problem as designers try to deal with issues such as dark silicon – the need to disable large portions of a multibillion-transistor processor at any one time to prevent it from overheating. Photonics may have an answer there as well.

Optalysys founder Nick New says: “With the limits of Moore’s Law being approached, there needs to be a change in how things are done. Some technologies are out there, like quantum computing, but these are still a long way off.”

Read more

When I read articles like this one, I wonder if folks really understand the full impact of what quantum computing will bring to our daily lives.


The high performance computing market is going through a technology transition – the Co-Design transition. As many articles have already discussed, this transition emerged to solve the performance bottlenecks of today's infrastructures and applications, bottlenecks created by multi-core CPUs and the existing CPU-centric system architecture.

How are multi-core CPUs the source of today's performance bottlenecks? To understand that, we need to go back in time to the era of single-core CPUs. Back then, performance gains came from increases in CPU frequency and from reductions in the latency of networking functions (network adapters and switches). Each new generation of product brought faster CPUs and lower-latency network adapters and switches, and that combination was the main performance factor. But this could not continue forever. The CPU frequency could not be increased further due to power limitations, so instead of increasing the speed of each application process, we began using more CPU cores in parallel, thereby executing more processes at the same time. This enabled us to continue improving application performance, not by running faster, but by running more at the same time.
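The "running more at the same time" shift above can be sketched with a toy workload split across worker processes. The function names and the sum-of-squares task are illustrative only, not from the article:

```python
# Minimal sketch of multi-core parallelism: the same total work is
# divided into chunks and executed by several processes at once,
# instead of being sped up on a single faster core.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over a half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Split [0, n) into one chunk per worker; the last chunk absorbs
    # any remainder so the union of chunks covers the whole range.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    assert parallel_sum_of_squares(n) == sum(i * i for i in range(n))
```

Note that the final `sum(...)` over the workers' results is itself a small aggregation step — exactly the kind of cross-core synchronization the next paragraph identifies as the new bottleneck.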

This new paradigm of increasing the number of CPU cores dramatically increased the burden on the interconnect and, moreover, turned the interconnect into the main performance enabler of the system. The key performance concern became how fast all the CPU processes could be synchronized and how fast data could be aggregated and distributed between them.
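A toy model makes the synchronization concern concrete: aggregating one value across P processes with a binary reduction tree takes on the order of log2(P) network hops, so per-hop interconnect latency sets the floor on how fast all cores can synchronize. The 1.5 µs hop latency below is an assumed illustrative figure, not from the article:

```python
# Toy cost model for a tree-based allreduce (reduce down the tree,
# then broadcast the result back up): step count grows with log2(P),
# so interconnect latency dominates as process counts grow.
import math

def allreduce_time_us(processes, hop_latency_us=1.5):
    """Estimated microseconds for one reduce + broadcast across P processes."""
    steps = 2 * math.ceil(math.log2(processes))  # down the tree, then back
    return steps * hop_latency_us

for p in (16, 1024, 65536):
    print(f"{p:6d} processes -> ~{allreduce_time_us(p):5.1f} us per sync")
```

Doubling the process count adds a fixed two hops rather than doubling the time, which is why lower-latency adapters and switches — and offloading aggregation into the network itself — matter so much in the Co-Design approach the article describes.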

Read more