A DOCTOR who has dedicated his work to the quest for eternal life insists the record for the oldest living person will soon fall, and that someone already alive will keep going until they reach 1,000.
The odds are now better than ever that future explorers, both robotic and human, will be able to take samples of the Moon’s hidden interior in deep impact basins like Crisium and Moscoviense. This gives planners more options on where to establish the first science colony.
Finding and sampling the Moon’s ancient mantle — one of the science drivers for sending robotic spacecraft and future NASA astronauts to the Moon’s South Pole-Aitken basin — may be just as achievable at similar deep impact basins scattered around the lunar surface.
At least that’s the view reached by planetary scientists who have been analyzing the most recent data from NASA’s Gravity Recovery and Interior Laboratory (GRAIL) and Lunar Reconnaissance Orbiter (LRO) missions, as well as from Japan’s SELENE (Kaguya) lunar orbiter.
The consensus is that the lunar crust is actually thinner than previously thought.
SpaceX has just published a stunning 360-degree video of its most recent feat: landing the first stage of the Falcon 9 rocket on a drone ship in the ocean. If you ever wanted to feel like you’re standing under a spaceship that’s landing, without the awful side effect of being burned to shreds, here’s your chance.
To be honest, we thought we had seen every angle of this historic moment by this point. We watched it happen live. We watched it in 4K. We saw photos that were taken from just about every conceivable and terrifying angle.
But SpaceX has never released a 360-degree video, so you’ve definitely never seen anything quite like this. Watching the rocket descend from above, from the perspective of the ship, is extremely surreal, especially when you hear the landing rockets kick in. So sit back, throw your phone in a headset if you have one, and hit play. This will hopefully be just the first of many. (Now if only they had filmed 360-degree videos of the ones that blew up.)
I agree. Look at Australia and Canada, as well as Israel and other tech hubs rising up across Asia. In the next few years, Australia, China, and Israel will be key areas that folks should pay attention to as part of the “vNext Tech Valley” standard. Granted, Silicon Valley will still be a leader; however, these other areas will be closing that gap.
Tech.eu contributor Jennifer Baker caught up with Ken Gabriel at the EIT Innovation Forum to talk about the difference between EU and US startups.
“These cables, whilst stylish, still put a large emphasis on practicality – having been crafted from durable, braided nylon designed to withstand wear and tear. The range also goes further, the company professes, by solving everyday problems such as ‘forgetting your cable, running out of battery on-the-go, or straining to use your device while charging’.”
Now, that’s an exhibit!
May 5, 2016, will mark the opening of a new and exciting exhibit at Chicago’s famed Museum of Science and Industry: an in-depth and interactive look behind the curtain at the Defense Advanced Research Projects Agency (DARPA).
DARPA was created in 1958 at the peak of the Cold War in response to the Soviet Union’s launch of Sputnik, the world’s first manmade satellite, which passed menacingly over the United States every 96 minutes. Tasked with preventing such strategic surprises in the future, the agency has achieved its mission over the years in part by creating a series of technological surprises of its own, many of which are highlighted in the Chicago exhibit, “Redefining Possible.”
“We are grateful to Chicago’s Museum of Science and Industry for inviting us to tell the DARPA story of ambitious problem solving and technological innovation,” said DARPA Deputy Director Steve Walker, who will be on hand for the exhibit’s opening day. “Learning how DARPA has tackled some of the most daunting scientific and engineering challenges—and how it has tolerated the risk of failure in order to have major impact when it succeeds—can be enormously inspiring to students. And for adults, we hope the exhibit will serve as a reminder that some of the most exciting work going on today in fields as diverse as chemistry, engineering, cyber defense and synthetic biology is happening with federal support, in furtherance of pressing national priorities.”
I do love Nvidia!
During the past nine months, an Nvidia engineering team built a self-driving car with one camera, one Drive PX embedded computer and only 72 hours of training data. Nvidia published an academic preprint of the DAVE2 project’s results, entitled “End to End Learning for Self-Driving Cars,” on arXiv.org, which is hosted by the Cornell University Library.
The Nvidia project, called DAVE2, is named after a 10-year-old Defense Advanced Research Projects Agency (DARPA) project known as DARPA Autonomous Vehicle (DAVE). Although neural networks and autonomous vehicles seem like just-invented technology, researchers such as Google’s Geoffrey Hinton, Facebook’s Yann LeCun and the University of Montreal’s Yoshua Bengio have collaboratively researched this branch of artificial intelligence for more than two decades. And DARPA’s DAVE application of neural networks to autonomous vehicles was itself preceded by the ALVINN project, developed at Carnegie Mellon in 1989. What has changed is that GPUs have made building on their research economically feasible.
Neural networks and image recognition applications such as self-driving cars have exploded recently for two reasons. First, graphics processing units (GPUs) used to render graphics in mobile phones became powerful and inexpensive. GPUs densely packed onto board-level supercomputers are very good at solving massively parallel neural network problems, and are inexpensive enough for every AI researcher and software developer to buy. Second, large, labeled image datasets have become available to train those GPU-based networks to see and perceive the world of objects captured by cameras.
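The core idea behind DAVE2’s end-to-end approach is simple: feed raw camera frames into a convolutional network and train it to reproduce the steering commands recorded from a human driver on the same frames. Here is a minimal PyTorch sketch of that pattern; the layer sizes, input resolution and training loop are illustrative assumptions, not Nvidia’s published architecture.

```python
# Minimal sketch of an end-to-end steering network in PyTorch.
# Layer sizes and the training step are illustrative assumptions,
# not Nvidia's published DAVE2 architecture.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor: raw camera pixels in.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        # Regression head: a single steering value out.
        # 3840 = 64 channels * 3 * 20, the feature map size for a 3x66x200 input.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3840, 100), nn.ReLU(),
            nn.Linear(100, 10), nn.ReLU(),
            nn.Linear(10, 1),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Toy training step on random tensors, standing in for logged
# (camera frame, human steering angle) pairs.
model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 66, 200)   # batch of camera frames
angles = torch.randn(8, 1)            # recorded steering angles

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(frames), angles)
loss.backward()
optimizer.step()
```

With enough logged driving data, this same gradient-descent loop (run on GPUs rather than on this toy CPU example) is what the article argues has only recently become economically feasible.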
Closing the instability gap.
(Phys.org)—It might be said that the most difficult part of building a quantum computer is not figuring out how to make it compute, but rather finding a way to deal with all of the errors it inevitably makes. Errors arise because of the constant interaction between the qubits and their environment, which can result in photon loss that in turn causes the qubits to randomly flip to an incorrect state.
In order to flip the qubits back to their correct states, physicists have been developing an assortment of quantum error correction techniques. Most of them work by repeatedly measuring the system to detect errors and then correcting them before they can proliferate. These approaches typically carry a very large overhead, with a large portion of the computing power going to error correction.
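To see why measurement-based correction carries so much overhead, consider the simplest possible example: a three-bit repetition code. The sketch below is a toy classical simulation in Python; classical bits stand in for qubits and only bit-flip errors are modeled, so it illustrates the measure-and-correct cycle in the abstract, not the scheme Kapit proposes.

```python
# Toy classical simulation of the 3-bit repetition code, illustrating the
# measure-syndrome-then-correct cycle of active error correction.
# Classical bits stand in for qubits; this is NOT Kapit's passive scheme,
# which avoids measurement entirely.
import random

def noisy_cycle(logical_bit, p_flip):
    """Encode, apply independent bit-flip noise, measure syndromes, correct."""
    q = [logical_bit] * 3                               # redundant encoding
    q = [b ^ (random.random() < p_flip) for b in q]     # noise channel

    # Parity (syndrome) measurements: they reveal WHERE an error sits
    # without revealing the encoded value itself.
    s1 = q[0] ^ q[1]
    s2 = q[1] ^ q[2]

    if s1 and not s2:
        q[0] ^= 1          # error on bit 0
    elif s1 and s2:
        q[1] ^= 1          # error on bit 1
    elif s2 and not s1:
        q[2] ^= 1          # error on bit 2

    # Decode by majority vote.
    return int(sum(q) >= 2)

def logical_error_rate(p_flip, trials=100_000):
    return sum(noisy_cycle(0, p_flip) for _ in range(trials)) / trials

for p in (0.01, 0.05, 0.10):
    print(f"physical flip rate {p:.2f} -> logical error rate {logical_error_rate(p):.4f}")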
In a new paper published in Physical Review Letters, Eliot Kapit, an assistant professor of physics at Tulane University in New Orleans, has proposed a different approach to quantum error correction. His method takes advantage of a recently discovered unexpected benefit of quantum noise: when carefully tuned, quantum noise can actually protect qubits against unwanted noise. Rather than actively measuring the system, the new method passively and autonomously suppresses and corrects errors, using relatively simple devices and relatively little computing power.