The intelligence community’s R&D group wants technology that can detect attempts to evade biometric collection.
Random Idea on Inequality and an Attempt to Fix it:
******A one-time Mandatory 50% Giving-Pledge Commitment by the World's Billionaires (while they are still alive).******
The massive assets collected through this one-time Pledge should then be managed by an extremely broad team that is multi-ethnic, multi-academic, gender diverse, and drawn from all ranks of society, etc.
How the assets attained through this Mandatory Giving Pledge will be used will partly be decided by this extremely broad team. Every step and decision this team makes will be constantly open-sourced on the Internet, 24/7.
Hopefully we can then take a step toward making the World a better place for us all and our environment.
Richard Feynman suggested that it takes a quantum computer to simulate large quantum systems, but a new study shows that a classical computer can work when the system has loss and noise.
The field of quantum computing originated with a question posed by Richard Feynman. He asked whether or not it was feasible to simulate the behavior of quantum systems using a classical computer, suggesting that a quantum computer would be required instead [1]. Saleh Rahimi-Keshari from the University of Queensland, Australia, and colleagues [2] have now demonstrated that a quantum process that was believed to require an exponentially large number of steps to simulate on a classical computer could in fact be simulated in an efficient way if the system in which the process occurs has sufficiently large loss and noise.
The quantum process considered by Rahimi-Keshari et al. is known as boson sampling, in which the probability distribution of photons (bosons) that undergo a linear optical process [3] is measured or sampled. In experiments of this kind [4, 5], N single photons are sent into a large network of beam splitters (half-silvered mirrors) and combined before exiting through M possible output channels. The calculation of the probability distribution for finding the photons in each of the M output channels is equivalent to calculating the permanent of a matrix. The permanent is the same as the more familiar determinant but with all of the minus signs replaced with plus signs.
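As a rough illustration (my own minimal sketch, not part of the study), the brute-force permanent mirrors the determinant's sum over permutations but without the alternating signs; its factorial cost is the reason boson sampling was expected to be hard to simulate classically.

```python
import itertools
import numpy as np

def permanent(matrix):
    """Permanent of a square matrix: sum over all permutations,
    i.e. the determinant's formula with every minus sign made a plus."""
    n = len(matrix)
    return sum(
        np.prod([matrix[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

# For a 2x2 matrix [[a, b], [c, d]]: det = a*d - b*c, perm = a*d + b*c.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.det(A))  # -2.0
print(permanent(A))      # 10.0
```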
The US military has revealed plans for a hi-tech holographic ‘space command’.
It would allow military bosses to see in an instant where everything from enemy satellites to orbiting space stations is.
DARPA says the system will help them monitor enemy threats in space.
Because of a plethora of data from sensor networks, Internet of Things devices, and big-data resources, combined with a dearth of data scientists to effectively mold that data, we are leaving many important applications – from intelligence to science and workforce management – on the table.
It is a situation the researchers at DARPA want to remedy with a new program called Data-Driven Discovery of Models (D3M). The goal of D3M is to develop algorithms and software that help overcome the data-science expertise gap by enabling non-experts to construct complex empirical models through automation of large parts of the model-creation process. If successful, researchers using D3M tools will effectively have access to an army of “virtual data scientists,” DARPA stated.
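As a loose illustration of what automating parts of model creation can look like (a hypothetical sketch of the general idea, not anything from D3M's actual tools), a program can search over candidate model families and keep whichever scores best, so a non-expert never has to pick an algorithm by hand:

```python
# Illustrative only: automated model selection via cross-validation.
# The candidate models and scoring setup here are assumptions for the sketch.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(),
    "svm": SVC(),
}
# Score each candidate with 5-fold cross-validation and keep the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```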
Computer chips have stopped getting faster. For the past 10 years, chips’ performance improvements have come from the addition of processing units known as cores.
In theory, a program on a 64-core machine would be 64 times as fast as it would be on a single-core machine. But it rarely works out that way. Most computer programs are sequential, and splitting them up so that chunks of them can run in parallel causes all kinds of complications.
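To make that gap concrete (this calculation is my addition, not the article's), Amdahl's law gives the best-case speedup when only a fraction of a program can run in parallel: the sequential remainder caps the overall gain.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup (Amdahl's law) when only part of a program
    parallelizes; the sequential part limits the total improvement."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even on 64 cores, a program that is 90% parallelizable gains under 9x.
print(round(amdahl_speedup(0.90, 64), 1))  # ~8.8
print(round(amdahl_speedup(1.00, 64), 1))  # 64.0, the idealized case
```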
In the May/June issue of the Institute of Electrical and Electronics Engineers’ journal Micro, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new chip design they call Swarm, which should make parallel programs not only much more efficient but easier to write, too.