Archive for the ‘military’ category
Sep 21, 2014
Posted by Seb in categories: engineering, innovation, military
We’ve seen several attempts at making jetpacks that fly, but over at Arizona State University, a team is developing one for those who prefer staying closer to the ground. The DARPA-funded project (naturally) is called 4MM, for “4-minute mile,” and it aims to develop a jetpack that gives soldiers the extra boost needed to run a full mile within four minutes. Sure, soldiers are physically fit, but the jetpack is meant to ensure each one can do a four-minute mile, even if they’re not particularly fast runners, and even if they’re carrying heavy equipment and armor.
Thus far, testers have been shaving seconds off their running times even while carrying the 11-pound jetpack, though the ASU researchers still have a ways to go to achieve their goal. Since being able to move fast without much rest can save your life on the battlefield, Harvard’s Soft Exosuit inventors should totally get together with these ASU researchers to make the ultimate getaway suit.
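For a sense of what the 4MM goal demands, the target pace works out from nothing more than the mile-in-four-minutes figure quoted above (a quick back-of-the-envelope sketch; only the mile and four-minute numbers come from the post):

```python
# Back-of-the-envelope pace math for the 4MM goal: one mile in four minutes.
MILE_M = 1609.344      # meters in a statute mile
GOAL_S = 4 * 60        # four minutes, in seconds

speed_ms = MILE_M / GOAL_S                # required average speed, in m/s
speed_mph = 60 / 4                        # 1 mile per (4/60) hour = 15 mph
pace_per_lap = GOAL_S / (MILE_M / 400)    # seconds per 400 m track lap

print(f"{speed_ms:.2f} m/s ({speed_mph:.0f} mph)")
print(f"~{pace_per_lap:.0f} s per 400 m lap")
```

In other words, the jetpack has to keep a loaded soldier averaging about 6.7 m/s (15 mph), roughly a 60-second lap pace, for the full mile.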
Sep 18, 2014
Posted by Steve Fuller in categories: alien life, biological, cyborg, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism
Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.
I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.
But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.
Jun 23, 2014
Posted by Seb in categories: biotech/medical, military, neuroscience
Cameron Scott — Singularity Hub
Deep brain stimulation as a treatment for epilepsy and movement disorders, most notably Parkinson’s disease, has rapidly gone from experimental to standard practice. With devices to provide delicate electro-stimulation to the brain now available and with maps of which neurons do what steadily gaining detail, attention is now shifting to using the approach to treat mental illness.
Jun 10, 2014
Posted by Harry J. Bentham in categories: drones, ethics, government, law, law enforcement, military, policy
Jun 2, 2014
Posted by Seb in categories: biotech/medical, military
Globe Staff — The Boston Globe
Keeping your attention
A growing body of research suggests noninvasive brain stimulation, such as transcranial direct current stimulation (tDCS), may improve specific cognitive skills in healthy subjects. Put another way, a small intermittent shock to your brain might keep your attention from eroding throughout the day.
May 25, 2014
‘In the Year 2054: Rifles will 3D print their own bullets’ – At Least According to Call of Duty Developer
Posted by Seb in categories: 3D printing, entertainment, futurism, military
by Eddie Krassenstein — 3DPrint.com
It’s always fun predicting the future. People do it all the time because it is entertaining to imagine a world that we or our children will one day have the chance to experience. We’ve seen films do this from time to time since the beginning of cinema. There was the hoverboard in ‘Back to the Future’, the jet packs in ‘The Rocketeer’, teleportation in ‘Star Trek’, and the list goes on. Some of these inventions have already become a reality, while we are still awaiting the arrival of others.
Another Star Trek prediction was that of the Replicator, which was used basically to 3D print objects, especially food. These have already begun to take shape in current times, in the form of 3D printers. MakerBot even calls its consumer-level 3D printer the ‘Replicator’. Sure, it may not work the exact same way, but it’s close enough.
Now, one video game development company, Sledgehammer Games, is trying to predict the future in its upcoming video game. We’re sure that most of you are well aware of the Call of Duty video game series. ‘Call of Duty: Advanced Warfare’ is currently scheduled for release this coming November. In the game, which takes place in the year 2054, Sledgehammer Games will try its hand at predicting the future. One of the more notable futuristic ideas in the game is that of the 3D-Printer Rifle.
Apr 12, 2014
Allen McDuffee — Wired
The U.S. Navy is tapping the power of the Force to wage war.
Its latest weapon is an electromagnetic railgun launcher. It uses a form of electromagnetic energy known as the Lorentz force to hurl a 23-pound projectile at speeds exceeding Mach 7. Engineers have already tested this futuristic weapon on land, and the Navy plans to begin sea trials aboard the Joint High Speed Vessel Millinocket in 2016.
“The electromagnetic railgun represents an incredible new offensive capability for the U.S. Navy,” Rear Adm. Bryant Fuller, the Navy’s chief engineer, said in a statement. “This capability will allow us to effectively counter a wide range of threats at a relatively low cost, while keeping our ships and sailors safer by removing the need to carry as many high-explosive weapons.”
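The figures quoted in the article already imply how much punch the weapon carries without any explosives. A rough estimate of the muzzle energy from a 23-pound projectile at Mach 7 (converting Mach at a sea-level speed of sound of ~343 m/s, since the article does not state test conditions; this is an illustrative sketch, not Navy data):

```python
# Rough kinetic-energy estimate from the figures quoted in the post:
# a 23-pound projectile at Mach 7. Speed of sound taken as ~343 m/s
# (sea level); actual test conditions are not given in the article.
LB_TO_KG = 0.45359237
MACH1_MS = 343.0

mass_kg = 23 * LB_TO_KG     # ~10.4 kg
speed_ms = 7 * MACH1_MS     # ~2400 m/s

energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"~{energy_j / 1e6:.0f} MJ muzzle energy")
```

That works out to roughly 30 megajoules delivered purely as kinetic energy, which is why the Navy can talk about countering threats without carrying as many high-explosive rounds.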
Apr 1, 2014
Posted by Andres Agostini in categories: 3D printing, aging, alien life, astronomy, augmented reality, automation, big data, biological, bionic, bioprinting, biotech/medical, business, chemistry, climatology, complex systems, computing, cosmology, counterterrorism, cybercrime/malcode, cyborg, defense, disruptive technology, driverless cars, drones, economics, education, energy, engineering, environmental, ethics, evolution, existential risks, exoskeleton, finance, food, futurism, genetics, geopolitics, government, habitats, hardware, health, homo sapiens, human trajectories, information science, innovation, internet, law, law enforcement, life extension, lifeboat, medical, military, mobile phones, nanotechnology, neuroscience, open access, open source, philosophy, physics, policy, posthumanism, privacy, robotics/AI, science, scientific freedom, security, singularity, space, supercomputing, surveillance, sustainability, transhumanism, transparency, transportation
Mar 19, 2014
Posted by Seb in categories: defense, military, robotics/AI
By Evan Ackerman — IEEE Spectrum
Yesterday, DARPA announced the four companies that’ll be competing to develop a new experimental aircraft that combines the efficiency of an airplane with the versatility of a helicopter. It’ll be something like a V-22 Osprey, except that DARPA is hoping for “radical improvements in vertical and cruise flight capabilities.” Three of the companies provided concept art to DARPA; Boeing’s Phantom Swift is pictured above. And the thing that every proposal has in common? They’re all robots.
Robots weren’t a specific requirement for the VTOL X-Plane, but DARPA says that the best proposals ended up being unmanned. That shouldn’t be a surprise: in a contest based on speed, efficiency, and payload, a human pilot would be a significant disadvantage. Humans are fragile and require a lot of maintenance, and it’s increasingly arguable that a human in an aircraft can be more of a liability than an asset, at least in some cases, such as cargo delivery into dangerous areas.