AI’s Missing Link according to big tech.
We don’t know how to make software that learns without explicit instruction—but we need to if dreams of humanlike AI are to come true.
Philip Lubin describes his approach to achieving laser-driven spacecraft propulsion in the near term.
A 100 kg robotic craft could be sent to Mars in 3 days.
A 1 kg craft could reach Mars overnight.
A 50–100 GW beam could send a wafercraft to 30% of the speed of light, and the acceleration would take about 10 minutes.
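To make those figures concrete, here is a rough sanity check of my own (a sketch, not Lubin's actual model): it treats the beam as pushing a perfectly reflective sail, so thrust is F = 2P/c, and it ignores beam divergence, relativistic effects, and all losses. The ~1 g wafercraft mass is an assumption for illustration.

```python
# Back-of-the-envelope photon-pressure calculation for directed-energy propulsion.
# Idealized: perfectly reflective sail, constant illumination, no beam divergence,
# no relativistic corrections. The wafercraft mass below is an illustrative assumption.

C = 299_792_458.0  # speed of light, m/s

def thrust_newtons(power_watts: float) -> float:
    """Photon-pressure thrust on a perfectly reflective sail: F = 2P/c."""
    return 2.0 * power_watts / C

def seconds_to_reach(speed_m_s: float, power_watts: float, mass_kg: float) -> float:
    """Time under constant illumination to reach speed_m_s (non-relativistic)."""
    acceleration = thrust_newtons(power_watts) / mass_kg  # a = F/m
    return speed_m_s / acceleration                       # t = v/a

if __name__ == "__main__":
    beam_power = 100e9        # 100 GW, the top of the quoted 50-100 GW range
    wafer_mass = 0.001        # assumed ~1 g wafercraft (illustrative only)
    target_speed = 0.3 * C    # 30% of the speed of light

    t = seconds_to_reach(target_speed, beam_power, wafer_mass)
    print(f"Thrust on the sail: {thrust_newtons(beam_power):.0f} N")
    print(f"Time to 0.3c: {t:.0f} s (about {t / 60:.1f} minutes)")
```

Under these idealized assumptions a 100 GW beam yields roughly 670 N of thrust and reaches 0.3c in a couple of hundred seconds; the quoted "about 10 minutes" presumably reflects a heavier wafer, lower average power, or the losses this sketch ignores.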
A video about how fast technological progress is going, how much technology has improved the world and the potential for technology to solve our most pressing challenges. Inspired in part by the book Abundance by Peter Diamandis and Steven Kotler, and by the video “Shift Happens 3.0” (also known as “Did You Know”) by Karl Fisch and Scott McLeod: https://www.youtube.com/watch?v=cL9Wu2kWwSY
Among the things mentioned are developments and possibilities within information technology, biotechnology, nanotechnology and artificial intelligence. The video also touches upon how several of these developments are exponential, but it does not get into the realm of technological singularity and the thoughts of people such as Ray Kurzweil, which is the topic of some of my other videos.
The guy who is speaking at the end is Peter Diamandis. The whole talk can be seen here: https://www.youtube.com/watch?v=1KxckI8Ttpw
SOURCES AND JUSTIFICATION FOR CLAIMS
http://howisearth.wordpress.com/2012/08/12/did-you-know-the-…you-think/
MUSIC
“I can’t stop” (the title does not really come as a shock) by Flux Pavilion. Thank you Flux!
If you like it, you can buy it here: http://itunes.apple.com/gb/album/i-cant-stop-single/id510073535 or some other place.
The advent of 5G is likely to bring another splurge of investment, just as orders for 4G equipment are peaking. The goal is to be able to offer users no less than the “perception of infinite capacity”, says Rahim Tafazolli, director of the 5G Innovation Centre at the University of Surrey. Rare will be the device that is not wirelessly connected, from self-driving cars and drones to the sensors, industrial machines and household appliances that together constitute the “internet of things” (IoT).
It is easy to dismiss all this as “a lot of hype”, in the words of Kester Mann of CCS Insight, a research firm. When it comes to 5G, much is still up in the air: not only which band of radio spectrum and which wireless technologies will be used, but what standards makers of network gear and handsets will have to comply with. Telecoms firms have reached consensus only on a set of rough “requirements”. The most important are connection speeds of up to 10 gigabits per second and response times (“latency”) of below 1 millisecond.
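As a back-of-the-envelope illustration of my own (not from the article) of what those two requirements mean in practice, the sketch below converts the 10 Gbit/s peak rate and 1 ms latency into a file-transfer time and a round-trip delay; the 5 GB file size is an arbitrary assumption.

```python
# Rough illustration of the quoted 5G "requirements":
# up to 10 Gbit/s throughput and below 1 ms latency.
# The example file size is an arbitrary assumption.

PEAK_RATE_BPS = 10e9   # 10 gigabits per second
LATENCY_S = 1e-3       # 1 millisecond response time

def transfer_seconds(size_bytes: float, rate_bps: float = PEAK_RATE_BPS) -> float:
    """Seconds to move size_bytes at the given line rate (ignores protocol overhead)."""
    return size_bytes * 8 / rate_bps

if __name__ == "__main__":
    movie_bytes = 5e9  # an assumed 5 GB high-definition film
    print(f"5 GB at 10 Gbit/s: {transfer_seconds(movie_bytes):.1f} s")   # ~4 s
    print(f"Round trip at 1 ms latency: {2 * LATENCY_S * 1000:.0f} ms")  # 2 ms
```

At the quoted peak rate a 5 GB film moves in about four seconds, which is the kind of headline figure behind the “perception of infinite capacity” pitch.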
Why digital assistants have a female voice, explained.
At an artificial intelligence conference, a good question gets a surprising answer.
Scientists have been running tests where artificial intelligences cultivate appropriate social behaviour by responding to simple narratives.
I hate to break the news to the UN’s CITO: has she ever heard of “quantum technology”? After AI floods onto the scene, the next innovation that I and others are working on is quantum computing, which will make AI, the internet, cybersecurity, devices, platforms, and medical technology more advanced, with incredible performance.
The United Nations Chief Information Technology Officer spoke with TechRepublic about the future of cybersecurity, social media, and how to fix the internet and build global technology for social good.
Artificial intelligence, said United Nations chief information technology officer Atefeh Riazi, might be the last innovation humans create.
Personally, I am not a Breitbart fan; however, I am publishing this article to highlight something that I noticed. The article cites the Three Laws of Robotics, which are old and need to be updated. One of them, “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” no longer holds. Why? Because as long as criminals have enough money to pay others to re-engineer or re-program robots, those robots can become dangerous to humans. Today’s drones are a good example, already misused by stalkers, drug cartels, and others.
Robotics, once the almost exclusive purview of science fiction, is now approaching a point at which it will be capable of dramatic influence over humanity. These advancements are as much a lesson in caution as in the wonder of the human imagination.
Using gaming chips to process people’s images and voices definitely makes sense, especially as we move further into the AI-connected experience.
Facebook, Google and Microsoft are tapping the power of a vintage computer gaming chip to raise your smartphone’s IQ with artificially intelligent programs that recognize faces and voices, translate conversations on the fly and make searches faster and more accurate.
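For readers wondering why a “gaming chip” (a GPU) helps at all, here is a minimal sketch, assuming PyTorch is installed, of the usual pattern: the same model runs unchanged on CPU or GPU, and the GPU simply executes the highly parallel matrix math much faster. The toy network is a stand-in of my own, not any company’s actual face- or voice-recognition model.

```python
# Minimal sketch of the GPU pattern described above, assuming PyTorch is installed.
# The tiny network is a made-up stand-in, not any company's face/voice recognizer.
import torch
import torch.nn as nn

# Use the "gaming chip" (GPU) when one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy classifier: the code is identical whichever device it runs on.
model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)

batch = torch.randn(64, 1024, device=device)  # 64 fake feature vectors
with torch.no_grad():
    scores = model(batch)  # the parallel matrix math is what the GPU accelerates

print(f"Inference ran on: {device}")
```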
This article on killer robots, and how governments should address the threat of killer robots at a national level, is amusing. At a national level, if (in my case) the US were invaded, or a whole army of robots landed on the shores of Florida, New York, or California, then yes, Congress would need to approve going to war, which is what this article highlights. However, attacking robots will most likely not be the result of an invasion from another country; attacking robots will be the result of criminals and others who have hacked or reprogrammed the robotics.
Cartels, terrorists, and others will pay well to have self-driving cars, humanoid robots, and similar systems re-engineered and re-programmed for their own benefit, turning them into weapons against individuals and the wider population.
The United Nations’ effort to ban killer robots will fail, but there are three important steps the United States can take to help slow the rise of lethal autonomous weapons systems, one of the most prominent voices in the robotics debate said this week.
Pentagon officials insist they don’t want to allow an autonomous weapon to kill people without a human in the loop, but greater levels of autonomy and artificial intelligence are making their way into more and more pieces of military technology, such as recognizing targets, piloting drones, and driving supply trucks. Defense Department leaders advocate for robotic intelligence and autonomy as threat-reducing (and cost-saving) measures key to securing the United States’ technological advantage over adversaries in the coming decades (the so-called ‘third offset’ strategy). Defense Secretary Ash Carter, Deputy Defense Secretary Bob Work and former Defense Secretary Chuck Hagel have talked up the importance of artificial intelligence to the military’s future plans.
“We know we’re going to have to go somewhere with automation,” Air Force Brig. Gen. John Rauch, director of ISR (intelligence, surveillance, and reconnaissance) Capabilities for the Air Force, said at a Tuesday breakfast in Washington sponsored by AFCEA, a technology industry association. Rauch was referring to the rapidly growing demands on human image analysts in the Air Force, especially as additional small drones enter service in the years ahead. “It’s: ‘What level of automation is allowed?’ And then when you start talking about munitions, it becomes a whole nother situation.” The Air Force will be coming out with a flight plan for small unmanned aerial systems, or UASs, in the next four months, Lt. Gen. Robert Otto, deputy chief of staff for Intelligence, Surveillance and Reconnaissance, said at the meeting.