
I like this article; why? Because if I plan to invest in a robot that will be my personal assistant, housekeeper, or caregiver, I want to ensure it fits my own needs as a person. Many of us have taken some sort of personality profile for work, interviewed for jobs where we were assessed as a cultural “fit,” or met people in person before hiring them. So why should it be any different with so-called humanoid robots? And this should be especially intriguing for those of us where only 6% of our gender thinks and processes information the way we do.


Emotional behaviors can make your drone seem adventurous, anti-social, or maybe just exhausted.

Read more

Interesting question to ask.


The battle between the FBI and Apple over the unlocking of a terrorist’s iPhone will likely require Congress to create new legislation. That’s because there really aren’t any existing laws which encompass technologies such as these. The battle is between security and privacy, with Silicon Valley fighting for privacy. The debates in Congress will be ugly, uninformed, and emotional. Lawmakers won’t know which side to pick and will flip flop between what lobbyists ask and the public’s fear du jour. And because there is no consensus on what is right or wrong, any decision they make today will likely be changed tomorrow.

This is a prelude of things to come, not only with encryption technologies, but everything from artificial intelligence to drones, robotics, and synthetic biology. Technology is moving faster than our ability to understand it, and there is no consensus on what is ethical. It isn’t just the lawmakers who are not well-informed; the originators of the technologies themselves don’t understand the full ramifications of what they are creating. They may take strong positions today based on their emotions and financial interests, but as they learn more, they too will change their views.

Imagine if there was a terror attack in Silicon Valley — at the headquarters of Facebook or Apple. Do you think that Tim Cook or Mark Zuckerberg would continue to put privacy ahead of national security?

A new drone-stopping gun enables the public to target and gently bring down drones without damaging them. I see this offering real benefit to victims of stalkers and robberies. However, criminals could also use it to steal packages from drones making deliveries for companies like Amazon, Walmart, and eBay. Today, most criminals bring drones down with rifles and shotguns, which usually damages the goods the drones are carrying as well. This will change that for them.


Capturing drones just got a little intense.

Read more

When I see articles and reports like the following about the military actually considering fully autonomous missiles, drones armed with missiles, and the like, I have to ask myself what happened to logical thinking.


A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets on their own, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is capable of intervening is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
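To make the distinction concrete, here is a minimal sketch of the human-in-the-loop pattern Scharre describes. Every name and threshold here is a hypothetical illustration, not anything from the report: the system may propose an engagement, but only an explicit human decision releases it, and a kill switch halts everything.

```python
from dataclasses import dataclass


@dataclass
class Target:
    track_id: str
    confidence: float  # sensor/classifier confidence that this is a valid target


class SemiAutonomousController:
    """Toy sketch of a human-in-the-loop control pattern (hypothetical)."""

    def __init__(self):
        self.kill_switch = False  # the human controller can flip this at any time

    def propose(self, target: Target) -> bool:
        # Autonomy ends here: the machine ranks and proposes, nothing more.
        return target.confidence > 0.95

    def engage(self, target: Target, human_authorized: bool) -> str:
        if self.kill_switch:
            return "HALTED: kill switch engaged"
        if not human_authorized:
            return "WAITING: human authorization required"
        return f"ENGAGING {target.track_id}"


ctrl = SemiAutonomousController()
t = Target("track-42", confidence=0.97)
if ctrl.propose(t):
    print(ctrl.engage(t, human_authorized=False))  # -> WAITING: human authorization required
```

A fully autonomous weapon is the same loop with human_authorized effectively hard-wired to True, which is exactly why Scharre argues that a design failure, hack, or spoofed sensor input could do so much damage before anyone reaches the switch.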

When The Verge began covering “drones” three years ago, we got a lot of grief about using that word: drone. These were just remote control toys, they couldn’t fly themselves! When drones got smart enough to navigate using GPS, and to follow people around, the naysayers pointed out they still couldn’t see anything. It could follow you, sure, but not while avoiding trees. At CES the last two years we finally saw drones that could sense and avoid real-world obstacles. But those were just tech demos, R&D projects which so far haven’t been made commercially available.

That all changes today with the introduction of DJI’s new drone, the Phantom 4. It’s the first consumer unit that can see the world around it and adjust accordingly, the next big step towards a truly autonomous aircraft. Try to drive it into a wall and the Phantom 4 will put on the brakes. If you ask it to fly from your position to a spot across a river, and there is a bridge in between, it will make a judgment call: increase speed to clear the obstacle or, if that isn’t possible, stop and hover in place, awaiting your next command.

The Phantom 4 accomplishes this feat with the help of five cameras: two on the front and two on the bottom, plus the main 4K camera that has always been onboard to capture video. The images captured by these cameras are run through computer vision software that constructs a 3D model of the surrounding world, which the drone can then intelligently navigate.
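DJI hasn’t published the Phantom 4’s control logic, so the following is only a toy sketch of the behavior described above; the function name, thresholds, and inputs are all assumptions for illustration. It captures the judgment call from the bridge example: keep going when the path is clear, climb when there’s room, and otherwise brake and hover.

```python
def avoidance_action(obstacle_dist_m: float,
                     clearance_above_m: float,
                     brake_dist_m: float = 15.0,
                     min_clearance_m: float = 3.0) -> str:
    """Toy decision rule for the behavior described above (not DJI's code).

    obstacle_dist_m:    distance to the nearest obstacle along the flight path,
                        e.g. estimated from a stereo-vision depth map
    clearance_above_m:  estimated free space above the obstacle
    """
    if obstacle_dist_m > brake_dist_m:
        return "continue"             # path is clear, keep flying
    if clearance_above_m >= min_clearance_m:
        return "climb"                # enough room: go over the obstacle
    return "brake_and_hover"          # no safe route: stop and await a command


print(avoidance_action(obstacle_dist_m=40.0, clearance_above_m=0.0))  # continue
print(avoidance_action(obstacle_dist_m=10.0, clearance_above_m=5.0))  # climb
print(avoidance_action(obstacle_dist_m=10.0, clearance_above_m=1.0))  # brake_and_hover
```

The real system runs a decision like this continuously against the 3D model rebuilt from the camera feeds, which is what separates genuine sense-and-avoid from the GPS-follow modes of earlier drones.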

Read more

Brain-machine interfaces (BMIs) could be tested on humans as early as 2017, according to DARPA and David Axe. The plan is to use stentrodes; testing has already shown success in sheep. I personally have concerns both about health (as the article highlights, the devices are prone to causing blood clots) and about security: anything connecting via Wi-Fi or the net invites hackers trying to challenge themselves to prove that anything is hackable. Before this goes live on a person, we should make sure we have a more secure, hack-resistant net, so that no one is injured or ends up injuring someone else.


Soldiers could control drones with a thought.

Read more

On Monday at the Mobile World Congress in Barcelona, Mark Zuckerberg took part in what he thought would be a “fireside chat” with Wired’s Jessi Hempel but which was verifiably not fireside and was, in fact, a keynote.

Inverse picked out the nine best moments of the interview.

1.) Zuck doesn’t know whether Aquila will meet regulations but is just confident that it’ll work out

Zuck reported that Aquila, Facebook’s casual wifi-beaming, solar-powered drone project, is coming along well. A team is currently constructing the second full-scale drone (wingspan of a 747, only as heavy as a car, able to stay aloft for as long as six months), and another team is testing large-but-not-full-scale models every week. These drones will transmit high-bandwidth signals via a laser communications system, which, he says, requires a degree of accuracy on par with hitting a quarter on top of the Statue of Liberty with a laser pointer fired from California. The goal, he added, is to get these drones beaming wifi that’s 10 to 100 times faster than current systems. Facebook will roll out its first full-scale trials later this year, and Zuck expects that within 18 months, Aquila will be airborne.
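That accuracy claim is easy to sanity-check with back-of-envelope arithmetic; the roughly 4,000 km California-to-New York distance below is my own rough assumption, not a figure from the talk.

```python
import math

quarter_diameter_m = 0.02426   # a US quarter is about 24.26 mm across
distance_m = 4.0e6             # ~4,000 km, California to New York (rough assumption)

# Small-angle approximation: the angle subtended by the quarter at that distance
angle_rad = quarter_diameter_m / distance_m
angle_deg = math.degrees(angle_rad)

print(f"{angle_rad:.2e} rad  ({angle_deg:.2e} degrees)")
# -> 6.07e-09 rad (3.47e-07 degrees)
```

Roughly six nanoradians of pointing accuracy: holding that budget from a flexing, wind-buffeted airframe is the engineering problem hiding behind the quip.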

Read more