Ghost in the Shell Trailer
Based on the internationally acclaimed sci-fi property, Ghost in the Shell follows Major, a special-ops, one-of-a-kind human-cyborg hybrid who leads the elite task force Section 9. Devoted to stopping the most dangerous criminals and extremists, Section 9 faces an enemy whose singular goal is to wipe out Hanka Robotics' advancements in cyber technology.
Several other actors were considered before the current cast was settled. Margot Robbie was first considered for the lead role of Major (Scarlett Johansson's role). Matthias Schoenaerts was considered for the male lead of Batou, but Pilou Asbæk was cast instead. Sam Riley was in talks for the role of the Laughing Man (Michael Pitt's role).

Coming to Grips with Artificial Intelligence’s Many Manifestations
The categories of AI.
Click here to learn more about author James Kobielus.
Artificial intelligence (AI) is all the rage these days. However, people often overlook the fact that it’s a truly ancient vogue. I can’t think of another current high-tech mania whose hype curve got going during the days when Ike was in the White House, “I Love Lucy” was on the small screen, and programming in assembly language was state of the art.
As AI's adoption grows, we run the risk of belittling the technology's potential if we continue to fixate on the notion that it's "artificial." When you think about it, all technologies are artificial, pretty much by definition. Cars are artificial transportation, houses are artificial shelters, and so on.

CertiKOS: A Step Toward Hacker-Resistant Operating Systems
Researchers from Yale University have unveiled CertiKOS, described as the first formally verified operating system kernel that runs on multi-core processors, designed to shield against cyber-attacks. The scientists believe this could lead to a new generation of reliable and secure systems software.
Led by Zhong Shao, professor of computer science at Yale, the researchers developed an operating system that incorporates formal verification to ensure that a program performs precisely as its designers intended — a safeguard that could prevent the hacking of anything from home appliances and Internet of Things (IoT) devices to self-driving cars and digital currency. Their paper on CertiKOS was presented at the 12th USENIX Symposium on Operating Systems Design and Implementation held Nov. 2–4 in Savannah, Ga.
Computer scientists have long believed that computers’ operating systems should have at their core a small, trustworthy kernel that facilitates communication between the systems’ software and hardware. But operating systems are complicated, and all it takes is a single weak link in the code — one that is virtually impossible to detect via traditional testing — to leave a system vulnerable to hackers.
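The gap between traditional testing and formal verification described above can be sketched with a toy example. The Python below is purely illustrative and is not CertiKOS code: it contrasts spot-checking a few inputs with exhaustively checking a specification over a small finite domain. Real formal verification, as used in CertiKOS, proves such properties symbolically for all inputs via machine-checked proofs rather than by enumeration.

```python
# Hypothetical example: a tiny function with a stated specification.
def clamp(x, lo, hi):
    """Intended spec: return a value in [lo, hi], equal to x when x is in range."""
    return max(lo, min(x, hi))

# Spot testing checks a handful of inputs and can miss edge cases entirely.
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0

# Exhaustive checking over a small, finite domain approximates the kind of
# whole-state-space guarantee a verified kernel provides.
for x in range(-20, 21):
    for lo in range(-5, 6):
        for hi in range(lo, 6):
            r = clamp(x, lo, hi)
            assert lo <= r <= hi          # result always stays in bounds
            if lo <= x <= hi:
                assert r == x             # in-range inputs pass through unchanged

print("specification holds over the entire test domain")
```

The point of the sketch is scale: a kernel's state space is far too large to enumerate this way, which is why projects like CertiKOS rely on mathematical proof instead of testing to rule out the "single weak link" the researchers describe.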


The Future of Extremism: Artificial Intelligence and Synthetic Biology Will Transform Terrorism
Few people had heard of bioterrorism before 9/11. But shortly after the September 11th terrorist attacks, a wave of anthrax mailings turned public attention toward a new weapon in the terrorist arsenal: bioterrorism. A US federal prosecutor found that an army biological researcher was responsible for mailing the anthrax-laced letters, which killed five and sickened 17 people in 2001. The cases generated huge media attention, and fear of a new kind of terrorist warfare took hold.
However, as with every media frenzy, the hype around bioterrorism faded quickly.
But looking toward the future, I believe we are not paying as much attention to it as we should. However frightening it may be, we have to prepare for the worst. It is the only way we can be ready to mitigate the damage of harmful abuses if (and when) they arise.

IBM and NVIDIA Team Up on World’s Fastest Deep Learning Enterprise Solution
SALT LAKE CITY, Nov. 14, 2016 /PRNewswire/ IBM (NYSE: IBM) and NVIDIA (NASDAQ: NVDA) today announced collaboration on a new deep learning tool optimized for the latest IBM and NVIDIA technologies to help train computers to think and learn in more human-like ways at a faster pace.
Deep learning is a fast-growing machine learning method that extracts information by crunching through millions of pieces of data to detect and rank the most important aspects of the data. Publicly supported among leading consumer web and mobile application companies, deep learning is quickly being adopted by more traditional business enterprises.
Deep learning and other artificial intelligence capabilities are being used across a wide range of industry sectors: in banking, to advance fraud detection through facial recognition; in automotive, for self-driving vehicles; and in retail, for fully automated call centers with computers that can better understand speech and answer questions.

Scientists develop world’s first light-seeking synthetic nanorobot
With bots the size of a single blood cell, this could spur a huge leap in the field of non-invasive surgery.
Scientists have developed the world’s first light-seeking synthetic nanorobot which may help surgeons remove tumours and enable more precise engineering of targeted medications.
For decades, science fiction has dreamed of tiny robots that could fundamentally change our daily lives. The famous science fiction movie "Fantastic Voyage" is a good example: a group of scientists pilot their miniaturised nano-submarine inside the human body to repair a damaged brain.

The Future of Deep Learning—Nvidia Unveils Chip With 15 Billion Transistors
The Tesla P100 represents a large departure for Nvidia, a company that has focused almost solely on developing chips for workstations and gaming rigs. With the P100, Nvidia is setting its sights on data centers and deep-learning technology.
The move is a huge risk: creating the Tesla P100 required developing a new architecture, a new interconnect, and a new manufacturing process.
"Our strategy is to accelerate deep learning everywhere," said Nvidia CEO Jen-Hsun Huang.
Why chatbots are the last bridge to true AI
Humans have been storing, retrieving, manipulating, and communicating information since the Sumerians in Mesopotamia developed writing around 3000 BCE. Since then, we have continuously developed more sophisticated means to communicate and distribute information. Whether consciously or not, we always seem to need more data, delivered faster than ever. And with every technological breakthrough that comes along, a set of new concepts arrives to reshape our world.
We can think back, for example, to Gutenberg's printing press. Invented around 1440, it pushed printing costs down and gave birth to revolutionary concepts like catalogs (the first was published in 1495 in Venice by the publisher Aldus Manutius and listed all the books he was printing), mass media (which enabled revolutionary ideas to transcend borders), magazines, newspapers, and so on. All these concepts emerged from a single "master" technology breakthrough and have profoundly shaped both individual lives and the global picture.
Centuries later, the core idea of data distribution has not changed much. We still browse catalogs to buy our next pair of shoes, we create catalogs to sell our products and services, and we still browse publications looking for information.