
Battle of the Bots: How AI Is Taking Over the World of Cybersecurity

Google has built machine learning systems that can create their own cryptographic algorithms — the latest success for AI’s use in cybersecurity. But what are the implications of our digital security increasingly being handed over to intelligent machines?

Google Brain, the company’s California-based AI unit, managed the recent feat by pitting neural networks against each other. Two systems, called Bob and Alice, were tasked with keeping their messages secret from a third, called Eve. None were told how to encrypt messages, but Bob and Alice were given a shared security key that Eve didn’t have access to.
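The shared-key setup is the classic starting point for symmetric encryption. A minimal, hand-written illustration of the idea (not the scheme the networks learned) is the XOR one-time pad: with the key, Bob recovers the message exactly; without it, Eve’s view tells her nothing.

```python
# Illustration of the shared-key setup, not Google's learned scheme:
# a one-time pad. Alice and Bob share `key`; Eve sees only `ciphertext`.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the matching key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # the secret Eve lacks

ciphertext = xor_bytes(message, key)    # Alice encrypts
recovered = xor_bytes(ciphertext, key)  # Bob decrypts with the same key
assert recovered == message
# Without `key`, every plaintext of this length is equally consistent
# with `ciphertext`, so the ciphertext alone reveals nothing to Eve.
```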


In the majority of tests, the pair fairly quickly worked out a way to communicate securely without Eve being able to crack the code. Interestingly, the machines used some pretty unusual approaches you wouldn’t normally see in human-generated cryptographic systems, according to TechCrunch.

Building the foundation for AI-enabled computer vision

To better understand how the brain identifies patterns and classifies objects — such as understanding that a green apple is still an apple even though it’s not red — Sandia National Laboratories and the Intelligence Advanced Research Projects Activity are working to build algorithms that can recognize visual subtleties the human brain can divine in an instant.
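As a toy contrast with what the brain does effortlessly, consider the green-apple example in code. This deliberately simple sketch (the feature names and values are invented for illustration, not taken from the Sandia or IARPA work) classifies by shape and collapses color to brightness, so a red apple and a green apple land in the same class:

```python
# Toy sketch of color-invariant recognition; not the MICrONS algorithms.

def grayscale(rgb):
    """Standard luminance weights collapse color to a brightness value."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def classify(shape: str, rgb) -> str:
    # The decision uses shape only, so hue differences cannot change it.
    return "apple" if shape == "round" else "unknown"

red_apple = ("round", (200, 30, 30))
green_apple = ("round", (40, 180, 40))
assert classify(*red_apple) == classify(*green_apple) == "apple"
```

The hard part, and the point of the research program above, is that the brain learns which features to ignore on its own; here that choice was hand-coded.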

They are overseeing a program called Machine Intelligence from Cortical Networks, which seeks to supercharge machine learning by combining neuroscience and data science to reverse-engineer the human brain’s processes. IARPA launched the effort in 2014.

Sandia officials recently announced plans to referee the brain algorithm replication work of three university-led teams. The teams will map the complex wiring of the brain’s visual cortex, which makes sense of input from the eyes, and produce algorithms that will be tested over the next five years.

Stable quantum bits can be made from complex molecules

Quantum computing is about to get more complex. Researchers have evidence that large molecules made of nickel and chromium can store and process information in the same way bytes do for digital computers. The researchers present algorithms proving it’s possible to use supramolecular chemistry to connect “qubits,” the basic units for quantum information processing, in Chem on November 10. This approach would generate several kinds of stable qubits that could be connected together into structures called “two-qubit gates.”
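For readers unfamiliar with the term, a two-qubit gate is a unitary operation on the joint state of two qubits. The standard textbook example, the CNOT gate (shown here as a matrix, not as the nickel–chromium molecular implementation the paper describes), flips a target qubit when a control qubit is 1, and is exactly the kind of building block the researchers want their molecules to realize:

```python
# Textbook two-qubit gate (CNOT) acting on a 4-dimensional state vector.
import numpy as np

# Basis order: |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket_10 = np.array([0, 0, 1, 0], dtype=complex)  # control qubit is 1
flipped = CNOT @ ket_10  # control=1 flips the target: |10> -> |11>

# Combined with a single-qubit Hadamard, CNOT creates entanglement:
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
bell = CNOT @ np.kron(H, np.eye(2)) @ np.array([1, 0, 0, 0], dtype=complex)
# bell is (|00> + |11>)/sqrt(2), the hallmark two-qubit entangled state
```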

“We have shown that the chemistry is achievable for bringing together two-qubit gates,” says senior author Richard Winpenny, Head of the University of Manchester School of Chemistry. “The molecules can be made and the two-qubit gates assembled. The next step is to show that these two-qubit gates work.”

AI takeover: Google’s ‘DeepMind’ platform can learn and think on its own without human input

AI is good for internal back-office work and some limited front-office activities; however, we still need to see more adoption of QC in the network and infrastructure of companies before they expose their services and information to the public net and infrastructure.


Deep learning, as explained by tech journalist Michael Copeland on Blogs.nvidia.com, is the newest and most powerful computational development thus far. It combines all prior research in artificial intelligence (AI) and machine learning. At its most fundamental level, Copeland explains, deep learning uses algorithms to peruse massive amounts of data, and then learn from that data to make decisions or predictions. The Defense Advanced Research Projects Agency (DARPA), as Wired reports, calls this method “probabilistic programming.”
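Stripped to its core, “learning from data to make predictions” fits in a few lines. This toy, hand-rolled gradient-descent fit of a single weight (values chosen for illustration) is the seed of the idea; deep learning stacks millions of such parameters into layered networks:

```python
# Minimal "learning from data": fit one weight w so that w*x matches y.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples where y = 2x

w = 0.0    # the single parameter to learn
lr = 0.05  # learning rate
for _ in range(200):
    for x, y in data:
        error = w * x - y    # prediction error on this example
        w -= lr * error * x  # gradient step on the squared error

print(round(w, 3))  # converges to 2.0, the rule hidden in the data
```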

Mimicking the human brain’s billions of neural connections by creating artificial neural networks was thought to be the path to AI in the early days, but it was too “computationally intensive.” It was the invention of Nvidia’s powerful graphics processing unit (GPU) that allowed Andrew Ng, a scientist at Google, to create algorithms by “building massive artificial neural networks” loosely inspired by connections in the human brain. This was the breakthrough that changed everything. Now, according to Thenextweb.com, Google’s DeepMind platform has been shown to teach itself, without any human input.

In fact, earlier this year an AI named AlphaGo, developed by Google’s DeepMind division, beat Lee Sedol, a world master of the 3,000-year-old Chinese game Go, described as the most complex game known to exist. AlphaGo’s creators and followers now say this deep learning AI proves that machines can learn and may possibly demonstrate intuition. This AI victory has changed our world forever.

Scientists have built a Nightmare Machine to generate the scariest images ever

We’re supposed to be building robots and AI for the good of humankind, but scientists at MIT have pretty much been doing the opposite — they’ve built a new kind of AI with the sole purpose of generating the most frightening images ever.

Just in time for Halloween, the aptly named Nightmare Machine uses an algorithm that ‘learns’ what humans find scary, sinister, or just downright unnerving, and generates images based on what it thinks will freak us out the most.

“There have been a rising number of intellectuals, including Elon Musk and Stephen Hawking, raising alarms about the potential threat of superintelligent AI on humanity,” one of the team, Pinar Yanardag, told Digital Trends.

Nightmare Machine at CSIRO is slowly but surely learning how to terrify humans



The prospect of artificial intelligence is scary enough for some, but Manuel Cebrian Ramos at CSIRO’s Data61 is teaching machines how to terrify humans on purpose.

Dr Cebrian and his colleagues Pinar Yanardag and Iyad Rahwan at the Massachusetts Institute of Technology have developed the Nightmare Machine.

This is an artificial intelligence algorithm that is teaching a new generation of computers not only what terrifies human beings, but also how to create new images to scare us.

Google’s neural networks created their own encryption method

Fortifying cybersecurity is on everyone’s mind after last week’s massive DDoS attack. However, it’s not an easy task, as hackers evolve just as fast as the security meant to stop them. What if your machine could learn how to protect itself from prying eyes? Researchers from Google Brain, Google’s deep learning project, have shown that neural networks can learn to create their own form of encryption.

According to a research paper, Martín Abadi and David Andersen assigned Google’s AI to work out how to use a simple encryption technique. Using machine learning, the networks were able to create their own form of encrypted message, though they didn’t learn specific cryptographic algorithms. Admittedly, the result was pretty basic compared to current human-designed systems, but it’s an interesting step for neural networks.

To find out whether artificial intelligence could learn to encrypt on its own, the Google Brain team built an encryption game with three entities: Alice, Bob, and Eve, each powered by a deep learning neural network. Alice’s task was to send an encrypted message to Bob, Bob’s task was to decode that message, and Eve’s job was to figure out how to eavesdrop on and decode the message Alice sent.
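The paper frames those three tasks as competing training objectives. The sketch below shows the rough shape of the losses (helper names and constants are illustrative, loosely following the paper's setup with messages as vectors of ±1 bits; the real objectives drive gradient updates of three neural networks): Bob and Eve each minimize their own reconstruction error, while Alice and Bob are additionally penalized whenever Eve does better than random guessing.

```python
# Illustrative loss shapes for the Alice/Bob/Eve adversarial game.

def bit_error(a, b):
    """Number of mismatched bits between two +/-1 vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 2.0

def eve_loss(plaintext, eve_guess):
    # Eve simply wants to reconstruct the plaintext.
    return bit_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits):
    # Bob should reconstruct perfectly, while Eve should do no better
    # than chance (n_bits / 2 wrong bits on random +/-1 messages).
    eve_err = bit_error(plaintext, eve_guess)
    eve_penalty = ((n_bits / 2 - eve_err) ** 2) / (n_bits / 2) ** 2
    return bit_error(plaintext, bob_guess) + eve_penalty
```

Training alternates between the two sides, which is why, in most runs, Alice and Bob eventually settle on a code Eve cannot break.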