
Microsoft is bringing version 2.0 of its Cognitive Toolkit out of beta today, a release aimed squarely at companies that depend on tools to deploy deep learning at scale.

The Cognitive Toolkit, better known as CNTK, is a deep learning framework that helps companies speed up image and speech recognition. With today’s update, CNTK can be used by companies either on-premises or in the cloud with Azure GPUs.

Cognitive Toolkit is being used extensively by a wide variety of Microsoft products, by companies worldwide with a need to deploy deep learning at scale, and by students interested in the very latest algorithms and techniques. The latest version of the toolkit is available on GitHub under an open-source license. Since the beta release in October 2016, more than ten beta versions have shipped, with hundreds of new features, performance improvements, and fixes.
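For a sense of what working with the toolkit looks like, here is a minimal sketch of defining and training a tiny classifier with the CNTK 2.x Python API; the layer sizes and the synthetic data are illustrative assumptions rather than anything from Microsoft’s announcement.

```python
# A minimal sketch of training a small classifier with the CNTK 2.x Python API.
# The layer sizes and the random synthetic data are illustrative assumptions.
import numpy as np
import cntk as C

input_dim, num_classes = 20, 2

# Declare the network inputs and a simple two-layer model.
features = C.input_variable(input_dim)
labels = C.input_variable(num_classes)
model = C.layers.Sequential([
    C.layers.Dense(64, activation=C.relu),
    C.layers.Dense(num_classes)
])(features)

# Loss, metric, and a plain SGD learner.
loss = C.cross_entropy_with_softmax(model, labels)
error = C.classification_error(model, labels)
learner = C.sgd(model.parameters,
                C.learning_rate_schedule(0.1, C.UnitType.minibatch))
trainer = C.Trainer(model, (loss, error), [learner])

# Train on random synthetic data, one minibatch at a time.
for _ in range(100):
    x = np.random.randn(32, input_dim).astype(np.float32)
    y = np.eye(num_classes, dtype=np.float32)[np.random.randint(num_classes, size=32)]
    trainer.train_minibatch({features: x, labels: y})
```

The same script runs unchanged on a CPU or, with a GPU build of the toolkit, on an Azure GPU instance, which is the scenario the update emphasizes.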

Read more

Amazon is using a “simulated dog” to test its delivery drones, according to IBTimes.

The e-commerce giant wants to use drones to deliver parcels to customers in less than 30 minutes, but it clearly has some concerns about how dogs might interfere.

At least one simulated dog is being used to “help Amazon see how UAVs [unmanned aerial vehicles] would respond to a canine trying to protect its territory,” according to IBTimes.

Read more

OpenAI vs. DeepMind in the Atari game River Raid.


AI research has a long history of repurposing old ideas that have gone out of style. Now researchers at Elon Musk’s open source AI project have revisited “neuroevolution,” a field that has been around since the 1980s, and achieved state-of-the-art results.

The group, led by OpenAI’s research director Ilya Sutskever, has been exploring the use of a subset of algorithms from this field, called “evolution strategies,” which are aimed at solving optimization problems.

Despite the name, the approach is only loosely linked to biological evolution, the researchers say in a blog post announcing their results. On an abstract level, it relies on allowing successful individuals to pass on their characteristics to future generations. The researchers have taken these algorithms and reworked them to work better with deep neural networks and run on large-scale distributed computing systems.
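The core update is simple enough to sketch in a few lines of Python. The toy objective and hyperparameters below are illustrative assumptions, but the loop has the same shape as the evolution-strategies update the researchers describe: perturb the parameters with Gaussian noise, score each candidate, and nudge the parameters toward the reward-weighted noise.

```python
# A toy evolution-strategies loop: perturb the parameters with Gaussian noise,
# score each perturbed candidate, and move the parameters toward the
# reward-weighted average of the noise. No gradients of the objective are needed.
import numpy as np

def reward(w):
    # Illustrative objective: get as close as possible to a fixed target vector.
    target = np.array([0.5, 0.1, -0.3])
    return -np.sum((w - target) ** 2)

npop, sigma, alpha = 50, 0.1, 0.01   # population size, noise scale, learning rate
w = np.random.randn(3)               # initial parameter guess

for generation in range(300):
    noise = np.random.randn(npop, 3)             # one noise vector per candidate
    rewards = np.array([reward(w + sigma * n) for n in noise])
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    w = w + alpha / (npop * sigma) * noise.T @ advantages  # ES parameter update

print(w, reward(w))
```

Because each candidate can be scored independently, the loop parallelizes almost trivially, which is what lets the approach run on large distributed systems.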

Read more

OpenAI researchers were surprised to discover that a neural network trained to predict the next character in the text of Amazon reviews taught itself to analyze sentiment. Unsupervised learning of this kind is a long-standing goal of machine learning research.

Much of today’s artificial intelligence (AI) relies on machine learning, in which machines respond or react autonomously after learning from a particular data set. Machine learning algorithms, in a sense, predict outcomes using previously established values. Researchers from OpenAI discovered that a machine learning system they created to predict the next character in the text of Amazon reviews developed into an unsupervised system that could learn representations of sentiment.

“We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment,” OpenAI, a non-profit AI research company whose backers include Elon Musk, Peter Thiel, and Sam Altman, explained on its blog. OpenAI’s neural network trained itself to analyze sentiment by classifying reviews as either positive or negative, and could also generate text with a desired sentiment.
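The pipeline is easy to sketch: encode each review with a character-level model, then fit a plain linear classifier on the resulting hidden state. The tiny untrained LSTM and the handful of made-up reviews below are stand-ins for OpenAI’s large model and dataset; only the shape of the approach is shown, not the result.

```python
# Sketch of the pipeline: encode each review with a character-level recurrent model,
# then fit a simple linear classifier on the final hidden state. The tiny untrained
# LSTM is a stand-in for OpenAI's large model, and the reviews are made up.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

hidden_size = 64
embed = nn.Embedding(256, 16)                  # one embedding per byte value
lstm = nn.LSTM(16, hidden_size, batch_first=True)

def encode(text):
    """Return the final hidden state of the char-level LSTM for one review."""
    ids = torch.tensor([list(text.encode("utf-8"))])
    with torch.no_grad():
        _, (h, _) = lstm(embed(ids))
    return h[-1, 0].numpy()

reviews = ["great product, works perfectly",
           "terrible quality, broke after a day",
           "absolutely love it"]
labels = [1, 0, 1]                             # 1 = positive, 0 = negative

features = [encode(r) for r in reviews]
clf = LogisticRegression().fit(features, labels)
print(clf.predict([encode("would buy again")]))
```

In OpenAI’s result, a single unit of the trained model carried most of the sentiment signal, which is why the finding is described as the model “discovering the concept of sentiment” on its own.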

Read more

Although some thinkers use the term “singularity” to refer to any dramatic paradigm shift in the way we think and perceive our reality, in most conversations The Singularity refers to the point at which AI surpasses human intelligence. What that point looks like, though, is subject to debate, as is the date when it will happen.

In a recent interview with Inverse, Damien Scott, a Stanford University graduate student in business and in energy and earth sciences, provided his definition of the singularity: the moment when humans can no longer predict the motives of AI. Many people envision the singularity as some apocalyptic moment of truth with a clear point of epiphany. Scott doesn’t see it that way.

“We’ll start to see narrow artificial intelligence domains that keep getting better than the best human,” Scott told Inverse. Calculators already outperform us, and there’s evidence that within two to three years, AI will outperform the best radiologists in the world. In other words, the singularity is already happening across each specialty and industry touched by AI — which, soon enough, will be all of them. If you’re of the mind that the singularity means catastrophe for humans, the process resembles the proverbial frog in a pot of water slowly brought to a boil: it kills us so gradually that we don’t notice it has already begun.

Read more

The fast-advancing fields of neuroscience and computer science are on a collision course. David Cox, Assistant Professor of Molecular and Cellular Biology and Computer Science at Harvard, explains how his lab is working with others to reverse engineer how brains learn, starting with rats. By shedding light on what our machine learning algorithms are currently missing, this work promises to improve the capabilities of robots – with implications for jobs, laws and ethics.

http://www.weforum.org/

Read more

© Sören Boyn / CNRS/Thales physics joint research unit.

Artist’s impression of the electronic synapse: the particles represent electrons circulating through the oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses.

Download the press release: PR Synapses

Read more

Arranging employees in an office is like creating a 13-dimensional matrix that triangulates human wants, corporate needs, and the cold hard laws of physics: Joe needs to be near Jane but Jane needs natural light, and Jim is sensitive to smells and can’t be near the kitchen but also needs to work with the product ideation and customer happiness team—oh, and Jane hates fans. Enter Autodesk’s Project Discover. Not only does the software apply the principles of generative design to a workspace, using algorithms to determine all possible paths to your #officegoals, but it was also the architect (so to speak) behind the firm’s newly opened space in Toronto.

That project, overseen by design firm The Living, first surveyed the 300 employees who would be moving in. What departments would you like to sit near? Are you a head-down worker or an interactive one? Project Discover generated 10,000 designs, exploring different combinations of high- and low-traffic areas, communal and private zones, and natural-light levels. Then it matched as many of the 300 workers as possible with their specific preferences, all while taking into account the constraints of the space itself. “Typically this kind of fine-resolution evaluation doesn’t make it into the design of an office space,” says Living founder David Benjamin. OK, humans—you got what you wanted. Now don’t screw it up.
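The generate-and-evaluate idea is easy to caricature in code. The sketch below invents a few employees, desks, and preference scores purely for illustration; it is not Autodesk’s actual method, only the basic loop of proposing many candidate layouts and keeping the one that scores best.

```python
# A toy illustration of the generate-and-evaluate loop behind generative design:
# propose many random seat assignments, score each against stated preferences,
# and keep the best. The employees, desks, preferences, and scoring weights
# below are invented for illustration; this is not Project Discover.
import random

desks = {"A": {"light": True,  "near_kitchen": False},
         "B": {"light": False, "near_kitchen": True},
         "C": {"light": True,  "near_kitchen": False}}

preferences = {
    "Jane": lambda desk: 2 if desks[desk]["light"] else -2,          # wants natural light
    "Jim":  lambda desk: -3 if desks[desk]["near_kitchen"] else 1,   # sensitive to smells
    "Joe":  lambda desk: 0,                                          # no strong preference
}

def score(assignment):
    """Sum individual preference scores, plus a bonus if Joe sits next to Jane."""
    total = sum(preferences[person](desk) for person, desk in assignment.items())
    if abs(ord(assignment["Joe"]) - ord(assignment["Jane"])) == 1:
        total += 2
    return total

best = None
for _ in range(10000):                      # generate many candidate layouts
    layout = dict(zip(preferences, random.sample(list(desks), len(preferences))))
    if best is None or score(layout) > score(best):
        best = layout

print(best, score(best))
```

Project Discover works at a far finer resolution, scoring thousands of full floor plans against survey data and the physical constraints of the space, but the underlying pattern is the same: generate options, evaluate them, keep the winners.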

Read more