
Machines run through almost every social, political, cultural, and economic issue currently being discussed. Why, you ask? Because we live in a world whose modern economies and demographic trends pivot around machines and factories at every scale.

We have reached the stage in the evolution of our civilization where we cannot fathom a day without machines or automated processes. Machines are used not only in manufacturing and agriculture but also in healthcare, electronics, and other areas of research. And although machines of various types entered the industrial landscape long ago, technologies like nanotechnology, the Internet of Things, and Big Data have altered the scenario in an unprecedented manner.

The fusion of nanotechnology with conventional mechanical concepts gives rise to the idea of ‘molecular machines’. Foreseen as a stepping stone toward a nano-scale industrial revolution, these microscopic machines are molecules designed with movable parts that behave much the way our everyday machines do. A nano-scale motor that spins in a given direction in the presence of directed heat and light is one example of a molecular machine.

Read more

New biomarkers for aging are good news for researchers!


“Given the high volume of data being generated in the life sciences, there is a huge need for tools that make sense of that data. As such, this new method will have widespread applications in unraveling the molecular basis of age-related diseases and in revealing biomarkers that can be used in research and in clinical settings. In addition, tools that help reduce the complexity of biology and identify important players in disease processes are vital not only to better understand the underlying mechanisms of age-related disease but also to facilitate a personalized medicine approach. The future of medicine is in targeting diseases in a more specific and personalized fashion to improve clinical outcomes, and tools like iPANDA are essential for this emerging paradigm,” said João Pedro de Magalhães, PhD, a trustee of the Biogerontology Research Foundation.

The algorithm, iPANDA, applies deep learning to complex gene expression data sets and signaling pathway activation data for analysis and integration. The team’s proof-of-concept article demonstrates that the system can significantly reduce the noise and dimensionality of transcriptomic data sets and can identify patient-specific pathway signatures in breast cancer patients that characterize their response to Taxol-based neoadjuvant therapy.

The system represents a substantially new approach to the analysis of microarray data sets, especially data obtained from multiple sources, and appears to be more scalable and robust than other current approaches to analyzing transcriptomic, metabolomic, and signalomic data from different sources. It also has applications in rapid biomarker development and drug discovery, in discriminating between distinct biological and clinical conditions, in identifying functional pathways relevant to disease diagnosis and treatment, and ultimately in developing personalized treatments for age-related diseases.
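
To give a feel for what “pathway activation” means here, below is a deliberately naive sketch in Python: collapse the expression changes of a pathway’s member genes into one score per pathway. The gene lists, numbers, and simple averaging are illustrative only; the actual iPANDA algorithm is far more sophisticated, weighting genes by importance and topology and grouping co-expressed genes into modules.

```python
import numpy as np

# Toy log2 fold-changes (tumour vs. normal) for a handful of genes.
# All numbers here are made up for illustration.
fold_change = {"EGFR": 1.8, "ERBB2": 2.4, "PIK3CA": 0.9,
               "TP53": -1.5, "BRCA1": -0.7}

# Hypothetical pathway memberships.
pathways = {
    "EGFR signalling": ["EGFR", "ERBB2", "PIK3CA"],
    "DNA repair": ["TP53", "BRCA1"],
}

# Naive activation score: the unweighted mean fold-change of member genes.
for name, genes in pathways.items():
    score = np.mean([fold_change[g] for g in genes])
    print(f"{name}: activation score {score:+.2f}")
```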

Read more

In Brief:

  • Researchers have created a heuristically trained neural network that outperformed conventional machine learning algorithms by 160 percent and its own training by 9 percent.
  • This new teaching method could enable AI to make correct classifications of data that’s previously unknown or unclassified, learning information beyond its data set.

Machine learning technology in neural networks has been pushing artificial intelligence (AI) development to new heights. Most AI systems learn to do things using a set of labelled data provided by their human programmers. Parham Aarabi and Wenzhi Guo, engineers from the University of Toronto in Canada, have taken machine learning to a different level, developing an algorithm that can learn things on its own, going beyond its training.
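
The write-up doesn’t spell out the Toronto team’s heuristic, but the flavour of “learning beyond the data set” can be sketched with a standard self-training loop: a model pseudo-labels unlabelled data with its own confident predictions and then retrains on the enlarged set. The sketch below (assuming scikit-learn, with toy data and an arbitrary 0.95 confidence threshold) shows that generic technique, not the authors’ algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labelled = rng.normal(size=(100, 5))
y_labelled = (X_labelled[:, 0] > 0).astype(int)  # toy labels
X_unlabelled = rng.normal(size=(1000, 5))        # no labels provided

# Round 1: train on the small labelled set only.
model = LogisticRegression().fit(X_labelled, y_labelled)

# Keep only unlabelled points the model is very confident about,
# adopt its predictions as pseudo-labels, and retrain on the union.
confidence = model.predict_proba(X_unlabelled).max(axis=1)
confident = confidence > 0.95
X_aug = np.vstack([X_labelled, X_unlabelled[confident]])
y_aug = np.concatenate([y_labelled, model.predict(X_unlabelled[confident])])
model = LogisticRegression().fit(X_aug, y_aug)
```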

Read more

Google has built machine learning systems that can create their own cryptographic algorithms — the latest success for AI’s use in cybersecurity. But what are the implications of our digital security increasingly being handed over to intelligent machines?

Google Brain, the company’s California-based AI unit, managed the recent feat by pitting neural networks against each other. Two systems, called Bob and Alice, were tasked with keeping their messages secret from a third, called Eve. None were told how to encrypt messages, but Bob and Alice were given a shared security key that Eve didn’t have access to.


In the majority of tests, the pair fairly quickly worked out a way to communicate securely without Eve being able to crack the code. Interestingly, according to TechCrunch, the machines used some pretty unusual approaches that you wouldn’t normally see in human-generated cryptographic systems.
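
For the curious, the adversarial setup can be sketched in a few dozen lines. The code below assumes PyTorch; the network sizes, loss terms, and hyperparameters are illustrative guesses rather than Google Brain’s actual configuration (the original work used convolutional networks). Bob is trained to recover the plaintext using the shared key, while Alice is additionally pushed to keep Eve’s key-less guesses at chance level.

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext/key, encoded as values near -1 or +1

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = mlp(2 * N, N)  # (plaintext, key) -> ciphertext
bob   = mlp(2 * N, N)  # (ciphertext, key) -> plaintext guess
eve   = mlp(N, N)      # ciphertext only   -> plaintext guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e  = torch.optim.Adam(eve.parameters())
l1 = nn.L1Loss()

for step in range(5000):
    p = torch.randint(0, 2, (256, N)).float() * 2 - 1  # random plaintexts
    k = torch.randint(0, 2, (256, N)).float() * 2 - 1  # shared random keys

    # Train Alice and Bob together: Bob should reconstruct p, and Eve's
    # error should sit at chance level (L1 distance of about 1 per bit
    # in this +/-1 encoding), so (1 - eve_err)^2 is 0 exactly at chance.
    c = alice(torch.cat([p, k], dim=1))
    bob_loss = l1(bob(torch.cat([c, k], dim=1)), p)
    eve_err = l1(eve(c), p)
    ab_loss = bob_loss + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

    # Train Eve alone to crack the ciphertext without the key
    # (detach so no gradient reaches Alice; Eve's stale gradients
    # from the step above are cleared by zero_grad here).
    c = alice(torch.cat([p, k], dim=1)).detach()
    eve_loss = l1(eve(c), p)
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()
```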

Read more

To better understand how the brain identifies patterns and classifies objects — such as understanding that a green apple is still an apple even though it’s not red — Sandia National Laboratories and the Intelligence Advanced Research Projects Activity are working to build algorithms that can recognize visual subtleties the human brain can divine in an instant.

They are overseeing a program called Machine Intelligence from Cortical Networks (MICrONS), which seeks to supercharge machine learning by combining neuroscience and data science to reverse-engineer the human brain’s processes. IARPA launched the effort in 2014.

Sandia officials recently announced plans to referee the brain algorithm replication work of three university-led teams. The teams will map the complex wiring of the brain’s visual cortex, which makes sense of input from the eyes, and produce algorithms that will be tested over the next five years.

Read more

Quantum computing is about to get more complex. Researchers have evidence that large molecules made of nickel and chromium can store and process information the way bytes do in digital computers. In a study published November 10 in Chem, the researchers present algorithms showing it’s possible to use supramolecular chemistry to connect “qubits,” the basic units of quantum information processing. This approach would generate several kinds of stable qubits that could be connected together into structures called “two-qubit gates.”

“We have shown that the chemistry is achievable for bringing together two-qubit gates,” says senior author Richard Winpenny, Head of the University of Manchester School of Chemistry. “The molecules can be made and the two-qubit gates assembled. The next step is to show that these two-qubit gates work.”
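
As a reminder of what a two-qubit gate is in the abstract, here is a textbook NumPy example: the CNOT gate, preceded by a single-qubit Hadamard, turns an unentangled input into a Bell state. The matrices below are the standard mathematical definitions and say nothing about the nickel-chromium chemistry itself.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate
I = np.eye(2)

# CNOT: flips the second qubit whenever the first qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0, 0, 0])      # start in |00>
state = CNOT @ np.kron(H, I) @ state  # Hadamard on qubit 1, then CNOT
print(state)  # ~[0.707, 0, 0, 0.707]: the Bell state (|00> + |11>)/sqrt(2)
```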

Read more

AI is good for internal back-office and some limited front-office activities; however, we still need to see more adoption of QC in the networks and infrastructure that companies use to expose their services and information to the public network and infrastructure.


Deep learning, as explained by tech journalist Michael Copeland on Blogs.nvidia.com, is the newest and most powerful computational development thus far, combining all prior research in artificial intelligence (AI) and machine learning. At its most fundamental level, Copeland explains, deep learning uses algorithms to comb through massive amounts of data and then learn from that data to make decisions or predictions. The Defense Advanced Research Projects Agency (DARPA), as Wired reports, calls this method “probabilistic programming.”

Mimicking the human brain’s billions of neural connections by creating artificial neural networks was thought to be the path to AI in the early days, but it was too “computationally intensive.” It was the invention of Nvidia’s powerful graphics processing unit (GPU) that allowed Andrew Ng, a scientist at Google, to create algorithms by “building massive artificial neural networks” loosely inspired by connections in the human brain. This was the breakthrough that changed everything. Now, according to Thenextweb.com, Google’s DeepMind platform has reportedly proven able to teach itself, without any human input.
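
To make the “artificial neural network” idea concrete, here is a bare-bones example in NumPy: a two-layer network that learns the XOR function from four examples via backpropagation. The sizes and learning rate are arbitrary; production deep-learning systems are enormously larger and run on GPUs, but the learn-from-data loop is the same in spirit.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer of 8 "neurons"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # single output neuron
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)    # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)  # forward pass: network prediction
    # Backpropagate the squared error and nudge every weight downhill.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```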

In fact, earlier this year an AI named AlphaGo, developed by Google’s DeepMind division, beat Lee Sedol, a world master of the 3,000-year-old Chinese game Go, which has been described as the most complex game known to exist. AlphaGo’s creators and followers now say this deep-learning AI proves that machines can learn and may even demonstrate intuition. This AI victory has changed our world forever.

Read more