
The Universal Mind Revealed as a Multi-Layered Quantum Neural Network

In the middle of the previous century, the science of cybernetics emerged, which its founder Norbert Wiener defined as "the scientific study of control and communication in the animal and the machine." Whereas the cyberneticists perhaps saw everything in the organic world too much as a machine-like regulatory network, the paradigm has since flipped to its mirror image, in which everything in the natural world is seen as an organic neural network. Indeed, self-regulating networks appear to be ubiquitous: from the subatomic organization of atoms to the atomic organization of molecules, macromolecules, cells and organisms, the equivalent of neural networks appears to be present everywhere.

#EvolutionaryCybernetics #CyberneticTheoryofMind #PhilosophyofMind #QuantumTheory #cybernetics #evolution #consciousness


“At a deep level all things in our Universe are ineffably interdependent and interconnected, as we are part of the Matryoshka-like mathematical object of emergent levels of complexity where consciousness pervades all levels.” –Alex M. Vikoulov, The Syntellect Hypothesis.

Artificial Intelligence Has Become A Tool For Classifying And Ranking People

Recommending content, powering chatbots, trading stocks, detecting medical conditions, and driving cars. These are only a small handful of the most well-known uses of artificial intelligence, yet there is one that, despite being on the margins for much of AI’s recent history, is now threatening to grow significantly in prominence. This is AI’s ability to classify and rank people, to separate them according to whether they’re “good” or “bad” in relation to certain purposes.

At the moment, Western civilization hasn't reached the point where AI-based systems are used en masse to categorize us according to whether we're likely to be "good" employees, "good" customers, "good" dates and "good" citizens. Nonetheless, all available indicators suggest that we're moving in this direction, and that this is regardless of whether Western nations consciously decide to construct the kind of social credit system currently being developed by China.

This risk was highlighted at the end of September, when it emerged that an AI-powered system was being used to screen job candidates in the U.K. for the first time. Developed by the U.S.-based HireVue, it harnesses machine learning to evaluate the facial expressions, language and tone of voice of job applicants, who are filmed via smartphone or laptop and quizzed with an identical set of interview questions. HireVue’s platform then filters out the “best” applicants by comparing the 25,000 pieces of data taken from each applicant’s video against those collected from the interviews of existing “model” employees.

Autonomous Industrial Drones Now Fly Anywhere

There are four ways drones typically navigate: they use GPS or other beacons, they accept guidance instructions from a computer, they navigate off a stored map, or they are flown by an expert operator in control.

What do you do when absolutely none of the four is possible?

You put AI on the drone and it flies itself with no outside source of data, no built-in mapping, and no operator in control.

The brain’s memory abilities inspire AI experts in making neural networks less ‘forgetful’

Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a “major, long-standing obstacle to increasing AI capabilities” by drawing inspiration from a human brain memory mechanism known as “replay.”

First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect, "surprisingly efficiently," against "catastrophic forgetting": upon learning new lessons, the networks forget what they had learned before.

Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting.
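The core idea behind "replay" in continual learning is to interleave stored (or generated) examples from earlier tasks into the training batches for a new task, so gradient updates keep rehearsing old knowledge. As a rough, hedged sketch of that mechanism (not the authors' actual method, whose details are in the Nature Communications paper), a minimal replay buffer might look like this; `ReplayBuffer` and `make_batch` are illustrative names invented here:

```python
import random

class ReplayBuffer:
    """Fixed-size store of past (input, label) pairs, sampled
    uniformly so old examples are rehearsed alongside new data."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []

    def add(self, example):
        if len(self.data) >= self.capacity:
            # Buffer full: overwrite a random slot (simple eviction).
            self.data[random.randrange(self.capacity)] = example
        else:
            self.data.append(example)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def make_batch(new_examples, buffer, replay_ratio=0.5):
    """Mix fresh task-B examples with replayed task-A examples,
    so each gradient step also rehearses the old task."""
    k = int(len(new_examples) * replay_ratio)
    return new_examples + buffer.sample(k)

# Store task-A data, then build mixed batches while learning task B.
buffer = ReplayBuffer(capacity=100)
for i in range(50):
    buffer.add((f"taskA_x{i}", "A"))

task_b_batch = [(f"taskB_x{i}", "B") for i in range(8)]
mixed = make_batch(task_b_batch, buffer)
print(len(mixed))  # 12: 8 new examples plus 4 replayed ones
```

In practice the replayed items would be fed through the same loss as the new data; the paper's contribution concerns doing this efficiently, e.g. without storing raw inputs, rather than this naive buffering.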

New data processing module makes deep neural networks smarter

Artificial intelligence researchers at North Carolina State University have improved the performance of deep neural networks by combining feature normalization and feature attention modules into a single module that they call attentive normalization (AN). The hybrid module improves the accuracy of the system significantly, while using negligible extra computational power.

"Feature normalization is a crucial element of training deep neural networks, and feature attention is equally important for helping networks highlight which features learned from raw data are most important for accomplishing a given task," says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at NC State. "But they have mostly been treated separately. We found that combining them made them more efficient and effective."

To test their AN module, the researchers plugged it into four of the most widely used neural architectures: ResNets, DenseNets, MobileNetV2 and AOGNets. They then tested the networks against two industry-standard benchmarks: ImageNet-1000 classification, and MS-COCO 2017 object detection and instance segmentation.
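To make the combination concrete: feature normalization standardizes activations per channel, while feature attention learns per-channel importance weights. A loose NumPy sketch of fusing the two, assuming a squeeze-and-excitation-style channel gate modulating the normalization's affine scale (this is an illustration of the general idea, not NC State's actual AN formulation), could look like:

```python
import numpy as np

def attentive_normalization(x, gamma, beta, eps=1e-5):
    """Hypothetical sketch for a batch of feature vectors x of
    shape (N, C): normalize per channel, then re-weight the
    learned scale gamma with an instance-specific attention gate."""
    # Feature normalization: zero mean, unit variance per channel.
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)

    # Feature attention: a sigmoid gate in (0, 1) computed from the
    # normalized features themselves (assumed form, for illustration).
    attn = 1.0 / (1.0 + np.exp(-x_hat))

    # Fusing the modules: attention modulates the affine scale,
    # so normalization and attention act as one operation.
    return attn * gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
out = attentive_normalization(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.shape)  # (4, 3)
```

The appeal of such a fused design, as the quote above suggests, is that the attention weights reuse statistics the normalization step already computes, which is why the extra computational cost can stay negligible.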