
Mention artificial intelligence (AI) or artificial neural networks, and images of computers may come to mind. AI-based pattern recognition has a wide variety of real-world uses, such as medical diagnostics, navigation systems, voice-based authentication, image classification, handwriting recognition, speech recognition, and text-based processing. However, artificial intelligence is not limited to digital technology and is merging with the realm of biology—synthetic biology and genomics, to be more precise. Pioneering researchers led by Dr. Lulu Qian at the California Institute of Technology (Caltech) have created synthetic biochemical circuits that perform information processing at the molecular level: an artificial neural network made of DNA instead of computer hardware and software.

Artificial intelligence is in the early stages of a renaissance period—a rebirth that is largely due to advances in deep learning techniques with artificial neural networks, which have contributed to improvements in pattern recognition. Specifically, the resurgence owes much to a mathematical technique called backpropagation (backward propagation), which calculates the derivatives of a network's error with respect to its weights; it enables artificial neural networks to adjust the weights in their hidden layers of neurons whenever the output misses the target, yielding more precise results.
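As a minimal, illustrative sketch of that idea (not code from any project mentioned here; the network size, learning rate, and toy XOR data are arbitrary choices), the NumPy example below shows the backward pass applying the chain rule to push the output error back into the hidden layer's weights:

```python
# Toy backpropagation demo: a tiny 2-4-1 network learning XOR with NumPy.
# Illustrative only; real systems use frameworks such as PyTorch or TensorFlow.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: compute hidden activations and the prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): derivatives of the squared error
    # flow from the output back through the hidden layer via the chain rule.
    d_out = (out - y) * out * (1 - out)    # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer

    # Gradient-descent updates on both layers' weights and biases.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Without the backward pass, the hidden layer's weights would never receive an error signal; that is precisely the gap backpropagation closes.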

Artificial neural networks (ANNs) are a type of machine learning method with concepts borrowed from neuroscience. The structure and function of the nervous system and brain were the inspiration for artificial neural networks. Instead of biological neurons, ANNs have artificial nodes. Instead of synapses, ANNs have connections that can transmit signals between nodes. Like neurons, the nodes of an ANN can receive and process data, as well as activate other nodes connected to them.
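Concretely, a single artificial node can be sketched in a few lines (the function name and the numbers below are purely illustrative): it weights its incoming signals, sums them, and "fires" through an activation function, loosely echoing how a neuron integrates synaptic input:

```python
import math

def artificial_node(inputs, weights, bias):
    """One artificial 'neuron': weight the incoming signals, sum them,
    and pass the total through a sigmoid activation to decide how strongly
    to signal the nodes downstream."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A node receiving two signals over weighted connections:
print(artificial_node([0.5, 0.9], weights=[1.2, -0.7], bias=0.1))  # ~0.52
```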




As a mathematical concept, the idea of zero is relatively new in human society—and indisputably revolutionary. It's allowed humans to develop algebra, calculus, and Cartesian coordinates; questions about its properties continue to incite mathematical debate today. So it may sound unlikely that bees—complex and community-based insects to be sure, but insects nonetheless—seem to have mastered their own numerical concept of nothingness.

Despite their sesame-seed-sized brains, honey bees have proven themselves the prodigies of the insect world. Research has found that they can count up to about four, distinguish abstract patterns, and communicate locations with other bees. Now, Australian scientists have found what may be their most impressive cognitive ability yet: “zero processing,” or the ability to conceptualize nothingness as a numerical value that can be compared with more tangible quantities like one and two.

While seemingly intuitive, the ability to understand zero is actually quite rare across species—and unheard of in invertebrates. In a press release, the authors of a paper published June 8 in the journal Science called species with this ability an “elite club” that consists of species we generally consider quite intelligent, including primates, dolphins, and parrots. Even humans haven’t always been in that club: the concept of zero first appeared in India around A.D. 458, and didn’t enter the West until around 1200, when the Italian mathematician Fibonacci brought it, along with a host of other Arabic numerals, over with him.

Graeme Ross: “Once again the overriding need to measure the immeasurable raises its ugly head. Statistics are proof of ignorance. Numbers are not knowledge. It has been mooted that we are a mental construct that incorporates multiple persona in our subconscious and semi-conscious mind. Find the theory for yourself. I won't quote what you can find yourselves. If we are a construct, ever-changing, ever-evolving in complexity and moment-to-moment inner focus, and if, as it has been mooted, we have constant and endless conversation with these ever-changing inner mental persona, then it follows that without capturing that process in mid-flight (as it were) we can't deduce the reasoning that results from these conversations. Therefore we are not able to quantify these processes in any way at all. It is ephemeral. Thought takes place in the interval between knowing and asking. Trying to build a machine that will think would take far more resources than mankind will ever possess.”


Abstract: This paper outlines the Independent Core Observer Model (ICOM) Theory of Consciousness, defined as a computational model of consciousness that is objectively measurable: an abstraction produced by a mathematical model in which the subjective experience of the system is subjective only from the point of view of the abstracted logical core, or conscious part, of the system, while being modeled objectively in the core of the system. Given the lack of agreed-upon definitions around consciousness theory, this paper sets precise definitions designed to act as a foundation, or baseline, for additional theoretical and real-world research in ICOM-based AGI (Artificial General Intelligence) systems whose qualia can be measured objectively.

Published via Conference/Review Board: ICIST 2018 – International Conference on Information Science and Technology – China – April 20–22 (IEEE conference) [release pending] and https://www.itm-conferences.org/

Introduction

The Independent Core Observer Model Theory of Consciousness is built partly on the Computational Theory of Mind (Rescorla 2016). One of the core issues with research into artificial general intelligence (AGI) is the absence of objective measurements and data: results are ambiguous given the lack of agreed-upon objective measures of consciousness (Seth 2007). To continue serious work in the field, we need to be able to measure consciousness in a consistent way that does not presuppose different theories of the nature of consciousness (Dienes and Seth 2012) and, further, is not dependent on various ways of measuring biological systems (Dienes and Seth 2010), but is focused on the elements of a conscious mind in the abstract. Research into the human brain does show some underlying evidence for the more nebulous Computational Theory of Mind.

Spin-based quantum computers have the potential to tackle difficult mathematical problems that cannot be solved using ordinary computers, but many problems remain in making these machines scalable. Now, an international group of researchers led by the RIKEN Center for Emergent Matter Science has crafted a new architecture for quantum computing. By constructing a hybrid device made from two different types of qubit (the fundamental computing element of quantum computers), they have created a device that can be quickly initialized and read out, and that simultaneously maintains high control fidelity.


Single-spin qubits in semiconductor quantum dots hold promise for universal quantum computation, with demonstrated single-qubit gate fidelities above 99.9% and two-qubit gates in conjunction with a long coherence time. However, initialization and readout of a qubit are orders of magnitude slower than control, which is detrimental for implementing measurement-based protocols such as error-correcting codes. In contrast, a singlet-triplet qubit, encoded in a two-spin subspace, has the virtue of fast readout with high fidelity. Here, we present a hybrid system that benefits from the different advantages of these two distinct spin-qubit implementations. A quantum interface between the two codes is realized by electrically tunable inter-qubit exchange coupling. We demonstrate a controlled-phase gate that acts within 5.5 ns, much faster than the measured dephasing time of 211 ns. The presented hybrid architecture will be useful for settling key remaining problems in building scalable spin-based quantum computers.
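As a rough back-of-the-envelope reading of the numbers quoted above (a sketch, not a figure from the paper), the ratio of the dephasing time to the gate time indicates how many two-qubit operations fit inside one coherence window:

\[
\frac{T_2^{*}}{t_{\text{gate}}} = \frac{211\ \text{ns}}{5.5\ \text{ns}} \approx 38
\]

In other words, a few dozen controlled-phase gates can in principle be applied before dephasing dominates, which is why the speed of the gate matters for measurement-based protocols such as error correction.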


As a result, it’s nonsensical to ask what happens to space-time beyond the Cauchy horizon because space-time, as it’s regarded within the theory of general relativity, no longer exists. “This gives one a way out of this philosophical conundrum,” said Dafermos.


Mathematicians have disproved the strong cosmic censorship conjecture. Their work answers one of the most important questions in the study of general relativity and changes the way we think about space-time.
