
Ask an Information Architect, CDO, or Data Architect (enterprise or otherwise) and they will tell you they have always known that information and data are a basic staple, like electricity, and that they are glad folks are finally realizing it. The view we apply to utilities as core to our infrastructure and survival should also apply to information. In fact, in some areas information can be even more important than electricity, when you consider that information can launch missiles, cure diseases, make you poor or wealthy, and take down a government or even a country.


What is information? Is it energy, matter, or something completely different? Although we take this word for granted and without much thought in today’s world of fast Internet and digital media, this was not the case in 1948 when Claude Shannon laid the foundations of information theory. His landmark paper interpreted information in purely mathematical terms, a decision that dematerialized information forevermore. Not surprisingly, there are many nowadays who claim — rather unthinkingly — that human consciousness can be expressed as “pure information”, i.e. as something immaterial graced with digital immortality. And yet there is something fundamentally materialistic about information that we often ignore, although it stares us — literally — in the eye: the hardware that makes information happen.

As users we constantly interact with information via a machine of some kind, such as our laptop, smartphone or wearable. As developers or programmers we code via a computer terminal. As computer or network engineers we often have to wade through the sweltering heat of a server farm, or deal with the material properties of optical fibre or copper in our designs. Hardware and software are the fundamental ingredients of our digital world, both necessary not only in engineering information systems but in interacting with them as well. But this status quo is about to be massively disrupted by Artificial Intelligence.

A decade from now, the postmillennial youngsters of the late 2020s will find it hard to believe that once upon a time the world was full of computers, smartphones and tablets, and that people had to interact with these machines in order to access information or build information systems. For them, information will be more like electricity: it will always be there, always available to power whatever they want to do. This will be possible because artificial intelligence systems will manage information complexity so effectively that the right information can be delivered to the right person at the right time, almost in an instant. So let’s see what that would mean, and how different it would be from what we have today.

Cambridge University spin-out Optalysys has been awarded a $350k grant for a 13-month project from the US Defense Advanced Research Projects Agency (DARPA). The project will see the company advance their research in developing and applying their optical co-processing technology to solving complex mathematical equations. These equations are relevant to large-scale scientific and engineering simulations such as weather prediction and aerodynamics.

The Optalysys technology is extremely energy efficient, using light rather than electricity to perform intensive mathematical calculations. The company aims to provide existing computer systems with massively boosted processing capabilities, eventually reaching exaFLOP rates (a billion billion calculations per second). The technology operates at a fraction of the energy cost of conventional high-performance computers (HPCs) and has the potential to operate orders of magnitude faster.

In April 2015 Optalysys announced that they had successfully built a scalable, lens-less optical processing prototype that can perform mathematical functions. Codenamed Project GALELEO, the device demonstrates that second order derivatives and correlation pattern matching can be performed optically in a scalable design.
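To give a rough sense of what “correlation pattern matching” means computationally, here is a software analogue in Python. This is not Optalysys code, and the array sizes and data are invented for the demo; it simply shows correlation done by multiplying Fourier transforms, the kind of transform that Fourier optics performs with lenses and light.

```python
# Software analogue only (not Optalysys hardware or code): pattern matching by
# cross-correlation computed in the Fourier domain. Sizes and data are invented.
import numpy as np

def correlate_fft(image, template):
    """Cross-correlate a 2D image with a smaller template using FFTs."""
    F_image = np.fft.fft2(image)
    F_template = np.fft.fft2(template, s=image.shape)   # zero-pad template to image size
    corr = np.fft.ifft2(F_image * np.conj(F_template))
    return np.real(corr)

rng = np.random.default_rng(0)
image = rng.random((256, 256))
template = image[100:116, 50:66].copy()                  # plant a known pattern
corr = correlate_fft(image, template)
row, col = np.unravel_index(np.argmax(corr), corr.shape)
print("best match near:", (row, col))                    # expect roughly (100, 50)
```

The correlation peak marks where the template best matches the image; an optical correlator produces the equivalent result in a single pass of light through the system.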

Read more

I read this article and its complaints about the fragile nature of processing and storing information on a quantum computing platform. However, I suggest the writer review the news released two weeks ago about the new quantum data bus highlighted by PC World, Gizmag, and others, which is about to go live in the near future. Another article to consider is today’s Science Daily article on electron spin currents, which highlights how this technique effectively processes information.


Rare-earth materials are prime candidates for storing quantum information, because the undesirable interaction with their environment is extremely weak. However, this lack of interaction also implies a very small response to light, making it hard to read and write data. Leiden physicists have now observed a record-high Purcell effect, which enhances the material’s interaction with light. The work was published on April 25 in Nature Photonics (“Multidimensional Purcell effect in an ytterbium-doped ring resonator”).
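For context, the standard cavity-electrodynamics expression for the Purcell enhancement (a textbook formula, not something taken from the Leiden paper itself) shows why a small, low-loss resonator helps: the factor grows with the cavity quality factor Q and shrinks with the mode volume V,

$$ F_P = \frac{3}{4\pi^{2}}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}, $$

so confining light tightly around a weakly responding rare-earth ion can substantially boost how strongly it couples to that light.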

Ordinary computers perform calculations with bits—ones and zeros. Quantum computers, on the other hand, use qubits. These information units are a superposition of 0 and 1; they represent a zero and a one simultaneously. This enables quantum computers to process information in a totally different way, making them exponentially faster for certain tasks, such as solving particular mathematical problems or breaking encryption.
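As a toy numerical sketch in Python (this only simulates the arithmetic of superposition; nothing quantum is happening), a qubit can be written as a normalized pair of complex amplitudes whose squared magnitudes give the measurement probabilities:

```python
# Toy illustration of a single qubit as a normalized pair of complex amplitudes.
# This only simulates the math of superposition; the names are illustrative.
import numpy as np

zero = np.array([1, 0], dtype=complex)   # |0>
one = np.array([0, 1], dtype=complex)    # |1>

# Equal superposition (|0> + |1>) / sqrt(2): "simultaneously" a zero and a one.
psi = (zero + one) / np.sqrt(2)

probs = np.abs(psi) ** 2                 # Born rule: measurement probabilities
print("P(0), P(1) =", probs)             # -> [0.5, 0.5]

# Measuring collapses the state: sample one outcome according to those probabilities.
outcome = np.random.choice([0, 1], p=probs)
print("measured:", outcome)
```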

Fragile.

The difficult part now is to actually build a quantum computer in real life. Rather than silicon transistors and memories, you need physical components that can process and store quantum information; otherwise the key to the whole idea is lost. The problem is that quantum systems are always coupled to their environment to some degree, which makes them lose their quantum properties and become ‘classical’. Thermal noise, for example, can destroy the whole system. This makes quantum systems extremely fragile and hard to work with.

Read more

New research by UCSF scientists could accelerate – by 10 to 100-fold – the pace of many efforts to profile gene activity, ranging from basic research into how to build new tissues from stem cells to clinical efforts to detect cancer or auto-immune diseases by profiling single cells in a tiny drop of blood.

The study, published online April 27, 2016, in the journal Cell Systems, rigorously demonstrates how to extract high-quality information about the patterns of gene activity in individual cells without using expensive and time-consuming technology. The paper’s senior authors are Hana El-Samad, PhD, an associate professor of biochemistry and biophysics at UCSF, and Matt Thomson, PhD, a faculty fellow in UCSF’s Center for Systems and Synthetic Biology.

“We believe the implications are huge because of the fundamental tradeoff between depth of sequencing and throughput, or cost,” said El-Samad. “For example, suddenly, one can think of profiling a whole tumor at the single cell level.”
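The tradeoff El-Samad mentions is easy to state numerically: at a fixed sequencing budget, the reads available per cell (depth) fall as the number of cells profiled (throughput) grows. The figures below are invented purely to illustrate the arithmetic.

```python
# Invented numbers, for illustration only: depth versus throughput at a fixed read budget.
total_reads = 400_000_000                     # hypothetical sequencing budget (reads)

for n_cells in (1_000, 10_000, 100_000):
    reads_per_cell = total_reads // n_cells   # depth drops as throughput rises
    print(f"{n_cells:>7,} cells -> {reads_per_cell:>9,} reads per cell")
```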

Read more

A team of computer scientists from the University of Southern California (USC) has developed a new method to help curb wildlife poaching. The National Science Foundation (NSF) funded the project, which has created a model for ‘green security games’.

The model applies game theory to safeguard wildlife from poachers. Game theory involves predicting the actions of an adversary using mathematical models and then formulating the best possible countermoves. The model will enable park rangers to patrol parks and wildlife more efficiently.
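The article does not spell out PAWS’s actual algorithm, but the flavor of a security game can be sketched with a toy two-target example in Python: the defender commits to a randomized patrol, the poacher attacks whichever target looks best given that randomization, and the defender tunes the randomization to maximize its own expected payoff. All payoff numbers below are invented.

```python
# Toy Stackelberg-style security game with two targets; all payoff numbers are invented.
# The defender covers target 0 with probability p and target 1 with probability 1 - p.

# (payoff if the target is covered, payoff if it is uncovered)
DEF = {0: (0.0, -5.0), 1: (0.0, -8.0)}   # defender payoffs per attacked target
ATT = {0: (-2.0, 4.0), 1: (-3.0, 6.0)}   # attacker payoffs per attacked target

def expected(table, target, cover):
    cov_pay, unc_pay = table[target]
    return cover[target] * cov_pay + (1 - cover[target]) * unc_pay

best_p, best_val = None, float("-inf")
for i in range(101):                      # coarse grid over the coverage probability
    p = i / 100
    cover = {0: p, 1: 1 - p}
    # Attacker best response: attack the target with the highest expected attacker payoff.
    target = max(ATT, key=lambda t: expected(ATT, t, cover))
    val = expected(DEF, target, cover)
    if val > best_val:
        best_p, best_val = p, val

print(f"patrol target 0 with p={best_p:.2f}, expected defender payoff {best_val:.2f}")
```

The point of the randomization is that a predictable patrol is easy to exploit; the defender does best by mixing coverage so that no single target is an obviously safe bet for the poacher.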

An artificial intelligence (AI) application known as the Protection Assistant for Wildlife Security (PAWS) was developed in 2013 by Fei Fang, a Ph.D. candidate in the computer science department at USC, and Milind Tambe, a professor of computer science and systems engineering at USC. The team has since spent a couple of years testing the effectiveness of the application in Uganda and Malaysia.

Read more

“The station regularly passes out of range of the Tracking and Relay Data Satellites (TDRS) used to send and receive video, voice and telemetry from the station,” a spokesperson for NASA told ValueWalk.

The only problem with this explanation, of course, is that it’s so much more boring…

It is, of course, highly unlikely that this was some alien ship. That said, those tracking and relay stations are fixed and known locations. Also, the range and power of the ISS communication systems are well known, non-classified public domain knowledge. I suck at math, but it should only be a matter of taking the exact time and duration of this outage and comparing it to the tracking and relay station stats.


A horseshoe-shaped apparition has UFO trackers seeing stars.

Lov’n Quantum Espresso


Researchers use specialized software such as Quantum ESPRESSO and a variety of HPC software in conducting quantum materials research. Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves and pseudopotentials. Quantum ESPRESSO is coordinated by the Quantum ESPRESSO Foundation and has a growing worldwide user community in academic and industrial research. Its intensive use of dense mathematical routines makes it an ideal candidate for many-core architectures, such as the Intel Xeon Phi coprocessor.

The Intel Parallel Computing Centers at Cineca and Lawrence Berkeley National Lab (LBNL), along with the National Energy Research Scientific Computing Center (NERSC), are at the forefront in using HPC software and modifying the Quantum ESPRESSO (QE) code to take advantage of the Intel Xeon processors and Intel Xeon Phi coprocessors used in quantum materials research. In addition to Quantum ESPRESSO, the teams use tools such as Intel compilers, libraries, Intel VTune and OpenMP in their work. The goal is to incorporate their changes into the public version of the code, so that scientists can benefit from the improved optimization and parallelization without having to manually modify legacy code.

Figure 2: The electronic density of states calculated by Quantum ESPRESSO (here for a PDI-FCN2 molecule, in a study of its electrical conductivity). This is one of the key properties that permit researchers to understand the electrical properties of the device. Courtesy of 1) A. Calzolari, National Research Council of Italy, Institute for Nanoscience (CNR-NANO); 2) R. Colle, University of Bologna (Italy); 3) C. Cavazzoni, Cineca (Italy); and 4) E. Pascolo, OGS (Italy).
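The density of states shown in Figure 2 is conceptually simple even though the underlying plane-wave calculation is heavy: once a code like Quantum ESPRESSO has produced the electronic eigenvalues, the DOS is just those eigenvalues broadened and summed. The sketch below is a generic post-processing illustration in Python with placeholder eigenvalues, not Quantum ESPRESSO’s own implementation.

```python
# Generic density-of-states post-processing sketch (not Quantum ESPRESSO code).
# The eigenvalues are random placeholders standing in for a real calculation's output.
import numpy as np

def gaussian_dos(eigenvalues, energies, sigma=0.1):
    """Broaden each eigenvalue with a Gaussian of width sigma (eV) and sum them."""
    diff = energies[:, None] - eigenvalues[None, :]
    weights = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return weights.sum(axis=1)

eigenvalues = np.random.normal(loc=0.0, scale=2.0, size=200)   # placeholder spectrum (eV)
energies = np.linspace(-6, 6, 601)                             # energy grid (eV)
dos = gaussian_dos(eigenvalues, energies)
print("DOS peak at E ≈", energies[np.argmax(dos)], "eV")
```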

Twenty-six hundred years ago, a band of Judahite soldiers kept watch on their kingdom’s southern border in the final days before Jerusalem was sacked by Nebuchadnezzar. They left behind numerous inscriptions—and now, a groundbreaking digital analysis has revealed how many writers penned them. The research and the innovative technology behind it stand to teach us about the origins of the Bible itself.

“It’s well understood that the Bible was not composed in real time but was probably written and edited later,” Arie Shaus, a mathematician at Tel Aviv University, told Gizmodo. “The question is, when exactly?”

Shaus is one of several mathematicians and archaeologists trying to broach that question in a radical manner: by using machine learning tools to determine how many people were literate in ancient times. Their first major analysis, which appears today in the Proceedings of the National Academy of Sciences, suggests that the ability to read and write was widespread throughout the Kingdom of Judah, setting the stage for the compilation of Biblical texts.
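The team’s actual pipeline is specialized for ancient Hebrew ostraca, but the underlying idea (represent each inscription’s letter shapes as feature vectors, then ask how many distinct writers best explain them) can be sketched generically. Everything below is an illustrative stand-in rather than the authors’ method: the features are random, and the use of k-means with a silhouette score is simply one common way to estimate a cluster count.

```python
# Illustrative stand-in (not the Tel Aviv team's algorithm): cluster handwriting
# feature vectors and pick the number of clusters ("writers") by silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
# Fake "character shape" features for 60 inscriptions, secretly from 3 writers.
writers = [rng.normal(loc=c, scale=0.5, size=(20, 8)) for c in (0.0, 3.0, 6.0)]
X = np.vstack(writers)

best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"estimated number of writers: {best_k} (silhouette {best_score:.2f})")
```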

Read more

Part 2


In part 1 of the journey, we saw the leading observations that need explanation, explanations we want to pursue through the theory of relativity and quantum mechanics. No technical or expert knowledge of these theories is required yet, only a scratch at their implications. So let us continue.

THE RELATIVITY THEORY

Deducing from the Hubble expansion, the galaxies were close together in the distant past, though certainly not in the form in which the telescopes now see them receding. In fact, if they are receding, it also means the Universe is expanding.

Therefore, when we reverse the receding galaxies into the far distant past, they should end up at a point somewhere, sometime, with the smallest imaginable extension, if that extension is conceivable at all. Scientists call it the singularity, a mathematical deduction from the relativity theory. How did this immeasurable Universe, made of the clusters of galaxies we now see, ever exist in that point called the singularity?
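Running the expansion backwards also gives a rough timescale. As a back-of-the-envelope estimate (assuming a Hubble constant of about 70 km/s per megaparsec and, unrealistically, a constant expansion rate; these numbers are not from the article), the Hubble time 1/H0 comes out near 14 billion years:

```python
# Back-of-the-envelope Hubble time: t = 1 / H0, assuming H0 ≈ 70 km/s per Mpc.
KM_PER_MPC = 3.086e19            # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7       # seconds in one year

H0 = 70 / KM_PER_MPC             # Hubble constant in units of 1/s
hubble_time_years = 1 / H0 / SECONDS_PER_YEAR
print(f"Hubble time ≈ {hubble_time_years / 1e9:.1f} billion years")   # about 14
```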

Read more

As this article highlights, we will soon see a day when all techies need some level of bioscience and/or medical background, especially as we move closer to the Singularity predicted by Ray Kurzweil and others. In the coming decades, tech credentials will no longer rely strictly on math, algorithms, and code; techies will need deeper knowledge of the natural sciences.


If you are majoring in biology right now, I say to you: that was a good call. The mounting evidence suggests that you placed your bet on the right degree. With emergent genetic recombination technologies improving at breakneck speed alongside a much deepened understanding of biological circuitry in simple, “home grown” metabolic systems, this field is shaping up to be a tinkerer’s paradise.

Many compare this stage of synthetic biology to the early days of the microprocessor (the precursor to the personal computer), when Silicon Valley was a place for young entrepreneurs to go if they needed a cheap place to begin their research or tech business. One such tech entrepreneur, Tim O’Reilly, the founder of O’Reilly Media who helped popularize the term “open source”, made this comparison in an interview with Wired magazine. O’Reilly further commented on synthetic biology, saying, “It’s still in the fun stage.”

Read more