
Even if we don’t create a true AI for a thousand years, these algorithms, paired with our exponentially increasing computing power, could have much the same effect on our civilization as the more traditional, AI-centric type of Singularity. Very, very soon.


A schematic diagram of machine learning for materials discovery (credit: Chiho Kim, Ramprasad Lab, UConn)

Replacing inefficient experimentation, UConn researchers have used machine learning to systematically scan millions of theoretical compounds for qualities that would make better materials for solar cells, fibers, and computer chips.

Led by UConn materials scientist Ramamurthy ‘Rampi’ Ramprasad, the researchers set out to determine which atomic configurations make a given polymer, for example, a good electrical conductor or insulator.
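To make the idea concrete, here is a minimal, hypothetical sketch of this kind of machine-learning screening: a model is trained on known polymers (each represented as a fixed-length numeric “fingerprint”) and then used to rank a large pool of candidates by a predicted property. The fingerprint length, the property, and all data below are placeholders, not the Ramprasad group’s actual pipeline.

```python
# Minimal sketch of ML-based materials screening (illustrative only).
# Assumes each candidate polymer is already encoded as a fixed-length
# numeric fingerprint; the training data here is random placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder training set: fingerprints of known polymers and their band gaps (eV).
known_fingerprints = rng.random((200, 16))
known_band_gaps = rng.uniform(0.5, 8.0, size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(known_fingerprints, known_band_gaps)

# Screen a large pool of hypothetical polymers and keep likely insulators (wide band gap).
candidate_fingerprints = rng.random((50_000, 16))
predicted_gaps = model.predict(candidate_fingerprints)
top_candidates = np.argsort(predicted_gaps)[-10:]   # ten widest predicted gaps
print(top_candidates, predicted_gaps[top_candidates])
```

The point of the design is the swap in cost: instead of synthesizing and testing each compound, the expensive step happens once (building the training set), and every additional candidate costs only a prediction.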

Let’s step back and consider the broader digital technology landscape for a moment. We have built our past, current, and new technology on a digital foundation: machine language running fairly simple algorithms that process 0s and 1s, an approach that has been around since the ’50s. So I’m not too shocked by this article; in fact, we may not see a major leap in humanoid robots until quantum computing hits the mainstream. Quantum holds a lot of promise; however, it’s still too early to know for sure.


Artificial intelligence may be coming to your IT department sooner than you think, but not the way you might imagine.

Read more

ARPA-E creating sustainable energy crops for the production of renewable transportation fuels from biomass.


In Washington, the DOE’s ARPA-E TERRA projects seek to accelerate the development of sustainable energy crops for the production of renewable transportation fuels from biomass. To accomplish this, the projects uniquely integrate agriculture, information technology, and engineering communities to design and apply new tools for the development of improved varieties of energy sorghum. The TERRA project teams will create novel platforms to enhance methods for crop phenotyping (identifying and measuring the physical characteristics of plants) which are currently time-intensive and imprecise.

The new approaches will include automated methods for observing and recording characteristics of plants and advanced algorithms for analyzing data and predicting plant growth potential. The projects will also produce a large public database of sorghum genotypes, enabling the greater community of plant physiologists, bioinformaticians, and geneticists to generate breakthroughs beyond TERRA. These innovations will accelerate the annual yield gains of traditional plant breeding and support the discovery of new crop traits that improve the water productivity and nutrient use efficiency needed to improve the sustainability of bioenergy crops.
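As a rough illustration of the “predict plant growth potential” step, the sketch below fits a regression model mapping automated phenotype measurements to a yield estimate. The feature names, units, and data are invented for illustration and are not drawn from the TERRA database.

```python
# Hypothetical sketch: regress automated phenotype measurements onto biomass yield.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Columns: plant height (cm), leaf area index, stem diameter (mm), canopy temperature (C)
phenotypes = rng.normal(loc=[250, 4.0, 20, 30], scale=[40, 0.8, 3, 2], size=(500, 4))

# Placeholder target: end-of-season biomass yield (t/ha), a noisy function of the features
biomass = (
    0.05 * phenotypes[:, 0] + 2.0 * phenotypes[:, 1] + 0.3 * phenotypes[:, 2]
    - 0.2 * phenotypes[:, 3] + rng.normal(0, 1.5, 500)
)

model = Ridge(alpha=1.0)
scores = cross_val_score(model, phenotypes, biomass, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

In practice the value of such a model comes from the breadth of the phenotyping data feeding it, which is exactly what the automated observation platforms are meant to supply.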

Read more

Automated online advice platforms, the so-called robo advisors, have long implied that the use of algorithms eliminates conflicts of interest. It’s a premise that has gained traction with both consumers and regulators. But a new report by the Financial Industry Regulatory Authority casts doubt on that claim.

With robo advisors like Schwab Intelligent Portfolios, Betterment and Wealthfront now managing billions of dollars worth of client assets, FINRA investigated these online advice providers. The regulator released a report Tuesday that evaluated several key service areas including governance and supervision, the suitability of recommendations, conflicts of interest, customer risk profiles and portfolio rebalancing.

FINRA found that while digital advice will likely play an increasingly important role in wealth management, investors should be aware that conflicts of interest can exist even in providers powered by algorithms. Specifically, the advice consumers receive depends largely on the digital advice provider’s investment approach and the underlying assumptions used.
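To see how the advice can hinge on the provider’s assumptions, here is a toy rebalancing routine (not any provider’s actual algorithm): the orders it generates follow directly from the target weights and drift threshold the provider chooses for a given risk profile, so changing those assumptions changes the “objective” advice.

```python
# Illustrative sketch of threshold-based portfolio rebalancing.
# The target weights below stand in for a provider's assumed "moderate" risk profile.
TARGET_WEIGHTS = {
    "US_equity": 0.40,
    "intl_equity": 0.20,
    "bonds": 0.35,
    "cash": 0.05,
}
DRIFT_THRESHOLD = 0.05           # rebalance if an asset drifts more than 5 points


def rebalance_orders(holdings: dict[str, float]) -> dict[str, float]:
    """Return dollar buy (+) / sell (-) amounts that restore the target weights."""
    total = sum(holdings.values())
    orders = {}
    for asset, target in TARGET_WEIGHTS.items():
        current = holdings.get(asset, 0.0) / total
        if abs(current - target) > DRIFT_THRESHOLD:
            orders[asset] = round(target * total - holdings.get(asset, 0.0), 2)
    return orders


print(rebalance_orders({"US_equity": 55_000, "intl_equity": 15_000, "bonds": 25_000, "cash": 5_000}))
```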

Read more

Very concerning: 72% of all Indian companies were hacked in 2015. How many were hosting consumer and business data for non-Indian companies, say US or European ones?


According to KPMG’s Cyber Crime Survey Report 2015, around 72 per cent of companies in India faced cyber-attacks in 2015. India has witnessed a spate of cyber security incidents, such as the hacking of the Gaana.com and Ola Cabs apps. Such issues have raised the alarm for the whole enterprise community. And it doesn’t seem to be stopping there. According to a report from McAfee Labs, the number of cyber attacks in which malware holds user data hostage is expected to grow in 2016 as hackers target more companies and advanced software is able to compromise more types of data. In many cases the objective is financial gain or corporate espionage; either way, the result is heavy losses for the enterprise.

Today, no new-age enterprise is immune to cyber threats. The humongous amount of information pouring out of social and mobile platforms continues to add to organizations’ vulnerabilities, making them attractive targets for complex cyber crimes.

For today’s digital businesses, a lot of value is tied to data, and any loss of it can put their whole reputation at stake. Hence, more and more companies are finding themselves terrorized by cyber threat agents who are looking for new, sophisticated routes to gain access to confidential business data. Burgess Cooper, Partner, Information & Cyber Security Advisory Services, EY, points out, “Technology is increasing a company’s vulnerability to be attacked through increased online presence, broader use of social media, mass adoption of mobile devices, increased usage of cloud services and the collection/analysis of big data.”

Making the most of the low light in the muddy rivers where it swims, the elephant nose fish survives by being able to spot predators amongst the muck with a uniquely shaped retina, the part of the eye that captures light. In a new study, researchers looked to the fish’s retinal structure to inform the design of a contact lens that can adjust its focus.

Imagine a contact lens that autofocuses within milliseconds. That could be life-changing for people with presbyopia, a stiffening of the eye’s lens that makes it difficult to focus on close objects. Presbyopia affects more than 1 billion people worldwide, half of whom do not have adequate correction, said the project’s leader, Hongrui Jiang, Ph.D., of the University of Wisconsin, Madison. And while glasses, conventional contact lenses and surgery provide some improvement, these options all involve the loss of contrast and sensitivity, as well as difficulty with night vision. Jiang’s idea is to design contacts that continuously adjust in concert with one’s own cornea and lens to recapture a person’s youthful vision.

The project, for which Jiang received a 2011 NIH Director’s New Innovator Award (an initiative of the NIH Common Fund) funded by the National Eye Institute, requires overcoming several engineering challenges. They include designing the lens, algorithm-driven sensors, and miniature electronic circuits that adjust the shape of the lens, plus creating a power source — all embedded within a soft, flexible material that fits over the eye.
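As a very rough illustration of the control problem, the toy sketch below assumes the embedded sensors yield an estimate of the distance to the object in view and nudges the lens’s optical power toward the value needed to focus at that distance. The numbers and the simple proportional update are placeholders, not the Wisconsin team’s design.

```python
# Toy control-loop sketch of an autofocusing lens: estimate the viewing distance,
# then step the adjustable lens toward the optical power needed to focus there.
def required_power_diopters(object_distance_m: float) -> float:
    """Extra optical power (diopters) needed to focus on an object at this distance."""
    return 1.0 / object_distance_m


def autofocus_step(current_power: float, sensed_distance_m: float, gain: float = 0.5) -> float:
    """Move the adjustable lens part of the way toward the required power."""
    error = required_power_diopters(sensed_distance_m) - current_power
    return current_power + gain * error


power = 0.0
for _ in range(10):                                       # a few fast control iterations
    power = autofocus_step(power, sensed_distance_m=0.4)  # e.g. reading a book at 40 cm
print(f"converged lens power: {power:.2f} D (target {1 / 0.4:.2f} D)")
```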

Read more

Google has smartened up several of its products with a type of artificial intelligence called deep learning, which involves training neural networks on lots of data and then having them make predictions about new data. Google Maps, Google Photos, and Gmail, for example, have been enhanced with this type of technology. The next service that could see gains is Google Translate.

Well, let me back up. Part of Google Translate actually already uses deep learning. That would be the instant visual translations you can get on a mobile device when you hold up your smartphone camera to the words you want to translate. But if you use Google Translate to just translate text, you know that the service isn’t always 100 percent accurate.

In an interview at the Structure Data conference in San Francisco today, Jeff Dean, a Google senior fellow who worked on some of Google’s core search and advertising technology and is now the head of the Google Brain team that works on deep learning, said that his team has been working with Google’s translation team to scale out experiments with translation based on deep learning. Specifically, the work is based on the technology described in a 2014 paper entitled “Sequence to Sequence Learning with Neural Networks.”
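For readers curious what that architecture looks like, below is a minimal encoder-decoder sketch in PyTorch in the spirit of that 2014 paper: one recurrent network compresses the source sentence into a fixed-size state, and a second network generates the target sentence conditioned on it. Vocabulary sizes, dimensions, and the toy batch are placeholders; Google’s production system is not public.

```python
# Minimal sequence-to-sequence (encoder-decoder) sketch with LSTMs.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 64, 128


class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence into a fixed-size recurrent state...
        _, state = self.encoder(self.src_emb(src_ids))
        # ...then decode the target sentence conditioned on that state.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)          # per-position scores over the target vocabulary


model = Seq2Seq()
src = torch.randint(0, SRC_VOCAB, (8, 12))   # toy batch: 8 source sentences, 12 tokens each
tgt = torch.randint(0, TGT_VOCAB, (8, 10))   # toy batch: 8 target sentences, 10 tokens each
logits = model(src, tgt)
print(logits.shape)                          # torch.Size([8, 10, 1000])
```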

Read more

Google, AI, and Quantum: Google believes deep learning is not suitable for quantum computing. I’m not so sure I agree with this position, because deep learning is, in principle, “a series of complex algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures,” and the beauty of quantum is its performance in processing vast sets of information and complex algorithms. Maybe they meant to say that, at this point, they have not resolved that piece for AI.


Artificial intelligence is one of the hottest subjects these days, and recent advances in technology bring AI even closer to reality than most of us can imagine.

The subject really gained traction last year, when Stephen Hawking, Elon Musk and more than 1,000 AI and robotics researchers signed an open letter warning against the use of AI in weapons development. The following month, BAE Systems unveiled Taranis, the most advanced autonomous UAV ever created; there are currently 40 countries working on the deployment of AI in weapons development.

Those in the defense industry are not the only ones engaging in an arms race to create advanced AI. Tech giants Facebook, Google, Microsoft and IBM are all pursuing various AI initiatives, as well as competing to develop digital personal assistants such as Facebook’s M, Microsoft’s Cortana and Apple’s Siri.

Allen Institute working with Baylor on reconstructing neuronal connections.


The Intelligence Advanced Research Projects Activity (IARPA) has awarded an $18.7 million contract to the Allen Institute for Brain Science, as part of a larger project with Baylor College of Medicine and Princeton University, to create the largest ever roadmap to understand how the function of networks in the brain’s cortex relates to the underlying connections of its individual neurons.

The project is part of the Machine Intelligence from Cortical Networks (MICrONS) program, which seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain.

“This effort will be the first time that we can physically look at more than a thousand connections between neurons in a single cortical network and understand how those connections might allow the network to perform functions, like process visual information or store memories,” says R. Clay Reid, Ph.D., Senior Investigator at the Allen Institute for Brain Science and Principal Investigator on the project.
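To give a sense of the kind of object such a reconstruction produces, here is a toy sketch of a cortical network represented as a directed connectivity matrix, with one step of activity propagated through it. The network size, connection probability, and threshold are placeholders, not MICrONS data.

```python
# Toy sketch: a directed neuron-to-neuron connectivity matrix and one step of
# linear activity propagation through it. All values are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 1000

# adjacency[i, j] = synaptic weight from neuron i to neuron j (0 = no connection)
adjacency = (rng.random((n_neurons, n_neurons)) < 0.01) * rng.random((n_neurons, n_neurons))

print("total connections:", int(np.count_nonzero(adjacency)))
print("mean out-degree:", np.count_nonzero(adjacency, axis=1).mean())

# One way functional questions get asked of such a graph: stimulate a few neurons
# and see which neurons the activity reaches after one synaptic step.
activity = np.zeros(n_neurons)
activity[:10] = 1.0                      # stimulate the first 10 neurons
next_activity = activity @ adjacency     # one step of linear propagation
print("neurons driven above threshold:", int((next_activity > 0.5).sum()))
```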