
When I work on AI today and look at its fundamental principles, it is not that much different from the work that a teammate and I did many years ago developing an RT Proactive Environmental Response System. Sure, there are differences in processors and so on, but the principles are the same when you consider some of the extremely complex algorithms we had to develop to ensure that our system could proactively interpret patterns and act on its own analysis. We did keep a way to override any system actions.


These questions originally appeared on Quora, the knowledge sharing network where compelling questions are answered by people with unique insights.

Answers by Neil Lawrence, Professor of Machine Learning at the University of Sheffield, on Quora.

Q: What do you think about the impact of AI and ML on the job market in 10, 20, 50 years from now?

The US Government’s cool $100 million in brain research. As we have been highlighting over the past couple of months, the US Government’s IARPA and DARPA programs have stepped up, and intend to keep stepping up, their own efforts in BMIs and robotics for the military; I am certain that this research will help their efforts and progress.


Intelligence project aims to reverse-engineer the brain to find algorithms that allow computers to think more like humans.

By Jordana Cepelewicz on March 8, 2016.

Read more

Don’t let the title mislead you: quantum computing is not going to require AI to operate or to develop its computing capabilities. However, it is well known across quantum communities that AI will greatly benefit from the processing capabilities and performance of quantum computing. There has been strong interest in marrying the two together, but the quantum maturity gap and timing did not make that possible until recently, thanks to various discoveries in microchip development, programming languages (Quipper), quantum-dot silicon wafers, and more.


Researchers at the University of Vienna have created an algorithm that helps plan experiments in this mind-boggling field.

Read more

I am glad to see this article published, because it echoes many established concerns about the Chinese and Russian governments, and their hackers, getting their infrastructures onto quantum computing before the US, Europe, and Canada do. Computer scientists at MIT and the University of Innsbruck say they’ve assembled the first five quantum bits (qubits) of a quantum computer that could someday factor any number, and thereby crack the security of traditional encryption schemes.


Shor’s algorithm performed in a system less than half the size experts expected.

Read more

“Notice for all Mathematicians”: Are you a mathematician who loves complex algorithms? If so, IARPA wants to speak with you.


Last month, the intelligence community’s research arm requested information about training resources that could help artificially intelligent systems get smarter.

It’s more than an effort to build new, more sophisticated algorithms. The Intelligence Advanced Research Projects Activity could actually save money by refining existing algorithms that have been previously discarded by subjecting them to more rigorous training.

Nextgov spoke with Jacob Vogelstein, a program manager at IARPA who specializes in applied neuroscience, about the program. This conversation has been edited for length and clarity.

Brown University engineers have developed a new technique to help researchers understand how cells move through complex tissues in the body. They hope the tool will be useful in understanding all kinds of cell movements, from how cancer cells migrate to how immune cells make their way to infection sites.

The technique is described in a paper published in the Proceedings of the National Academy of Sciences.

The traditional method for studying cell movement is called traction force microscopy (TFM). Scientists take images of cells as they move along 2-D surfaces or through 3-D gels that are designed as stand-ins for actual body tissue. By measuring the extent to which cells displace the 2-D surface or the 3-D gel as they move, researchers can calculate the forces generated by the cell. The problem is that in order to do the calculations, the stiffness and other mechanical properties of the artificial tissue environment must be known.
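To make the dependence on substrate stiffness concrete, here is a deliberately simplified 1-D sketch of the TFM idea: recover stress from a measured displacement field via Hooke’s law. Real TFM solves a full 2-D/3-D elasticity inverse problem; the modulus and displacement profile below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Assumed substrate stiffness (Young's modulus, Pa) -- this is the
# quantity that must be known in advance for TFM to work.
E = 10_000.0

# Positions along the substrate (m) and a made-up "measured"
# displacement field: a small bump where the cell pulled on the gel.
x = np.linspace(0.0, 100e-6, 101)
u = 1e-6 * np.exp(-((x - 50e-6) / 10e-6) ** 2)

strain = np.gradient(u, x)   # strain = du/dx
stress = E * strain          # Hooke's law: stress = E * strain

peak = np.max(np.abs(stress))
print(f"peak traction stress ~ {peak:.0f} Pa")
```

Note that the recovered stress scales linearly with `E`: the same measured displacements in a gel twice as stiff would imply twice the force, which is exactly why the mechanical properties of the artificial tissue must be characterized first.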

Read more

A team of Stanford researchers has developed a novel means of teaching artificial intelligence systems how to predict a human’s response to their actions. They’ve given their knowledge base, dubbed Augur, access to the online writing community Wattpad and its archive of more than 600,000 stories. This information will enable support vector machines (basically, learning algorithms) to better predict what people do in the face of various stimuli.

“Over many millions of words, these mundane patterns [of people’s reactions] are far more common than their dramatic counterparts,” the team wrote in their study. “Characters in modern fiction turn on the lights after entering rooms; they react to compliments by blushing; they do not answer their phones when they are in meetings.”

In its initial field tests, using an Augur-powered wearable camera, the system correctly identified objects and people 91 percent of the time. It correctly predicted their next move 71 percent of the time.
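The core idea, scene text in, likely next action out, can be sketched in a few lines. Augur itself trains support vector machines over millions of words of fiction; as a dependency-free stand-in, this toy uses a simple bag-of-words overlap score, and every scene/action pair below is invented for illustration.

```python
from collections import Counter

# Tiny hand-written stand-ins for patterns mined from fiction
# (the real system learns these from ~600,000 Wattpad stories).
training = [
    ("she entered the dark room", "turn on the lights"),
    ("he walked into the dim bedroom", "turn on the lights"),
    ("her phone rang during the meeting", "ignore the phone"),
    ("his phone buzzed in the conference room", "ignore the phone"),
    ("he paid her a kind compliment", "blush"),
    ("she received a sweet compliment", "blush"),
]

# Pool the words seen alongside each action label.
profiles = {}
for scene, action in training:
    profiles.setdefault(action, Counter()).update(scene.split())

def predict(scene):
    """Return the action whose word profile best overlaps the scene."""
    words = scene.split()
    return max(profiles, key=lambda a: sum(profiles[a][w] for w in words))

print(predict("she walked into a dark room"))  # -> turn on the lights
```

A proper SVM replaces the raw overlap score with a learned weighting of words, which is what lets the mundane patterns quoted above dominate the dramatic ones at scale.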

Read more

K-Glass, the augmented-reality (AR) smart glasses first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014, with a second version released in 2015, is back with an even stronger model. The latest version, which KAIST researchers are calling K-Glass 3, allows users to text a message or type in key words for Internet surfing by offering a virtual keyboard for text and even one for a piano.

Currently, most wearable head-mounted displays (HMDs) suffer from a lack of rich user interfaces, short battery lives, and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and are not optimized for wearable smart glasses. Recently, gaze recognition was proposed for HMDs including K-Glass 2, but gaze is insufficient to realize a natural user interface (UI) and experience (UX), such as user’s gesture recognition, due to its limited interactivity and lengthy gaze-calibration time, which can be up to several minutes.

As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs with just bare hands. This processor is composed of a pre-processing core to implement stereo vision, seven deep-learning cores to accelerate real-time scene recognition within 33 milliseconds, and one rendering engine for the display.

Read more