
In April 2015, a paper by Chinese scientists about their attempts to edit the DNA of a human embryo rocked the scientific world and set off a furious debate. Leading scientists warned that altering the human germ line without first studying the consequences could prove horrific. Geneticists with good intentions could mistakenly engineer changes in DNA that generate dangerous mutations and cause painful deaths. Scientists — and countries — with less noble intentions could again try to build a race of superhumans.

Human DNA is, however, merely one of many commercial targets of ethical concern. The DNA of every single organism — every plant, every animal, every bacterium — is now fair game for genetic manipulation. We are entering an age of backyard synthetic biology that should worry everybody. And it is coming about because of CRISPRs: clustered regularly interspaced short palindromic repeats.

Discovered by scientists only a few years ago, CRISPRs are elements of an ancient system that protects bacteria and other single-celled organisms from viruses, acquiring immunity to them by incorporating genetic elements from the virus invaders. CRISPRs evolved over millions of years to trim pieces of genetic information from one genome and insert them into another. And this bacterial antiviral defense serves as an astonishingly cheap, simple, elegant way to quickly edit the DNA of any organism in the lab.

Until recently, editing DNA required sophisticated labs, years of experience, and many thousands of dollars. The use of CRISPRs has changed all that. CRISPRs work by using an enzyme — Cas9 — that homes in on a specific location in a strand of DNA. The process then edits the DNA to either remove unwanted sequences or insert payload sequences. CRISPRs use an RNA molecule as a guide to the DNA target. To set up a CRISPR editing capability, a lab only needs to order an RNA fragment (costing about $10) and purchase off-the-shelf chemicals and enzymes for $30 or less.
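The targeting step described above can be illustrated in a few lines of code. The sketch below is not lab software, just a toy model of the principle: Cas9 uses its guide RNA (~20 nucleotides) to find a matching stretch of DNA that sits immediately upstream of a short "PAM" motif (NGG for the commonly used S. pyogenes Cas9); only sites with both the match and the PAM are cut. The sequences here are invented for illustration.

```python
# Toy model of CRISPR-Cas9 target recognition: scan a DNA string for
# positions where the guide sequence matches and is immediately
# followed by an NGG PAM motif (N = any base). Illustrative only.

def find_cas9_targets(dna: str, guide: str) -> list[int]:
    """Return start positions where `guide` matches `dna` and is
    followed by an NGG PAM, i.e. sites Cas9 could in principle cut."""
    dna, guide = dna.upper(), guide.upper()
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        if dna[i:i + len(guide)] != guide:
            continue  # guide does not match here
        pam = dna[i + len(guide):i + len(guide) + 3]
        if pam[1:] == "GG":  # first PAM base can be anything
            hits.append(i)
    return hits

genome = "TTACGTACGGATCCAGCTAAGGCATGC"  # made-up sequence
guide = "ACGGATCCAGCTA"                 # made-up guide
print(find_cas9_targets(genome, guide))
```

Once a matching site is found, the real enzyme cuts both DNA strands there, and the cell's repair machinery is exploited to delete or insert sequences at the break.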

Read more

Using a double layer of lipids facilitates assembly of DNA origami nanostructures, bringing us one step closer to future DNA nanomachines, as in this artist’s impression (credit: Kyoto University’s Institute for Integrated Cell-Material Sciences)

Kyoto University scientists in Japan have developed a method for creating larger 2-D self-assembling DNA origami nanostructures.

Current DNA origami methods can create extremely small two- and three-dimensional shapes that could be used as construction material to build nanodevices, such as nanomotors, in the future for targeted drug delivery inside the body, for example. KurzweilAI recently covered advanced methods developed by Brookhaven National Laboratory and Arizona State University’s Biodesign Institute.

Read more

Intel Corporation introduced the 6th Generation Intel® Core™ processor family, the company’s best processors ever. The launch marks a turning point in people’s relationship with computers. The 6th Gen Intel Core processors deliver enhanced performance and new immersive experiences at the lowest power levels ever, and they support the broadest range of device designs – from the ultra-mobile Compute Stick, to 2-in-1s and huge high-definition All-in-One desktops, to new mobile workstations.

There are over 500 million computers in use today that are four to five years old or older. They are slow to wake, their batteries don’t last long, and they can’t take advantage of all the new experiences available today.

The processors are built on the new Skylake microarchitecture using Intel’s leading 14nm manufacturing process technology.

Read more

Tech giant Intel has pledged $50 million (£33 million) to quantum computing research, which could ultimately give us a supercomputer unlike any machine we have known so far.

In an open letter, CEO Brian Krzanich announced a 10-year partnership with Delft University of Technology and TNO, the Dutch Organisation for Applied Research.

Describing the “exciting possibilities” of the research, he said: “Quantum computing is one of the more promising areas of long-term research we’ve been exploring in our labs, with some of the smartest engineers in the world.”

Read more

“Beyond implementation of quantum communication technologies, nanotube-based single photon sources could enable transformative quantum technologies including ultra-sensitive absorption measurements, sub-diffraction imaging, and linear quantum computing. The material has potential for photonic, plasmonic, optoelectronic, and quantum information science applications…”


In optical communication, critical information ranging from a credit card number to national security data is transmitted in streams of laser pulses. However, the information transmitted in this manner can be stolen by splitting out a few photons (the quantum of light) of the laser pulse. This type of eavesdropping could be prevented by encoding bits of information on quantum mechanical states (e.g. polarization state) of single photons. The ability to generate single photons on demand holds the key to realization of such a communication scheme.
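The scheme sketched in this paragraph is the idea behind quantum key distribution protocols such as BB84: each key bit rides on the polarization state of a single photon, chosen in one of two randomly selected bases, and the receiver keeps only the bits measured in the matching basis. The toy simulation below shows just that sifting step; the function name and parameters are invented for illustration, and no actual quantum physics is simulated.

```python
# Toy sketch of BB84-style sifting: Alice sends each bit on a single
# photon polarized in a randomly chosen basis; Bob measures in his own
# random basis. Only bits where the bases agree survive into the key.
import random

def bb84_sift(n: int, seed: int = 0) -> list[int]:
    """Simulate n single-photon transmissions; return the sifted key
    (bits where Alice's and Bob's polarization bases happened to agree)."""
    rng = random.Random(seed)
    key = []
    for _ in range(n):
        bit = rng.randint(0, 1)        # Alice's raw key bit
        a_basis = rng.randint(0, 1)    # 0 = rectilinear, 1 = diagonal
        b_basis = rng.randint(0, 1)    # Bob picks a basis at random
        if a_basis == b_basis:
            key.append(bit)            # matching basis: Bob reads the bit exactly
        # mismatched basis: Bob's outcome would be random, so the bit is discarded
    return key

key = bb84_sift(16)
print(len(key), key)
```

On average half the transmissions survive sifting. The security of the real protocol rests on the paragraph's point: with genuine single photons, an eavesdropper cannot split off a copy, and measuring in the wrong basis disturbs the state in a detectable way.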

By demonstrating that incorporation of pristine carbon nanotubes into a silicon dioxide (SiO2) matrix could lead to the creation of solitary oxygen dopant states capable of fluctuation-free, room-temperature single photon emission, Los Alamos researchers revealed a new path toward on-demand single photon generation. Nature Nanotechnology published their findings.

Read more

The media is all-abuzz with tales of Artificial Intelligence (AI). The provocative two-letter symbol conjures up images of invading autonomous robot drones and Terminator-like machines wreaking havoc on mankind. Then there’s the pervasive presence of deep learning and big data, also referred to as artificial intelligence. This might leave some of us wondering: is artificial intelligence one or all of these things?

In that sense, AI leaves a bit of an ambiguous trail – there does not seem to be a clear definition, even amongst scientists and researchers in the field. There are certainly many different branches of AI. I asked Dr. Roger Schank, Professor Emeritus at Northwestern University, for a clearer definition; he told me that artificial intelligence is not big data and deep learning algorithms, at least not in the pure sense of the definition.

Roger emphasizes that intelligence has everything to do with the intersection of learning and interaction and memory. “I will tell you the number one thing people do, it’s pretty obvious – they talk to each other. Guess how hard that is? That is phenomenally hard, that is the subsection of AI called natural language processing, the part that I worked on my whole life, and I understand how far away we are from that.”

Take a “simple” AI concept, such as how to create a computer that plays chess, to better understand the challenge. There are, more or less, two approaches to creating an intelligent machine that can play chess like a champion. The first approach requires programming the computer to predict thousands of moves ahead of time, while the second approach involves building a computer system that tries to imitate a grand master. In the historical pursuit of how to create an artificially intelligent entity, a vast majority of scientists chose the first option of programming based on prediction.

Predicting thousands of moves sounds like a next to impossible task, but scientists have worked over decades to try to do just that, because the second option — imitation through trial and error — is that much more complex. Still, if we want to create an artificial intelligence that thinks on its own, Schank argues that the second option is the more promising of the two. “Some of us always saw that this could be the field that could tell us more about people by pursuing method number two (i.e. imitating the grand master),” he says.
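The first approach — searching moves ahead — has a standard formal core: minimax search over a game tree, where the machine assumes the opponent will always reply with the move that is worst for it. The sketch below illustrates the principle on a trivial hand-made tree (nested lists whose leaves are position scores for the player at the root); it is an illustration of the idea, not a chess engine, and real engines add alpha-beta pruning and position evaluation on top.

```python
# Minimal minimax sketch: the game tree is nested lists, leaves are
# integer scores from the root player's point of view. At maximizing
# levels we pick the best score for us; at minimizing levels we assume
# the opponent picks the worst score for us.

def minimax(node, maximizing: bool = True) -> int:
    if isinstance(node, int):            # leaf: an already-scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves available to us, each answered by two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # -> 3: the best outcome we can guarantee
```

Note the result is 3, not 9: the 9 is unreachable because a rational opponent would steer that branch to 2. Scaling this lookahead to chess-sized trees is what made "approach one" a decades-long engineering effort.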

Learning more about people while simultaneously developing true intelligence is why Schank entered the field in the first place. “When we talk about Facebook, we might think about the work of AI and face recognition; this technology has certainly come a long way, but that’s a different part of AI. The part of AI that people imagine – the talking and teaching and thinking robots – most people that talk about AI are not really talking about these questions.”

The famous Turing test, run every year with chat bots, is another example of researchers working towards developing an artificial intelligence, and yet every year there is little doubt that the AI is a computer. “This is not AI, this is something (chat bots) that could ‘fool’ someone,” Roger argues.

In order to make a legitimately useful house robot, for example, scientists would have to solve the natural language problem, the memory problem, and the learning problem. If a future household robot makes a bad meal and overcooks the meat, you want the robot to learn from its mistakes and become smarter through experience. Schank describes this seemingly simple act – learning from mistakes and having a sense of awareness about what to do next – as the hallmark of intelligence.

Schank is particularly interested in AI that can help humans by providing more than just a great restaurant review or telling a joke on command. Currently, he is building a program called ExTRA (Experts Telling Relevant Advice), made for and through DARPA, with the objective of “getting a machine to say the right thing to the right person at the right moment.” Actually, the emphasis is less on a machine and more on an intelligently organized body of knowledge.

Roger tells a real-life analogy, which starts with a ship traveling through the Suez Canal when the boiler suddenly catches fire. The captain starts to put out the fire, and his superior, who is also on board the ship, asks him what he is doing. “Why, putting out the fire of course!” replies the captain. The superior orders him to continue through the canal without stopping, explaining that the ship cannot stop in the Suez Canal, for reasons relating to corrupt Egyptian officials who will not hesitate to take over the ship and cargo. “We’re not doing it, keep going,” he orders.

“I thought that was a weird story,” remarks Schank. It was only later, after meeting a real ship captain and giving him the same premise, that Roger was surprised to find the captain arriving at the same conclusion: “Full speed ahead!” This story serves as an illustration of getting a story, from an expert, at the moment when you need it most – a “just in time” story. There is untold value in receiving wisdom in a timely fashion, often expressed in various cultures through oral or written short stories.
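At its simplest, an "intelligently organized body of knowledge" that serves up the right story at the right moment is an index from features of a situation to stories that match them. The toy below is purely illustrative — the story keys, keyword sets, and scoring are all invented here and are not how Schank's ExTRA system is actually built — but it shows the retrieval idea in miniature.

```python
# Toy "just in time" story retrieval: each indexed story carries a set
# of situation keywords; we return the story whose keywords best
# overlap the words describing the current situation. Invented data.
import re

STORIES = {
    "suez captain": {"fire", "ship", "keep", "going", "pressure"},
    "overcooked meal": {"mistake", "learn", "cooking", "retry"},
}

def best_story(situation: str) -> str:
    """Return the indexed story whose keyword set best overlaps the
    words of the situation description."""
    words = set(re.findall(r"[a-z]+", situation.lower()))
    return max(STORIES, key=lambda s: len(STORIES[s] & words))

print(best_story("the ship is on fire, should we keep going?"))
```

A real system would need far richer indexing than keyword overlap — which is exactly where Schank's points about memory, language, and learning come back in.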

On the road to getting a machine to be intelligent, one would have to conquer the “expert in the machine.” Working on artificial intelligence that imitates a human mind is not a clean and streamlined process. Developing a machine or system that can imitate a story-telling human does not necessarily equal an intelligent entity. Does the computer really understand what it’s saying? As far as we can deduce, the answer is still no. At some point, Schank remarks, scientists have to know and incorporate the structure of human memory and learning – the key issue in intelligence – in order to build a truly intelligent machine.

Schank does not believe a true, machine-based AI is going to emerge in his lifetime. There simply has not been enough funding in the appropriate directions of AI research. Yet Roger believes that in the next 10 years we can build a version of ‘just-in-time’ teaching: an indexed system that helps people think through situations in life by providing them with an extension of mind, a tool that improves human decision-making through helpful and relevant stories.