
Interesting article about nanoswitches and how this technology enables the self-assembly of molecules. This work helps advance many efforts, such as molecular memory devices, photovoltaics, gas sensors, and light emission. However, I also see potential uses in nanobot technology as it relates to future alignment and mapping with the brain.


Molecular nanoswitch: calculated adsorption geometry of porphine adsorbed at copper bridge site (credit: Moritz Müller et al./J. Chem. Phys.)

Technical University of Munich (TUM) researchers have simulated a self-assembling molecular nanoswitch in a supercomputer study.

As with other current research in bottom-up self-assembly nanoscale techniques, the goal is to further miniaturize electronic devices, overcoming the physical limits of currently used top-down procedures such as photolithography.

Another interesting find from KurzweilAI.


Artist’s rendering of bioresorbable implanted brain sensor (top left) connected via biodegradable wires to external wireless transmitter (ring, top right) for monitoring a rat’s brain (red) (credit: Graphic by Julie McMahon)

Researchers at the University of Illinois at Urbana-Champaign and Washington University School of Medicine in St. Louis have developed a new class of small, thin electronic sensors that can monitor temperature and pressure within the skull — crucial health parameters after a brain injury or surgery — then melt away when they are no longer needed, eliminating the need for additional surgery to remove the monitors and reducing the risk of infection and hemorrhage.

Similar sensors could be adapted for postoperative monitoring in other body systems as well, the researchers say.

It’s been a weird day for weird science. Not long after researchers claimed victory in performing a head transplant on a monkey, the US military’s blue-sky R&D agency announced a completely insane plan to build a chip that would enable the human brain to communicate directly with computers. What is this weird, surreal future?

It’s all real, believe it or not. Or at least DARPA desperately wants it to be. The first wireless brain-to-computer interface actually popped up a few years ago, and DARPA’s worked on various brain chip projects over the years. But there are shortcomings to existing technology: According to today’s announcement, current brain-computer interfaces are akin to “two supercomputers trying to talk to each other using an old 300-baud modem.” They just aren’t fast enough for truly transformative neurological applications, like restoring vision to a blind person. This would ostensibly involve connecting a camera that can transmit visual information directly to the brain, and the implant would translate the data into neural language.
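For a rough sense of the bandwidth gap behind that modem analogy, here is a back-of-the-envelope calculation; the frame size, bit depth, and frame rate are illustrative assumptions of mine, not figures from DARPA or Gizmodo:

```python
# Rough, illustrative numbers only -- not figures from DARPA or Gizmodo.
BAUD = 300                      # bits per second of the "old modem" in the analogy
FRAME_BITS = 640 * 480 * 8      # one modest greyscale camera frame, 8 bits per pixel

seconds_per_frame = FRAME_BITS / BAUD
print(f"One 640x480 frame at 300 baud: {seconds_per_frame / 3600:.1f} hours")

# A crude 30 fps video feed would need this many bits per second:
required_bps = FRAME_BITS * 30
print(f"Bandwidth needed for 30 fps: {required_bps / 1e6:.1f} Mbit/s "
      f"(about {required_bps / BAUD:,.0f} times the modem)")
```

Under these assumptions a single frame would take over two hours to send, which is the scale of shortfall the announcement is gesturing at.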

To accomplish this magnificent feat, DARPA is launching a new program called Neural Engineering System Design (NESD) that stands to squeeze some characteristically bonkers innovation out of the science community. In a press release, the agency describes what’s undoubtedly the closest thing to a Johnny Mnemonic plot-line you’ve ever seen in real life. It reads:

Read more

A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”
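As a purely conceptual sketch of what “converting between the electrochemical language used by neurons and the ones and zeros of information technology” might look like at its very simplest, the snippet below thresholds a sampled voltage trace into a spike train and packs it into bytes; the sampling rate, threshold, and signal are invented values for illustration, not anything from the NESD program:

```python
import random

# Purely conceptual sketch of the "electrochemical -> ones and zeros" translation.
# Sampling rate, threshold, and the fake signal are invented for illustration.
SAMPLE_RATE_HZ = 30_000          # extracellular recordings are typically sampled around this rate
SPIKE_THRESHOLD_UV = -50.0       # crude amplitude threshold for calling a spike

def detect_spikes(voltage_trace_uv):
    """Turn a sampled voltage trace (microvolts) into a binary spike train."""
    return [1 if v < SPIKE_THRESHOLD_UV else 0 for v in voltage_trace_uv]

def pack_bits(bits):
    """Pack the spike train into bytes -- the 'ones and zeros' handed to the digital side."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# Fake one millisecond of noisy data with occasional spike-like dips.
trace = [random.gauss(0, 10) - (80 if random.random() < 0.01 else 0)
         for _ in range(SAMPLE_RATE_HZ // 1000)]
print(pack_bits(detect_spikes(trace)).hex())
```

A real interface would of course do far more (spike sorting, compression, stimulation in the other direction), but the thresholding-and-encoding step conveys the basic translation idea.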

Read more

You may not realise it when your alarm clock forces you into a bleary-eyed stupor first thing in the morning, but there’s actually a complex chemical process going on inside your brain as you wake up. And scientists now think they’ve identified the part of the brain that ends periods of light sleep and brings us into a state of wakefulness.

Researchers from Switzerland focussed their attention on a specific neural circuit located between the brain’s hypothalamus and thalamus. By stimulating this circuit with pulses of light in a group of mice, the academics could prompt rapid awakenings from sleep and then cause prolonged wakefulness.

Why should we be excited about knowing more about how we get yanked out of our regular sleep patterns? The researchers say it could ultimately help those who are trapped in a long-term coma or vegetative state, and on the flip side, could also help those with sleep disorders, or at least give doctors a better idea of why they aren’t sleeping correctly.

Read more

It’s leading to a different way of thinking about computing.

This year’s Detroit auto show is proving that autonomous driving is no longer a techie’s pipe dream. Even holdout Akio Toyoda has finally joined the parade. The self-driving car is coming.

But behind that development is an even more profound change: artificial intelligence (also known as “deep learning”) has gone mainstream. The autonomous driving craze is just the most visible manifestation of the fact that computers now have the capacity to look, learn and react to complex situations as well as or better than humans. It’s leading to a profoundly different way of thinking about computing. Instead of writing millions of lines of code to anticipate every situation, these new applications ingest vast amounts of data, recognize patterns, and “learn” from them, much as the human brain does.
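As a toy illustration of the “ingest data, recognize patterns, and learn” approach (my own minimal example, unrelated to any actual driving software), the snippet below fits a tiny logistic model to a handful of labelled brake/no-brake examples instead of hand-coding the rule:

```python
import numpy as np

# Toy illustration only: learn a "brake / don't brake" rule from examples
# instead of hand-coding it. Features: [distance to obstacle (m), speed (m/s)].
X = np.array([[5, 20], [10, 25], [40, 10], [60, 15], [8, 30], [50, 5]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0], dtype=float)   # 1 = brake, 0 = keep going

# Normalize the features and add a bias column.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)
Xn = np.hstack([Xn, np.ones((len(Xn), 1))])

w = np.zeros(Xn.shape[1])
for _ in range(5000):                            # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-Xn @ w))
    w -= 0.1 * Xn.T @ (p - y) / len(y)

# The learned weights now encode the pattern; nobody wrote an explicit if/else rule.
test = (np.array([[7.0, 22.0]]) - X.mean(axis=0)) / X.std(axis=0)
test = np.hstack([test, np.ones((1, 1))])
prob = 1.0 / (1.0 + np.exp(-(test @ w)[0]))
print(f"brake probability for distance 7 m at 22 m/s: {prob:.2f}")
```

Scaled up from six examples to millions, and from a linear model to deep networks, this is the shift the article is describing.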

Read more

Imagine: it’s 2027, you’re on the job competing with other AI, and there is so much information exposed to you that you’re unable to scan and capture all of it onto your various devices and personal robot. The non-intrusive nanobot for brain enhancement is still years away. Do you finally spend a few hundred dollars on the latest chip implant, which requires tricky brain surgery, or wait for the nanobot? These are questions that folks will have to assess for themselves, and this could actually streamline or condition society into a singularity culture. https://lnkd.in/bTVAjhb


A mom pushes a stroller down the sidewalk while Skyping. A family of four sits at the dinner table plugged into their cell phones with the TV blaring in the background. You get through two pages in a book before picking up your laptop and scrolling through a bottomless stream of new content.

Information technology has created a hyper-connected, over-stimulated, distracted and alienated world. We’ve been living with internet-connected computers and other mobile devices long enough to have begun taking them for granted.

But already the next wave is coming, and it promises to be even more immersive.

This is excellent news for epilepsy research.


Epilepsy, a disorder in which nerve cell activity in the brain is disturbed, causing seizures, is the fourth most common neurological problem, following only migraine, stroke and Alzheimer’s. There is no cure for epilepsy, but there are a variety of treatment options. The disease is estimated to affect 2.2 million people in the U.S., with 150,000 people developing the condition each year.

Personalized medicine: Scientists at AES discussed how new technologies, such as gene editing using CRISPR-Cas9 and next-generation sequencing, are empowering them to take a new crack at the human genome and find new ways to diagnose and treat epilepsy.

“Recent advances in DNA sequencing and genomic technologies has facilitated a flood of discoveries in identifying genetic causes of epilepsy. Where we’ve been most successful is in the epileptic encephalopathies (EE),” lead author of one of the studies presented, Candace Myers, a senior at the University of Washington, said in a press conference.
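To make the sequencing side of that a little more concrete, here is a deliberately simplified sketch of one routine analysis step, intersecting a patient’s variant calls with a panel of known epilepsy genes; the gene names are real epilepsy-associated genes, but the variant records and data format are invented for illustration:

```python
# Illustrative sketch only: intersect variant calls with an epilepsy gene panel.
# SCN1A, KCNQ2, STXBP1 and CDKL5 are genes reported in epileptic encephalopathies;
# the variant records below are made up for the example.
EPILEPSY_PANEL = {"SCN1A", "KCNQ2", "STXBP1", "CDKL5"}

patient_variants = [
    {"gene": "SCN1A", "change": "c.2589+3A>T", "effect": "splice_region"},
    {"gene": "TTN",   "change": "c.1043C>T",   "effect": "missense"},
    {"gene": "KCNQ2", "change": "c.881C>T",    "effect": "missense"},
]

# Keep only variants that fall in panel genes -- candidate diagnostic findings.
candidates = [v for v in patient_variants if v["gene"] in EPILEPSY_PANEL]
for v in candidates:
    print(f"{v['gene']}: {v['change']} ({v['effect']})")
```

Real pipelines add annotation, inheritance filtering, and clinical interpretation on top of this, but the gene-panel intersection is the kind of step that sequencing has made routine.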

Read more

Gerd predicts that machines will have the same power as a human brain by 2025.


By the year 2025, machines will have the same power as the human brain, and in 2051 they will have the power of the entire global population. Does it sound far-fetched? It is certainly a grand claim, but who better to make these kinds of observations than Gerd Leonhard, Futurist, Keynote Speaker, Author and CEO of The Futures Agency.

This was one of the many observations Mr Leonhard spoke to The Malta Independent about ahead of his Keynote Address for The Economist at their ‘The World in 2016 Gala Dinner’ tonight at the Hilton, St Julian’s, where every year they invite experts and innovators from all over the world to share their ‘predictions’ for the coming year.

Mr Gerd Leonhard is a futurist, which means that his main role is to observe and deduce plausible scenarios for the future of an industry, an organization or even a country. He does not call his observations ‘predictions’ but ‘foresight’, which, according to Mr Leonhard, is something everyone can do; the difference is that most people tend to look at “95% today, while [he looks] at 95% tomorrow.”
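To put numbers like 2025 and 2051 in context, here is a back-of-the-envelope extrapolation; the brain estimate, population figure, and doubling time below are commonly cited ballpark assumptions of mine, not Mr Leonhard’s:

```python
import math

# Back-of-the-envelope only; the assumptions below are illustrative, not Leonhard's.
BRAIN_OPS = 1e16          # commonly cited ballpark for one brain, in operations per second
POPULATION = 8e9          # rough world population
DOUBLING_YEARS = 2.0      # assumed compute doubling time

# Doublings needed to scale from one brain's worth of compute to all of humanity's:
doublings = math.log2(POPULATION)      # ~33 doublings from 1 brain to 8 billion brains
print(f"{doublings:.0f} doublings -> about {doublings * DOUBLING_YEARS:.0f} years")
# With ~2025 as the single-brain milestone, a two-year doubling time lands around 2090;
# reaching the 2051 figure would require a faster doubling time, closer to ten months.
```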

Read more