
My new story for my #transhumanism column at Psychology Today on Direct Neurofeedback:


Transhumanism—the movement that uses science and technology to improve the human being—covers many different fields of research. There are exoskeleton suits to help the disabled; there are stem cell treatments to cure disease; there are robots and AI to perform human chores. The field is wide open and booming as humanity uses more and more tech in its world.

It’s not that often I get to participate directly in these radical technologies, but I did so recently when Grant Rudolph, Clinical Director at Echo Rock Neurotherapy in Mill Valley, California, invited me to try his Direct Neurofeedback techniques. Via his computer and EEG wire hookups, Mr. Rudolph echoed my brainwave information back into my head at an imperceptible level. I did two sessions of Direct Neurofeedback.

At first, I was skeptical that I’d even feel anything since the EEG information can’t be detected by the skin as a sensation, but within five minutes of having the wires stuck onto my forehead, I began feeling different. I can compare it to a light dose of a recreational drug: I felt happy, content, and worry-free. I also felt more introspective than normal. The feedback only took a few seconds, and after about 15 minutes, I seemed to notice the world’s colors were sharper and my hearing was more acute. The heightened awareness and calming effect lasted about 24 hours and then most of it gradually wore off. Some of the clarity must still be working, because getting things done sometimes still seems easier. I’m told that continued sessions would make this state of clarity my new norm.

At well over $150,000 per appliance, the Volta GPU-based DGX appliances from Nvidia, which take aim at deep learning with framework integration and eight NVLink-connected Volta GPUs per node, are set to appeal to the most bleeding-edge of machine learning shops.

Nvidia has built its own clusters by stringing several of these together, just as researchers at Tokyo Tech have done with the Pascal-generation systems. But one of the first commercial customers for the Volta-based boxes is the Center for Clinical Data Science, which is part of the first wave of hospitals set to use deep learning for MR and CT image analysis.

The center, which is based in Cambridge, Massachusetts, has secured a whopping four DGX-1 Volta appliances, which sport the latest GPUs, eight per node, connected by the NVLink interconnect. The Next Platform talked with Neil Tenenholtz, senior data scientist at the center, about where deep learning will yield results for hospitals and medical research and about their early experiences with the four machines.
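
To make the hardware description concrete, here is a minimal, hypothetical PyTorch sketch of how a training job might fan out across the eight NVLink-connected GPUs in a single DGX-1 node. The model, data, and task are placeholders (random tensors standing in for CT slices), not the center's actual pipeline.

```python
# Hypothetical sketch: spreading one training job across all visible GPUs in a
# node (e.g., the eight Volta cards of a DGX-1). Model and data are toys.
import torch
import torch.nn as nn


class TinyCTClassifier(nn.Module):
    """Toy CNN standing in for an MR/CT image-analysis model."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)


def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = TinyCTClassifier().to(device)

    # nn.DataParallel splits each batch across all visible GPUs; on a DGX-1
    # that would be the eight Volta cards, with gradients reduced over NVLink.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Random tensors stand in for a batch of single-channel CT slices.
    images = torch.randn(64, 1, 128, 128, device=device)
    labels = torch.randint(0, 2, (64,), device=device)

    for step in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss {loss.item():.4f}")


if __name__ == "__main__":
    main()
```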

Read more

Doing well on such a challenge would appear to require significant advances in AI technology, making it a potentially powerful way to advance the field. In this video, Carissa Schoenick discusses “Moving Beyond the Turing Test with the Allen AI Science Challenge,” in the September 2017 CACM.

http://ow.ly/pyjO30f7EpM

Read more

The new self-propelled, cancer-seeking bacteriobot swims right into the tumor and zaps it with a deadly payload of cancer drugs.

The recently perfected #bacteriobot holds ‘a lot of promise’ in treating #cancer, says a physician. Cancer patients at a hospital in Montreal may be the first to be treated with these #nanorobots built out of bacteria.


Google’s futurist Ray Kurzweil once said that within decades we will have nanobots swimming through our veins, keeping us healthy. The tiny robots will keep us healthy by correcting DNA errors, removing toxins, extending our memories, and zapping cancer. Kurzweil said that back in 2007, and his prophecy is becoming a reality, at least in the treatment of cancer.

Current international shipping law states that ocean-going vessels must be properly crewed, so fully autonomous, unmanned ships aren’t allowed in international waters. As such, the Yara Birkeland will have to operate close to the Norwegian coast at all times, carrying out regular short journeys between three ports in the south of the country.

But change is afoot in the maritime sector, and earlier this year the UN’s International Maritime Organisation (IMO) began discussions that could allow unmanned ships to operate across oceans. This raises the prospect of crewless “ghost” ships crisscrossing the ocean, with the potential for cheaper shipping with fewer accidents.

Several Japanese shipping firms, for example, are reportedly investing hundreds of millions of dollars in the technology. And British firm Rolls-Royce demonstrated the world’s first remote-controlled unmanned commercial ship earlier this year.

Read more

Shai Ben-David, Professor at the University of Waterloo, gave a 23-lecture machine learning course (CS 485/685) beginning Jan 14, 2015…

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.
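
As a minimal illustration of that definition (not material from the lectures), the sketch below "learns" a classification rule from labeled examples rather than having the rule hard-coded, using a one-nearest-neighbor rule; the toy data and function names are made up for this example.

```python
# Minimal illustration: a one-nearest-neighbor classifier labels a new point
# with the label of its closest training example, so the decision rule comes
# from the data instead of being explicitly programmed.
import numpy as np


def nearest_neighbor_predict(X_train, y_train, x_new):
    """Return the label of the training point closest to x_new."""
    distances = np.linalg.norm(X_train - x_new, axis=1)
    return y_train[np.argmin(distances)]


# Toy 2-D data: class 0 clusters near (0, 0), class 1 near (5, 5).
X_train = np.array([[0.0, 0.2], [0.3, -0.1], [5.1, 4.9], [4.8, 5.2]])
y_train = np.array([0, 0, 1, 1])

print(nearest_neighbor_predict(X_train, y_train, np.array([0.1, 0.0])))  # -> 0
print(nearest_neighbor_predict(X_train, y_train, np.array([5.0, 5.0])))  # -> 1
```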

Shai Ben-David holds a PhD in mathematics from the Hebrew University in Jerusalem. He has held postdoctoral positions at the University of Toronto in both the Mathematics and CS departments. He was a professor of computer science at the Technion in Haifa, Israel. Ben-David has held visiting positions at the Australian National University and Cornell University, and since 2004 has been a professor of computer science at the University of Waterloo in Canada.

Read more

Trying to outrun the expiration of Moore’s Law.


As conventional microchip design reaches its limits, DARPA is pouring money into the specialty chips that might power tomorrow’s autonomous machines.

The coming AI revolution faces a big hurdle: today’s microchips.

In addition to opening the lab, Facebook has committed about $5.75 million to support AI research at McGill, the University of Montreal, the Montreal Institute for Learning Algorithms and the Canadian Institute for Advanced Research, the company said in a Facebook blog post on Friday. Alphabet and Microsoft have also invested in AI at McGill and the University of Montreal.

The move comes a week after IBM said it would spend $240 million on a new AI lab in partnership with the Massachusetts Institute of Technology.


Facebook will support Canadian AI research in addition to setting up a lab in Montreal.

Read more

The Facebook AI lab has developed an animated bot that learned to respond naturally to human facial movements during conversation, so much so that volunteers rated its reactions as natural as a human’s.

Most of us are able to intuitively understand human facial expressions: in conversation, millions of tiny muscle movements change our eyes, mouth, head position and more to signal to our fellow humans what we’re thinking. These unconscious movements are what make us human, but they also make it exceptionally difficult for robots to imitate us — and make those that try seem creepy, as they enter the “uncanny valley.”

Read more