
A new method developed at Cold Spring Harbor Laboratory (CSHL) uses DNA sequencing to efficiently map long-range connections between different regions of the brain. The approach dramatically reduces the cost of mapping brain-wide connections compared to traditional microscopy-based methods.

Neuroscientists need anatomical maps to understand how information flows from one region of the brain to another. “Charting the cellular connections between different parts of the brain—the connectome—can help reveal how the nervous system processes information, as well as how faulty wiring contributes to neurological and other disorders,” says Longwen Huang, a postdoctoral researcher in CSHL Professor Anthony Zador’s lab. Creating these maps has been expensive and time-consuming, demanding massive efforts that are out of reach for most research teams.

Researchers usually follow neurons’ paths using fluorescent tracers, which can highlight how individual cells branch through a tangled neural network to find and connect with their targets. But the palette of fluorescent labels suitable for this work is limited: researchers can inject different colored dyes into only two or three parts of the brain at a time, then trace the connections emanating from those regions. They can repeat this process, targeting new regions, to visualize additional connections. To generate a brain-wide map, this must be done hundreds of times, using new research animals each time.
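To make the sequencing idea concrete, here is a minimal conceptual sketch, not the CSHL team's actual pipeline: if each neuron carries a unique RNA barcode, then sequencing dissected brain regions tells you where each neuron's axons ended up, and connectivity falls out of simple counting. All region names and read counts below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical (barcode, region, read_count) tuples, as might come from
# sequencing dissected brain regions after barcoded neurons are labeled.
reads = [
    ("AACGT", "cortex",   900),  # high count: likely the soma's home region
    ("AACGT", "striatum",  40),  # lower counts: axonal projections
    ("AACGT", "thalamus",  25),
    ("GGTCA", "thalamus", 700),
    ("GGTCA", "cortex",    60),
]

counts = defaultdict(dict)
for barcode, region, n in reads:
    counts[barcode][region] = n

# Build a source -> targets projection map: call the region with the most
# reads the soma's location, and every other region a projection target.
connectome = defaultdict(set)
for barcode, per_region in counts.items():
    source = max(per_region, key=per_region.get)
    for region in per_region:
        if region != source:
            connectome[source].add(region)

for source, targets in sorted(connectome.items()):
    print(f"{source} -> {sorted(targets)}")
```

Because one sequencing run reads out thousands of barcodes at once, the per-connection cost scales very differently than injecting dyes a few regions at a time.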

No industry will be spared.


The pharmaceutical business is perhaps the only industry on the planet where getting a product from idea to market takes about a decade, costs several billion dollars, and carries roughly a 90% chance of failure. It is very different from the IT business, where only the paranoid survive; it is a business where executives need to plan decades ahead and execute. So when the revolution in artificial intelligence, fueled by credible advances in deep learning, hit in 2013–2014, pharmaceutical industry executives got interested but did not immediately jump on the bandwagon. Many pharmaceutical companies started investing heavily in internal data science R&D, but without a coordinated strategy it looked more like a re-branding exercise, with many heads of data science, digital, and AI in one organization and often in one department. And while some pharmaceutical companies invested in AI startups, no sizable acquisitions have been made to date. Most discussions with AI startups started with “show me a clinical asset in Phase III where you identified a target and generated a molecule using AI” or “how are you different from the myriad of other AI startups?”, often coming from newly-minted heads of data science strategy who, in theory, need to know the market.

However, some pharmaceutical companies have managed to demonstrate very impressive results in individual segments of drug discovery and development. For example, around 2018 AstraZeneca started publishing on generative chemistry, and by 2019 it had published several impressive papers that were noticed by the community. Several other pharmaceutical companies demonstrated notable internal tools, and Eli Lilly built an impressive AI-powered robotics lab in cooperation with a startup.

Until now, however, it was not possible to get a comprehensive overview and comparison of the major pharmaceutical companies that claim to be doing AI research and utilizing big data in preclinical and clinical development. On June 15th, an article titled “The upside of being a digital pharma player” was accepted and quietly went online in Drug Discovery Today, a reputable peer-reviewed industry journal. I was notified about the article by Google Scholar because it referenced several of our papers. I was about to discard it as just another industry perspective, but then I looked at the author list and saw a group of heavy-hitting academics, industry executives, and consultants: Alexander Schuhmacher from Reutlingen University, Alexander Gatto from Sony, Markus Hinder from Novartis, Michael Kuss from PricewaterhouseCoopers, and Oliver Gassmann from the University of St. Gallen.

Japanese researchers have created a smart face mask with a built-in speaker that can translate speech into eight different languages.

We live in a world full of technology, but until now it was a world without smart masks!

The Japanese technology company Donut Robotics has taken the initiative to create the first smart face mask, one that connects to your phone. Of course, we couldn’t have battled the coronavirus with a simple mask that still does the job of protecting us perfectly well. We as a species need to bring technology into everything, especially if it does an array of extremely important, life-saving things: using a speaker to amplify a person’s voice, converting a person’s speech into text, and then translating it into eight different languages through a smartphone app.
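The pipeline itself (speech in, text out, translations out) is easy to sketch. The following is not Donut Robotics' app; it is a rough stand-in using the off-the-shelf SpeechRecognition and deep-translator Python packages, and the eight target languages listed are my assumption, not the product's confirmed set.

```python
# Rough sketch of a mask-style voice pipeline: speech -> text -> translation.
# Requires: pip install SpeechRecognition deep-translator pyaudio
import speech_recognition as sr
from deep_translator import GoogleTranslator

# Illustrative target set; the actual eight languages may differ.
TARGET_LANGUAGES = ["en", "zh-CN", "ko", "fr", "es", "de", "vi", "th"]

recognizer = sr.Recognizer()
with sr.Microphone() as source:  # the mask would stream audio over Bluetooth instead
    print("Speak...")
    audio = recognizer.listen(source)

text = recognizer.recognize_google(audio, language="ja-JP")  # Japanese speech -> text
print("Heard:", text)

for lang in TARGET_LANGUAGES:
    translated = GoogleTranslator(source="ja", target=lang).translate(text)
    print(f"{lang}: {translated}")
```

The heavy lifting happens in cloud services on the phone's side, which is why the mask itself only needs a microphone, a speaker, and a Bluetooth link.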

No one wants to walk with a walker, but age has a way of making people compromise on their quality of life. The team behind Superflex, which spun out of SRI International in May, thinks there could be another way.

The company is building wearable robotic suits, plus other types of clothing, that can make it easier for soldiers to carry heavy loads or for elderly or disabled people to perform basic tasks. A current prototype is a soft suit that fits over most of the body. It delivers a jolt of supporting power to the legs, arms, or torso exactly when needed to reduce the burden of a load or correct for the body’s shortcomings.
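How might a suit deliver power "exactly when needed"? Superflex has not published its control scheme, so the loop below is purely illustrative: sample a load signal (say, from strain or EMG sensors) and command a short burst of assist torque only when the load crosses a threshold, rather than running the actuators continuously. The threshold and torque values are invented.

```python
import random
import time

LOAD_THRESHOLD = 0.7   # normalized load that triggers assistance (assumed)
ASSIST_TORQUE = 12.0   # hypothetical actuator command, in N*m

def read_load_sensor() -> float:
    # Stand-in for real sensor hardware: returns a normalized 0..1 load.
    return random.random()

def command_actuator(torque: float) -> None:
    # Stand-in for sending a torque command to the suit's motors.
    print(f"assist torque: {torque:.1f} N*m" if torque else "idle")

for _ in range(10):    # ten ticks of a would-be 100 Hz control loop
    load = read_load_sensor()
    command_actuator(ASSIST_TORQUE if load > LOAD_THRESHOLD else 0.0)
    time.sleep(0.01)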

A walker is a “very cost-effective” solution for people with limited mobility, but “it completely disempowers, removes dignity, removes freedom, and causes a whole host of other psychological problems,” SRI Ventures president Manish Kothari says. “Superflex’s goal is to remove all of those areas that cause psychological-type encumbrances and, ultimately, redignify the individual.”

Circa 2017


This bubbly concept car protects more than the driver; its next-generation rubber exterior can save pedestrians, too.

Traditional metal panels are replaced with soft rubber, which absorbs the impact of a collision. The car is also a shapeshifter, meaning that the rubber panels move and flex, forming a more aerodynamic shape.

A new gadget called the OpenCV AI Kit, or OAK, looks to replicate the success of Raspberry Pi and other minimal computing solutions, but for the growing fields of computer vision and 3D perception. Its new multi-camera PCBs pack a lot of capability into a small, open-source unit and are now seeking funding on Kickstarter.

The OAK devices use their cameras and onboard AI chip to perform a number of computer vision tasks, such as identifying objects, counting people, and finding distances to and between things in the frame. This info is sent out in polished, ready-to-use form.
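For a sense of what those tasks look like in code, here is a host-side sketch of the kind of workload OAK offloads to its onboard chip: detecting objects and counting people in a frame. It uses plain OpenCV's DNN module rather than the OAK hardware or SDK, and it assumes you have separately downloaded the MobileNet-SSD model files and a test image named below.

```python
import cv2

PROTOTXT = "MobileNetSSD_deploy.prototxt"    # assumed local path
WEIGHTS = "MobileNetSSD_deploy.caffemodel"   # assumed local path
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
           "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe(PROTOTXT, WEIGHTS)
frame = cv2.imread("street.jpg")             # any test image

# MobileNet-SSD expects 300x300 inputs, mean-subtracted and scaled.
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843,
                             (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()                   # shape: (1, 1, N, 7)

people = 0
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence < 0.5:
        continue
    label = CLASSES[int(detections[0, 0, i, 1])]
    if label == "person":
        people += 1
    print(f"{label}: {confidence:.2f}")
print(f"people in frame: {people}")
```

An OAK board runs this kind of inference on-device and, thanks to its stereo cameras, can also attach real-world depth to each detection, which is the part a single webcam plus host CPU cannot easily replicate.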

Having a reliable, low-cost, low-power-draw computer vision unit like this is a great boon for anyone looking to build a smart device or robot that might otherwise have required several discrete cameras and other chips (not to mention quite a bit of fiddling with software).

I will post a bunch of links to things people can do at home while under lockdown. This is one of my favorite sites. Feel free to check it out and post from it as well.

Calculus is the key to fully understanding how neural networks function. Go beyond a surface understanding of this mathematics discipline with these free course materials from MIT.
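To see why calculus is the load-bearing tool, here is a miniature worked example (my own illustration, not from the MIT materials): training a single sigmoid neuron by computing the loss gradient with the chain rule, then checking the result against a finite-difference estimate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = 2.0, 1.0          # one training example: input 2.0, target 1.0
w, b = 0.1, 0.0          # initial parameters
lr = 0.5                 # learning rate

for step in range(50):
    z = w * x + b
    a = sigmoid(z)
    loss = 0.5 * (a - y) ** 2

    # Chain rule: dL/dw = dL/da * da/dz * dz/dw
    dL_da = a - y
    da_dz = a * (1 - a)          # derivative of the sigmoid
    grad_w = dL_da * da_dz * x   # dz/dw = x
    grad_b = dL_da * da_dz       # dz/db = 1

    w -= lr * grad_w
    b -= lr * grad_b
    if step % 10 == 0:
        print(f"step {step}: loss = {loss:.6f}")

# Sanity check: finite differences should match the chain-rule gradient.
eps = 1e-6
a = sigmoid(w * x + b)
analytic = (a - y) * a * (1 - a) * x
numeric = (0.5 * (sigmoid((w + eps) * x + b) - y) ** 2
           - 0.5 * (sigmoid((w - eps) * x + b) - y) ** 2) / (2 * eps)
print(f"analytic dL/dw = {analytic:.8f}, numeric = {numeric:.8f}")
```

Backpropagation in a full network is exactly this chain-rule computation repeated layer by layer, which is why the derivatives covered in a first calculus course carry you most of the way.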