
Brain organoids have become widely used systems for 3D modeling of human brain development, evolution, and disease. To make full use of these model systems, researchers have developed a growing toolkit of genetic modification techniques. These techniques can be applied to mature brain organoids or to the preceding embryoid bodies (EBs) and founding cells. This review describes techniques used for transient and stable genetic modification of brain organoids and discusses their current use and respective advantages and disadvantages. Transient approaches include adeno-associated virus (AAV) and electroporation-based techniques, whereas stable genetic modification approaches make use of lentivirus (including viral stamping), transposon, and CRISPR/Cas9 systems. Finally, an outlook on likely future developments and applications regarding genetic modification of brain organoids is presented.

The development of brain organoids (Kadoshima et al., 2013; Lancaster et al., 2013) has opened up new ways to study brain development and evolution as well as neurodevelopmental disorders. Brain organoids are multicellular 3D structures that mimic certain aspects of the cytoarchitecture and cell-type composition of particular brain regions over a particular developmental time window (Heide et al., 2018). These structures are generated by differentiation of induced pluripotent stem cells (iPSCs) or embryonic stem cells (ESCs) into embryoid bodies, followed by, or combined with, neural induction (Kadoshima et al., 2013; Lancaster et al., 2013). In principle, two different classes of brain organoid protocols can be distinguished: (i) self-patterning protocols, which produce whole-brain organoids; and (ii) pre-patterning protocols, which produce brain region-specific organoids (Heide et al., 2018).

It is with sadness — and deep appreciation of my friend and colleague — that I must report the passing of Vernor Vinge.


The technological singularity —or simply the singularity[1] —is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]
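Good's "runaway reaction" of self-improvement cycles can be illustrated with a toy simulation. The following sketch is purely illustrative and not from any source; the function name, the `gain_factor` parameter, and the multiplicative update rule are all assumptions chosen to show how compounding self-improvement produces super-exponential growth:

```python
# Toy sketch (illustrative only) of the "intelligence explosion" feedback
# loop: each generation's improvement is proportional to the agent's
# current capability, so gains compound faster and faster.

def intelligence_explosion(capability: float, gain_factor: float,
                           generations: int) -> list[float]:
    """Return the capability level after each self-improvement cycle.

    Each cycle multiplies capability by (1 + gain_factor * capability):
    the more capable the agent, the larger its next improvement.
    """
    history = [capability]
    for _ in range(generations):
        capability *= 1 + gain_factor * capability
        history.append(capability)
    return history

levels = intelligence_explosion(capability=1.0, gain_factor=0.1, generations=10)
# Growth is slow at first, then accelerates sharply in later generations.
print([round(x, 1) for x in levels])
```

Because the per-cycle growth ratio itself rises with capability, the curve is super-exponential: each new generation appears "more and more rapidly," which is the qualitative behavior the hypothesis describes.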

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] In 1958, Stanislaw Ulam reported an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

“Gigantic aluminum spiders” might sound like the stuff of nightmares or an antagonist in an anime series. However, for one Norwegian company, they could be the future of the wind energy industry.

WindSpider, a tech company that focuses on onshore and offshore wind turbines, has developed a new self-erecting crane system that could revolutionize the way turbines are built.

The WindSpider crane uses the tower of the wind turbine itself as part of the crane while performing installation, maintenance, repowering, and decommissioning of bottom-fixed and floating offshore wind turbines. This allows operations to be performed on floating turbines on site at sea, with a lifting capacity that can be scaled to over 1,500 tons and no height limitations.

The first patient of Elon Musk’s Neuralink has been presented to the public. Noland Arbaugh had all but given up playing Civilization VI ever since a diving accident dislocated two vertebrae in his cervical spinal cord, leaving him paralyzed from the shoulders down.

Confined to his wheelchair, the 29-year-old American is totally dependent on the care of his parents, who need to shift his weight every few hours to avoid pressure sores from sitting too long in the same position.

Moving a cursor on a display, moreover, required a mouth stick, a specialized assistive device used by quadriplegics.

Open any astronomy textbook to the section on white dwarf stars and you’ll likely learn that they are “dead stars” that continuously cool down over time. New research published in Nature is challenging this theory, with the University of Victoria (UVic) and its partners using data from the European Space Agency’s Gaia satellite to reveal why a population of white dwarf stars stopped cooling for more than eight billion years.

“We discovered the classical picture of all white dwarfs being dead stars is incomplete,” says Simon Blouin, co-principal investigator and Canadian Institute of Theoretical Astrophysics National Fellow at UVic.

“For these white dwarfs to stop cooling, they must have some way of generating extra energy. We weren’t sure how this was happening, but now we have an explanation for the phenomenon.”

Biosensing technology developed by engineers has made it possible to create gene test strips that rival conventional lab-based tests in quality. When the pandemic started, people who felt unwell had to join long queues for lab-based PCR tests and then wait for two days to learn if they had the COVID-19 virus or not.

In addition to significant inconvenience, a major drawback was the substantial and expensive logistics needed for such laboratory tests, while testing delays increased the risk of disease spread.

Now a team of biomedical engineers at UNSW Sydney has developed a new technology offering test strips that are just as accurate as lab-based detection. And according to research published today in Nature Communications, it’s not just public health that the technology may benefit.

A groundbreaking nanosurgical tool — about 500 times thinner than a human hair — could be transformative for cancer research, giving insights into treatment resistance that no other technology has been able to provide, according to a new study.

The high-tech double-barrel nanopipette, developed by University of Leeds scientists and applied to the global medical challenge of cancer, has for the first time enabled researchers to see how individual living cancer cells react to treatment and change over time, providing vital understanding that could help doctors develop more effective cancer medication.

The tool has two nanoscopic needles, meaning it can simultaneously inject and extract a sample from the same cell, expanding its potential uses. And the platform’s high level of semi-automation has sped up the process dramatically, enabling scientists to extract data from many more individual cells, with far greater accuracy and efficiency than previously possible, the study shows.