
John Danaher, Senior Lecturer in Law at the National University of Ireland (NUI) Galway:

“Understanding Techno-Moral Revolutions”

Talk held on August 24, 2021, for the Colloquium of the Center for Humans and Machines at the Max Planck Institute for Human Development, Berlin.

It is common to use ethical norms and standards to critically evaluate and regulate the development and use of emerging technologies like AI and Robotics. Indeed, the past few years have seen something of an explosion of interest in the ethical scrutiny of technology. What this emerging field of machine ethics tends to overlook, however, is the potential to use the development of novel technologies to critically evaluate our existing ethical norms and standards. History teaches us that social morality (the set of moral beliefs and practices shared within a given society) changes over time. Technology has sometimes played a crucial role in facilitating these historical moral revolutions. How will it do so in the future? Can we provide any meaningful answers to this question? This talk will argue that we can, and will outline several tools for thinking about the mechanics of technologically mediated moral revolutions.

Summary: After discovering the importance of cell metabolism in neurogenesis, researchers were able to increase the number of neurons in the brains of adult and elderly mice.

Source: University of Geneva.

Some areas of the adult brain contain quiescent, or dormant, neural stem cells that can potentially be reactivated to form new neurons. However, the transition from quiescence to proliferation is still poorly understood.

The Big Bang is the name we have given to the moment at which the Universe began. While the idea is well known, it is often badly misunderstood. Even people with a good grasp of science have misconceptions about it. For instance, a common question is, “Where did the Big Bang happen?” And the answer to that question is a surprising one. So, let’s dive into it and try to understand where the misunderstanding arises.

When people are told of the Big Bang, they are commonly told that “all of the mass of the universe was packed into a point with zero volume called a singularity.” The singularity then “exploded,” expanding and cooling and eventually resulting in the Universe we see today. People draw from their own experience and analogize the Big Bang with something like a firecracker or a grenade — an object that sits in a location, then explodes, dispersing debris into existing space. This is a completely natural and reasonable mental image. It is also completely wrong.

The theory that describes the Big Bang is Einstein’s general theory of relativity. In it, Einstein describes gravity as the very shape of space as it bends and stretches. Near a star or planet, space is distorted; far from any celestial body, space is flat. If space is malleable, as the theory says it is, it can also be compressed or stretched.
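To make "space itself can stretch" concrete, it helps to write down the standard textbook formula (not something the article above spells out): the Friedmann-Lemaître-Robertson-Walker line element for a spatially flat universe, in which a single scale factor a(t) multiplies every spatial distance:

ds^2 = -c^2\,dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right)

On this picture the Big Bang is not an explosion at a point in pre-existing space but the limit a(t) \to 0, in which all distances shrink together. The expansion has no centre, which is why the honest answer to "Where did the Big Bang happen?" is: everywhere.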

Researchers have created a roadmap for how to build tiny biocomputers out of human neurons or brain cells.

“We can use a culture of the human brain to show something which is not just living cells. We can show that this is learning, this is memorising, this is making decisions; it is possibly, even at some point, ‘sentient’ in the sense that it can sense its environment,” Professor Thomas Hartung, a Johns Hopkins ‘organoids’ researcher, told Cosmos.

“We are the explorers who have stumbled into a completely new field.”

So much here I never knew:


Dr. Axel Montagne is a chancellor’s fellow and group leader at the UK Dementia Research Institute at the University of Edinburgh Centre for Clinical Brain Sciences. His group aims to understand how, when, and where critical components of the blood-brain barrier become dysfunctional preceding dementia and in the earliest stages of age-related cognitive decline. With this knowledge, they hope to develop precise treatments targeting brain vasculature to protect brain function.

More importantly, his work and that of his colleagues provide a critical lens through which to view the contributions of vascular dysfunction (or, conversely, vascular health, if we choose to preserve it) as a common thread in dementia and neurodegeneration.

There is this picture—you may have seen it. It is black and white and has two silhouettes facing one another. Or maybe you see the black vase with a white background. But now, you likely see both.

It is an example of a visual illusion that reminds us to consider what we did not see at first glance, what we may not be able to see, or what our experience has taught us to know: there is always more to the picture, or maybe even a different image to consider altogether. Researchers are finding that the process in our brains that allows us to see these visual distinctions may not be happening the same way in the brains of children with autism. They may be seeing these illusions differently.

“How our brain puts together pieces of an object or visual scene is important in helping us interact with our environments,” said Emily Knight, MD, Ph.D., assistant professor of Neuroscience and Pediatrics at the University of Rochester Medical Center, and first author on a study out today in the Journal of Neuroscience. “When we view an object or picture, our brains use processes that consider our experience and contextual information to help anticipate what comes next, address ambiguity, and fill in the missing information.”

Neuroscience research just got a little bit easier, thanks to the release of tens of thousands of images of fruit fly brain neurons generated by Janelia’s FlyLight Project Team.

Over eight years, the FlyLight Project Team and collaborators dissected, labeled, and imaged the neurons of more than 74,000 fruit fly brains, taken from more than 5,000 different genetically modified fly strains.

Now, these images are being made freely available, enabling scientists to quickly and easily find the neurons they need to test theories about how the brain works.

Trying to make computers more like human brains isn’t a new phenomenon. However, a team of researchers from Johns Hopkins University argues that there could be many benefits in taking this concept a bit more literally by using actual neurons, though there are some hurdles to clear before we get there.

In a recent paper, the team laid out a roadmap of what’s needed before we can create biocomputers powered by human brain cells (not taken from human brains, though). Further, according to one of the researchers, there are some clear benefits the proposed “organoid intelligence” would have over current computers.

“We have always tried to make our computers more brain-like,” Thomas Hartung, a researcher at Johns Hopkins University’s Environmental Health and Engineering department and one of the paper’s authors, told Ars. “At least theoretically, the brain is essentially unmatched as a computer.”