
Over the past 100 million years, mammals have adapted to nearly every environment on Earth. Scientists with the Zoonomia Project have been cataloging the diversity in mammalian genomes by comparing DNA sequences from 240 species that exist today, from the aardvark and the African savanna elephant to the yellow-spotted rock hyrax and the zebu.

This week, in several papers in a special issue of Science, the Zoonomia team has demonstrated how this comparative dataset can not only shed light on how certain species achieve extraordinary feats, but also help scientists better understand the parts of our genome that are functional and how they might influence health and disease.

In the new studies, the researchers identified regions of the genomes, sometimes just single letters of DNA, that are most conserved, or unchanged, across mammalian species and millions of years of evolution—regions that are likely biologically important. They also found part of the genetic basis for uncommon mammalian traits such as the ability to hibernate or sniff out faint scents from miles away. And they pinpointed species that may be particularly susceptible to extinction, as well as genetic variants that are more likely to play causal roles in rare and common human diseases.

Even in today’s well-researched world, death remains one of the great unknowns. British scientists set out to study it up close—and found that the color of death is a faint blue.

British scientists got a firsthand look at what it’s like to die by closely observing a worm in their experiments. As the animal dies, its cells perish one by one, setting off a chain reaction that destroys cell connections and ultimately leads to the creature’s death.

The faint blue glow is produced by necrosis, a form of cell death driven by calcium within the cells, according to a study published in the journal PLoS Biology. Professor David Gems of University College London oversaw the study.

In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it. De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to machine learning of AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of AI ethics think tank The Future Society. De Kai is also creator of one of Hong Kong’s best known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Astronomers using data from NASA’s Chandra X-ray Observatory and other telescopes have identified a new threat to life on planets like Earth: a phase during which intense X-rays from exploded stars can affect planets over 100 light-years away. This result, as outlined in our latest press release, has implications for the study of exoplanets and their habitability.

This newly found threat comes from a supernova’s blast wave striking dense gas surrounding the exploded star, as depicted in the upper right of our artist’s impression. When this impact occurs it can produce a large dose of X-rays that reaches an Earth-like planet (shown in the lower left, illuminated by its host star out of view to the right) months to years after the explosion and may last for decades. Such intense exposure may trigger an extinction event on the planet.

A new study reporting this threat is based on X-ray observations of 31 supernovae and their aftermath—mostly from NASA’s Chandra X-ray Observatory, Swift and NuSTAR missions, and ESA’s XMM-Newton—which show that planets can be subjected to lethal doses of radiation when located as much as about 160 light-years away. Four of the supernovae in the study (SN 1979C, SN 1987A, SN 2010jl, and SN 1994I) are shown in composite images containing Chandra data in the supplemental image.

Some of Daniel Schmachtenberger’s friends say you can be “Schmachtenberged”. It means realising that we are on our way to self-destruction as a civilisation, on a global level. This is a topic often addressed by the American philosopher and strategist, in a world with powerful weapons and technologies and a lack of efficient governance. But, as the catastrophic script has already started to be written, is there still hope? And how do we start reversing the scenario?

After lightning struck a tree in New Port Richey, Florida, a team of scientists from the University of South Florida (USF) discovered that the strike led to the formation of a new phosphorus material in a rock. This is the first time such a material has been found in solid form on Earth, and it could represent a member of a new mineral group.

“We have never seen this material occur naturally on Earth – minerals similar to it can be found in meteorites and space, but we’ve never seen this exact material anywhere,” said study lead author Matthew Pasek, a geoscientist at USF.

According to the researchers, high-energy events such as lightning can sometimes cause unique chemical reactions which, in this particular case, have led to the formation of a new material that seems to be transitional between space minerals and minerals found on Earth.

We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains.


Nick Bostrom, a professor at Oxford University and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that in this century, we will create the first general intelligence that will be smarter than humans. He sees this as the most important thing humanity will ever do, but it also comes with an enormous responsibility.

Bostrom notes that there are existential risks associated with the transition to the machine intelligence era, such as the possibility of a superintelligence that overrides human civilization with its own value structures. In addition, there is the question of how to ensure that conscious digital minds are treated well. However, if we succeed in ensuring the well-being of artificial intelligence, we could have vastly better tools for dealing with everything from diseases to poverty.

Ultimately, Bostrom believes that the development of machine superintelligence is crucial for a truly great future.


The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs predicted 300 million jobs would be lost, while the likes of Steve Wozniak and Elon Musk asked for AI development to be paused (although pointedly not the development of autonomous driving).

Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, with the sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who recently said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.


As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?

An exploration of the Vulnerable World Hypothesis solution to the Fermi Paradox.

An exploration of the possibility of finding fossils of alien origin right here on the surface of the Earth.
