
A recent study on the effects of dark energy suggests the expansion of the universe could slow down and eventually reverse, resulting in a “Big Crunch” billions of years from now. Such a theory had been proposed in the past but was rejected after observations of accelerating expansion, attributed to dark energy.


The universe may stop expanding in just 100 million years if dark energy decays over time, a new study suggests.

The Alcor Life Extension Foundation is celebrating its 50th anniversary this year. To mark the occasion, we are holding a conference on June 3–5, 2022, at the Scottsdale Resort in Scottsdale, Arizona.


The conference itself will be Alcor’s first major in-person gathering in seven years, so we’re going to “go big.” We expect members, prospective members, and others interested in life extension and the far future to turn out enthusiastically. We hope not only that our attendees will enjoy hearing from and interacting with you, but also that you will find the experience rewarding. There is no organization quite like Alcor, after all, and very few opportunities to explore cryonics and its implications for society now and in the far future.

Ex-NASA astronaut says we must fix Earth’s big problems before we colonize other planets.


Garan tells Inverse that humans should seek to colonize distant planets, but he acknowledges the tremendous amount of work that needs to be done on Earth first.

“We need to spread human presence throughout the Solar System and beyond, but we need to do it as ambassadors of a thriving planet,” Garan says. “We can’t do it as refugees escaping environmental disaster.”

The comments come as figures like SpaceX CEO Elon Musk and Blue Origin founder Jeff Bezos call on humanity to establish permanent settlements in space. Musk has repeatedly claimed he wants to establish a city on Mars by 2050, while Bezos wants to build giant, floating cities in Earth’s orbit.

Reimagining A Healthier Future for All — Dr. Pat Verduin PhD, Chief Technology Officer, Colgate, discussing the microbiome, skin and oral care, and healthy aging from a CPG perspective.


Dr. Patricia Verduin, PhD (https://www.colgatepalmolive.com/en-us/snippet/2021/circle-c…ia-verduin), is Chief Technology Officer for the Colgate-Palmolive Company, where she provides leadership for product innovation, clinical science, and long-term research and development across its Global Technology Centers’ Research & Development pipeline.

Dr. Verduin joined Colgate-Palmolive in 2007 as Vice President, Global R&D. Previously she served as Vice President, Scientific Affairs, for the Grocery Manufacturers Association, and from 2000 to 2006 she held the position of Vice President, Research & Development, at ConAgra Foods.

From search engines to voice assistants, computers are getting better at understanding what we mean. That’s thanks to language-processing programs that make sense of a staggering number of words, without ever being told explicitly what those words mean. Such programs infer meaning instead through statistics—and a new study reveals that this computational approach can assign many kinds of information to a single word, just like the human brain.

The study, published April 14 in the journal Nature Human Behaviour, was co-led by Gabriel Grand, a graduate student in electrical engineering and computer science who is affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory, and Idan Blank PhD ’16, an assistant professor at the University of California at Los Angeles. The work was supervised by McGovern Institute for Brain Research investigator Ev Fedorenko, a cognitive neuroscientist who studies how the human brain uses and understands language, and by Francisco Pereira at the National Institute of Mental Health. Fedorenko says the rich knowledge her team was able to find within computational language models demonstrates just how much can be learned about the world through language alone.

The research team began its analysis of statistics-based language processing models in 2015, when the approach was new. Such models derive meaning by analyzing how often pairs of words co-occur in texts and using those relationships to assess the similarities of words’ meanings. For example, such a program might conclude that “bread” and “apple” are more similar to one another than they are to “notebook,” because “bread” and “apple” are often found in proximity to words like “eat” or “snack,” whereas “notebook” is not.
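To make the co-occurrence idea concrete, here is a minimal Python sketch, not the models analyzed in the study: the toy sentences, the sentence-sized context window, and the cosine-similarity comparison are all illustrative assumptions chosen to mirror the bread/apple/notebook example.

```python
import math
from collections import Counter, defaultdict
from itertools import combinations

# Tiny made-up corpus; real models are trained on billions of words.
corpus = [
    "eat bread for a snack",
    "eat an apple as a snack",
    "bread and apple at lunch",
    "write notes in a notebook",
    "a notebook for class notes",
]

# Count how often each pair of distinct words co-occurs in a sentence.
cooccurrence = defaultdict(Counter)
for sentence in corpus:
    for w1, w2 in combinations(sorted(set(sentence.split())), 2):
        cooccurrence[w1][w2] += 1
        cooccurrence[w2][w1] += 1

def cosine_similarity(u: Counter, v: Counter) -> float:
    """Cosine of the angle between two sparse co-occurrence count vectors."""
    dot = sum(count * v[word] for word, count in u.items())
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# "bread" and "apple" share contexts like "eat" and "snack", so they
# end up more similar to each other than either is to "notebook".
print(cosine_similarity(cooccurrence["bread"], cooccurrence["apple"]))
print(cosine_similarity(cooccurrence["bread"], cooccurrence["notebook"]))
```

On this toy corpus the first similarity comes out higher than the second, just as in the example above; the models the researchers studied build far higher-dimensional vectors from vastly larger bodies of text.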