Blog

Apr 11, 2022

Astronomers detect a powerful space laser that is 5 billion light-years away

Posted by in category: space

An international team of astronomers led by Dr. Marcin Glowacki, who previously worked at the Inter-University Institute for Data Intensive Astronomy and the University of the Western Cape in South Africa, has made an impressive discovery from 5 billion light-years away, according to a statement released by the institution on Thursday.

Using the MeerKAT telescope in South Africa, the researchers discovered a powerful radio-wave laser, called a ‘megamaser’, that is the most distant megamaser of its kind ever detected. Its light has traveled 58 thousand billion billion (58 followed by 21 zeros) kilometers to Earth.

When galaxies collide…

Apr 11, 2022

Innovative agricultural photovoltaic projects and technology

Posted by in categories: food, solar power, sustainability

Agricultural PV (or agrivoltaics) is the simultaneous use of land for both agriculture and solar power generation. This year's Intersolar Europe in Munich will put a major focus on this topic.

Apr 11, 2022

Hypersonic Aircraft Planned to Connect Tokyo to Los Angeles in an Hour

Posted by in category: transportation

Hypersonic air travel promises point-to-point passenger and freight connectivity between continents. The market is heating up.

Apr 11, 2022

Top 5 Metaverse Jobs that the World Should Get Prepared For

Posted by in categories: employment, internet

The metaverse is the internet's next big thing. Soon, people will be preparing for metaverse jobs, which could prove rewarding for tech enthusiasts interested in this domain.

Apr 11, 2022

On using the multiverse to avoid the paradoxes of time travel

Posted by in categories: cosmology, time travel

John Abbruzzese, On using the multiverse to avoid the paradoxes of time travel, Analysis, Volume 61, Issue 1, January 2001, Pages 36–38, https://doi.org/10.1093/analys/61.1.36.

Apr 11, 2022

GitHub can now alert of supply-chain bugs in new dependencies

Posted by in category: security

GitHub can now block and alert you of pull requests that introduce new dependencies impacted by known supply chain vulnerabilities.

This is achieved by adding the new Dependency Review GitHub Action to an existing workflow in one of your projects. You can do it through your repository’s Actions tab under Security or straight from the GitHub Marketplace.

Behind the scenes, an API endpoint reports the security impact of dependency changes at every pull request, so you can review them before they are added to your repository.
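As a sketch, a workflow enabling this check could look like the following. The file path and the version tag are illustrative; pin the version of `actions/dependency-review-action` that matches your setup:

```yaml
# .github/workflows/dependency-review.yml -- minimal sketch
name: Dependency Review
on: [pull_request]

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Dependency Review
        uses: actions/dependency-review-action@v2
```

With this in place, pull requests that add a dependency with a known vulnerability fail the check and surface the advisory in the PR.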

Apr 11, 2022

AI maps psychedelic ‘trip’ experiences to regions of the brain — opening new route to psychiatric treatments

Posted by in categories: biotech/medical, robotics/AI

Study maps psychedelic-induced changes in consciousness to specific regions of the brain.

Apr 11, 2022

Why Does Gravity Travel at the Speed of Light?

Posted by in category: physics

As with so much in physics, it has to do with Einstein’s theory of general relativity.

Apr 11, 2022

Chronology protection conjecture

Posted by in category: futurism

It has been suggested that an advanced civilization might have the technology to warp spacetime so that closed timelike curves would appear, allowing travel into the past. This paper examines this possibility in the case that the causality violations appear in a finite region of spacetime without curvature singularities. There will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon. This shows that one cannot create closed timelike curves with finite lengths of cosmic string.

Apr 11, 2022

Google AI Researchers Propose a Meta-Algorithm, Jump Start Reinforcement Learning, That Uses Prior Policies to Create a Learning Curriculum That Improves Performance

Posted by in categories: information science, policy, robotics/AI

In the field of artificial intelligence, reinforcement learning (RL) is a machine-learning strategy that rewards desirable behaviors and penalizes undesirable ones. Through trial and error, an agent perceives its surroundings and acts accordingly, receiving feedback on what works. However, learning rules from scratch in settings with hard exploration problems is a major challenge in RL. Because the agent receives no intermediate rewards, it cannot tell how close it is to completing the goal, so it must explore the state space at random until it happens to succeed. Given the length of such tasks and the level of precision required, success by chance is highly unlikely.
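A toy illustration of the sparse-reward problem described above: on a chain of states with reward only at the far end, a uniformly random policy succeeds ever more rarely as the chain grows. The environment and numbers here are illustrative, not from the article:

```python
import random

def random_rollout(n, max_steps):
    """Random walk on states 0..n-1 (reflecting at 0); success iff the
    goal state n-1 is reached within the step budget."""
    s = 0
    for _ in range(max_steps):
        s = max(0, min(n - 1, s + random.choice((-1, 1))))
        if s == n - 1:
            return True
    return False

def success_rate(n, trials=5000):
    # Step budget grows linearly with n, but a random walk's hitting time
    # grows roughly quadratically, so the success rate collapses as n grows.
    return sum(random_rollout(n, 4 * n) for _ in range(trials)) / trials

random.seed(0)
for n in (5, 10, 20):
    print(f"chain length {n:2d}: random-exploration success rate {success_rate(n):.3f}")
```

With no reward signal along the way, nothing steers the agent toward the goal; this is the regime where prior policies become valuable.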

Rather than exploring the state space at random, the agent should exploit prior information. Such prior knowledge helps it determine which states of the environment are desirable and worth investigating further. Offline data collected from human demonstrations, programmed policies, or other RL agents can be used to train a policy that then initializes a new RL policy. When neural networks represent the policies, this amounts to copying the pre-trained policy's network into the new RL policy. However, naively initializing a new RL policy this way frequently fails, especially for value-based RL approaches.

Google AI researchers have developed a meta-algorithm that leverages a pre-existing policy to initialize any RL algorithm. Jump-Start Reinforcement Learning (JSRL) uses two policies to learn tasks: a guide policy and an exploration policy. The exploration policy is an RL policy trained online on the agent's new experiences in the environment, while the guide policy is any pre-existing policy that is not modified during online training. JSRL creates a learning curriculum by rolling in with the guide policy and then handing control to the self-improving exploration policy, yielding results comparable to or better than competitive IL+RL approaches.
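The roll-in-then-hand-off idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the chain task, the always-move-right guide, the tabular Q-learning exploration policy, and the simple curriculum (shrink the guide's horizon h after each success) are all assumptions made for the example.

```python
# Minimal sketch of Jump-Start RL (JSRL) on a toy chain task.
import random

class ChainEnv:
    """Agent starts at state 0 and must reach state n-1; reward only at the goal."""
    def __init__(self, n=10):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):  # a: 0 = left, 1 = right
        self.s = max(0, min(self.n - 1, self.s + (1 if a == 1 else -1)))
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

def guide_policy(s):
    return 1  # pre-existing policy: always move right

def jsrl_train(env, episodes=2000, eps=0.1, alpha=0.5, gamma=0.95):
    Q = [[0.0, 0.0] for _ in range(env.n)]
    h = env.n - 1  # the guide controls the first h steps of each episode
    for _ in range(episodes):
        s, done, t = env.reset(), False, 0
        while not done and t < 4 * env.n:
            if t < h:
                a = guide_policy(s)  # roll in with the guide policy
            else:                    # then hand off to the exploration policy
                a = random.randrange(2) if random.random() < eps else \
                    max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = env.step(a)
            # Q-learning update on every transition, whoever chose the action
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, t = s2, t + 1
        # curriculum: after each success, hand the exploration policy one more step
        if done and h > 0:
            h -= 1
    return Q

random.seed(0)
env = ChainEnv(10)
Q = jsrl_train(env)
greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(env.n - 1)]
print(greedy)  # the learned greedy policy should move right along the chain
```

The guide delivers the agent near the goal early on, so reward backs up into Q from the first episodes; as h shrinks, the exploration policy takes over ever more of the trajectory, which mirrors the curriculum structure JSRL describes.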