
Apart from the asteroid that wiped out the dinosaurs 65 million years ago, there aren’t many connections between space and dinosaurs outside of the imagination. But that all changed when NASA research scientist Jessie Christiansen brought the two together in an animation on social media this month.

For the past decade, Christiansen has studied planet occurrence rates, that is, how often and what kinds of planets occur in the galaxy, drawing on data from exoplanet hunters such as NASA’s Kepler, K2 and TESS missions.

During a stargazing party at the California Institute of Technology, Christiansen was explaining to the skywatchers how young the stars they were observing were. The group was looking at the Pleiades, a bright cluster containing some of the youngest stars in our sky.

At this year’s Intel AI Summit, the chipmaker demonstrated its first-generation Neural Network Processors (NNP): NNP-T for training and NNP-I for inference. Both product lines are now in production and are being delivered to initial customers, two of which, Facebook and Baidu, showed up at the event to laud the new chippery.

The purpose-built NNP devices represent Intel’s deepest thrust into the AI market thus far, challenging Nvidia, AMD, and an array of startups aimed at customers who are deploying specialized silicon for artificial intelligence. In the case of the NNP products, that customer base is anchored by hyperscale companies – Google, Facebook, Amazon, and so on – whose businesses are now all powered by artificial intelligence.

Naveen Rao, corporate vice president and general manager of the Artificial Intelligence Products Group at Intel, who presented the opening address at the AI Summit, says that the company’s AI solutions are expected to generate more than $3.5 billion in revenue in 2019. Although Rao didn’t break that out into sales of specific products, presumably it includes everything that has AI infused in the silicon. Currently, that encompasses nearly the entire Intel processor portfolio, from the Xeon and Core CPUs, to the Altera FPGA products, to the Movidius computer vision chips, and now the NNP-I and NNP-T product lines. (Obviously, that figure can only include the portion of Xeon and Core revenue that is actually driven by AI.)

Every 15 minutes, someone in the United States dies of a superbug that has learned to outsmart even our most sophisticated antibiotics, according to a new report from the US Centers for Disease Control and Prevention.

That’s about 35,000 deaths each year from drug-resistant infections, according to the landmark report.
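The two figures are consistent: one death every 15 minutes works out to

\[
\frac{24 \times 60}{15} = 96 \ \text{deaths per day}, \qquad 96 \times 365 \approx 35{,}000 \ \text{deaths per year}.
\]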

The report places five drug-resistant superbugs on the CDC’s “urgent threat” list — two more germs than were on the CDC’s list in 2013, the last time the agency issued a report on antibiotic resistance.

Reinforcement learning (RL) is a widely used machine-learning technique that entails training AI agents or robots using a system of reward and punishment. So far, researchers in the field of robotics have primarily applied RL techniques in tasks that are completed over relatively short periods of time, such as moving forward or grasping objects.
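The reward-and-punishment loop can be made concrete with a minimal tabular Q-learning sketch (a generic textbook example, not the method from the study below; the corridor environment, reward values and hyperparameters are all invented for illustration):

import random

# Minimal tabular Q-learning on a toy corridor (hypothetical environment,
# purely to illustrate reward-and-punishment training).
N_STATES = 6          # cells 0..5; cell 5 is the goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Reward (+1) for reaching the goal, small punishment (-0.01) otherwise.
        r = 1.0 if s_next == N_STATES - 1 else -0.01
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        # Q-learning update: move the estimate toward reward + discounted future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right (+1) from every cell,
# since that is the only way to collect the +1 reward.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})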

A team of researchers at Google and Berkeley AI Research has recently developed a new approach that combines RL with learning by imitation, a process called relay policy learning. This approach, introduced in a paper prepublished on arXiv and presented at the Conference on Robot Learning (CoRL) 2019 in Osaka, can be used to train artificial agents to tackle multi-stage and long-horizon tasks, such as object manipulation tasks that span over longer periods of time.

“Our research originated from many, mostly unsuccessful, experiments with very long tasks using RL,” Abhishek Gupta, one of the researchers who carried out the study, told TechXplore. “Today, RL in robotics is mostly applied to tasks that can be accomplished in a short span of time, such as grasping, pushing objects, walking forward, etc. While these applications have a lot of value, our goal was to apply reinforcement learning to tasks that require multiple sub-objectives and operate on much longer timescales, such as setting a table or cleaning a kitchen.”
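The broad pattern the paper builds on, pretraining a policy on demonstrations and then fine-tuning it with reward, can be sketched as follows. This is a generic illustration, not the relay policy learning algorithm itself; the demonstration data, reward function and linear policy here are all placeholders:

import numpy as np

# Generic "imitate, then fine-tune with reward" pattern (illustration only).
rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2
W = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))   # linear policy weights

# Stage 1: behavioral cloning on (state, action) demonstration pairs.
# Random arrays stand in for human teleoperation demonstrations.
demos = [(rng.normal(size=STATE_DIM), rng.normal(size=ACTION_DIM))
         for _ in range(200)]
for _ in range(50):
    for s, a in demos:
        # Gradient step on the squared imitation error ||W s - a||^2.
        W -= 0.01 * np.outer(W @ s - a, s)

# Stage 2: reward-driven fine-tuning via simple random search:
# keep weight perturbations that improve a (placeholder) episode return.
def episode_return(weights):
    return -float(np.linalg.norm(weights))   # stand-in for a real task reward

for _ in range(100):
    noise = rng.normal(scale=0.01, size=W.shape)
    if episode_return(W + noise) > episode_return(W):
        W += noise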

How do we find other planets?
For life in the universe to be abundant, planets must be abundant. But planets are hard to detect because they are small, and much fainter than the stars they orbit.
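One widely used workaround, employed by the Kepler, K2 and TESS missions mentioned above, is the transit method: instead of trying to see the planet directly, astronomers watch for the tiny, periodic dip in a star's brightness as a planet crosses its face. The fractional dip is roughly the squared ratio of planet to star radius; for an Earth-size planet transiting a Sun-size star,

\[
\frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^2 = \left(\frac{6371\ \text{km}}{696{,}000\ \text{km}}\right)^2 \approx 8.4 \times 10^{-5},
\]

a brightness drop of less than 0.01 percent, which is why such precise, space-based photometry is needed.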

How does life begin?
Scientists do not yet know how the first living things arose on Earth. The geological record shows that life appeared on Earth almost as soon as the young planet was cool and stable enough for living things to survive. This suggests that life may exist wherever conditions allow it.

Light can be sent in different directions, and usually it can also travel back along the same path. Physicists from the University of Bonn and the University of Cologne have, however, succeeded in creating a new one-way street for light. They cool photons down into a Bose-Einstein condensate, which causes the light to collect in optical “valleys” from which it can no longer return. These findings from basic research could also be of interest for future quantum communication. The results are published in Science.

A beam is usually divided by directing it onto a partially reflecting mirror: part of the light is then reflected back to create the mirror image, while the rest passes through the mirror. “However, this process can be turned around if the experimental set-up is reversed,” says Prof. Dr. Martin Weitz from the Institute of Applied Physics at the University of Bonn. If the reflected light and the part of the light that passed through the mirror are sent back in the opposite direction, the original light beam can be reconstructed.
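The reversibility Weitz describes reflects the fact that an ideal lossless beam splitter acts as a unitary transformation on the two light modes, so running the optics in reverse undoes the split. In one common convention for a 50/50 splitter,

\[
B = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}, \qquad B^\dagger B = I,
\]

so feeding the reflected and transmitted beams back through the mirror recombines them into the original beam.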

Weitz investigates exotic quantum states of light. Together with his team and Prof. Dr. Achim Rosch from the Institute for Theoretical Physics at the University of Cologne, he was looking for a new method to generate optical one-way streets by cooling the photons: because of the photons’ lower energy, the light should collect in valleys and thereby be divided irreversibly. For this purpose the physicists used a Bose-Einstein condensate made of photons, which Weitz first achieved in 2010, becoming the first to create such a “super-photon.”

Alternative facts are spreading like a virus across society. Now it seems they have even infected science, at least the quantum realm. This may seem counterintuitive. The scientific method is, after all, founded on the reliable notions of observation, measurement and repeatability. A fact, as established by a measurement, should be objective, such that all observers can agree on it.

But in a paper recently published in Science Advances, we show that in the micro-world of atoms and particles that is governed by the strange rules of quantum mechanics, two different observers are entitled to their own facts. In other words, according to our best theory of the building blocks of nature itself, facts can actually be subjective.

Observers are powerful players in the quantum world. According to the theory, particles can be in several places or states at once, a condition called a superposition. But oddly, this is only the case when they aren’t observed. The second you observe a quantum system, it picks a specific location or state, breaking the superposition. The fact that nature behaves this way has been demonstrated many times in the lab, for example in the famous double-slit experiment.
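A minimal numerical illustration of superposition and measurement (a generic single-qubit example under the standard Born rule, not the specific experiment from the paper):

import numpy as np

rng = np.random.default_rng(1)

# A qubit in an equal superposition of |0> and |1>: (|0> + |1>) / sqrt(2).
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: outcome probabilities are the squared amplitudes -> [0.5, 0.5].
probs = np.abs(state) ** 2

# Observing forces a definite outcome and breaks the superposition:
# the post-measurement state is entirely |0> or entirely |1>.
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0
print(f"measured |{outcome}>; post-measurement state: {collapsed}")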