Dawn is an inspirational marriage of technology and humanity.
China is getting set to launch the first-ever surface mission to the moon’s far side.
The robotic Chang’e 4 mission is scheduled to launch atop a Long March 3B rocket on Friday (Dec. 7) at around 1:30 p.m. EST (1830 GMT; 2:30 a.m. on Dec. 8 local China time).
If all goes according to plan, Chang’e 4’s lander-rover duo will touch down within the moon’s South Pole-Aitken (SPA) basin after a 27-day flight, then study both the surface and subsurface of this region. [China’s Moon Missions Explained (Infographic)].
DeepMind’s artificial intelligence programme AlphaZero is now showing signs of human-like intuition and creativity, in what developers have hailed as a ‘turning point’ in history.
The computer system amazed the world last year when it mastered the game of chess from scratch within just four hours, despite being given nothing beyond the rules, with no winning strategies programmed in.
But now, after a year of testing and analysis by chess grandmasters, the machine has developed a new style of play unlike anything ever seen before, suggesting the programme is now improvising like a human.
NASA’s Hubble Space Telescope was the first telescope designed to be serviced in orbit. Join Hubble astronauts live as they discuss servicing from the innovative Robotics Operations Center. Plus a robot demo!
You never know how far your #SpaceApps solution will go! Gema knows that first hand. Hear about her project Deep Asteroid, which was a 2016 finalist, and how she used NASA data and the open-source tool TensorFlow.
When NASA issued a worldwide challenge to help them better track the asteroids and comets that surround Earth, Gema Parreño answered the call. She used #TensorFlow, Google’s machine learning tool, to create a program called Deep Asteroid, which helps identify and track Near Earth Objects.
Special thanks to the Royal Observatory of Madrid. Learn more about them here: https://www.esmadrid.com/en/tourist-information/real-observa…gle.com%2F
“Our world is changing so fast… this year we have sessions on artificial intelligence, genetics and what the future holds for our planet. There is a new term now: cli-fi. We have a beautiful session on cli-fi, on what would happen if bees disappear.
“I feel at this moment in our country it is very, very important to give impetus to empirical thinking,” the author of “Paro: Dreams of Passion” said.
Nobel Laureate Venki Ramakrishnan will speak on the ‘Importance of Science’, cosmologist Priyamvada Natarajan on ‘Mapping the Heavens’ and professor of AI Toby Walsh on ‘How the Future is Now’, among others.
Interest in artificial neural networks has skyrocketed over the years as companies like Google and Facebook have invested heavily in machines that can think like humans. Today, an AI can recognize objects in photos or help generate realistic computer speech, but Nvidia has successfully built a neural network that can create an entire virtual world with the help of a game engine. The researchers speculate this “hybrid” approach could one day make AI-generated games a reality.
The system built by Nvidia engineers uses many of the same parts as other AI experiments, but they’re arranged in a slightly different way. The goal of the project was to create a simple driving simulator without any humans designing the environment.
Like all neural networks, the system needed training data. Luckily, work on self-driving cars has ensured there’s plenty of training footage of vehicles driving around city streets. The team used a segmentation network to recognize different object categories like trees, cars, sky, buildings, and so on. The segmented data is what Nvidia fed into its model, which used a generative adversarial network to improve the accuracy of the final output. Essentially, one network creates rendered scenes, and a second network passes or fails them. Over time, the generator is tuned to produce only believable imagery.
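The pass/fail dynamic described above can be sketched in miniature. The toy Python example below is not Nvidia’s code and every name in it is invented for illustration: a “rendered scene” is reduced to a single number, a one-parameter generator proposes scenes, and a discriminator passes those that resemble the real-data statistics. Its verdict stands in, very crudely, for the gradient feedback a real generative adversarial network would provide.

```python
import random

# Toy sketch of an adversarial training loop (illustrative only).
random.seed(0)

# Stand-in for real driving footage: samples from a "true" distribution.
real_scenes = [random.gauss(5.0, 1.0) for _ in range(1000)]
real_mean = sum(real_scenes) / len(real_scenes)

gen_mean = 0.0  # the generator's single learnable parameter


def generate():
    """Generator: render a scene from the current parameter."""
    return random.gauss(gen_mean, 1.0)


def discriminate(scene):
    """Discriminator: pass scenes that look statistically real."""
    return abs(scene - real_mean) < 2.0


for _ in range(2000):
    scene = generate()
    if discriminate(scene):
        # A passing scene pulls the generator toward what fooled the critic.
        gen_mean += 0.1 * (scene - gen_mean)
    else:
        # A failing scene nudges the generator toward realistic statistics,
        # a crude stand-in for the discriminator's gradient signal.
        gen_mean += 0.01 * (real_mean - gen_mean)

print(f"generator parameter after training: {gen_mean:.2f}")
```

After the loop, the generator’s parameter settles near the real data’s statistics: the critic’s pass/fail feedback alone is enough to steer it, which is the essence of the two-network arrangement the article describes.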