
Watch SpaceX’s Crew Dragon Dock Autonomously With the ISS

It’s another historic moment for the Crew Dragon mission: the docking procedure is quite different this time compared to previous Dragon missions. “Dragon was basically hovering under the ISS,” said Hans Koenigsmann, vice president of mission assurance at SpaceX, during a pre-launch briefing on Thursday, describing the earlier cargo flights. “You can see how it moves back and forth and then the [Canadarm] takes it to a berthing bay.”

In contrast, the Crew Dragon’s docking system is active, he said: “it will plant itself in front of the station and use a docking port on its own, no docking arm required.”

Five days from now, Crew Dragon will undock and make its long way back to Earth. This time around, it will splash down in the Atlantic Ocean — previous (cargo) Dragon missions have touched down in the Pacific.

Intel Unveils the Intel Neural Compute Stick 2 at Intel AI Devcon Beijing for Building Smarter AI Edge Devices


What’s New: Intel is hosting its first artificial intelligence (AI) developer conference in Beijing on Nov. 14 and 15. The company kicked off the event with the introduction of the Intel® Neural Compute Stick 2 (Intel NCS 2), designed for building smarter AI algorithms and prototyping computer vision at the network edge. Based on the Intel® Movidius™ Myriad™ X vision processing unit (VPU) and supported by the Intel® Distribution of OpenVINO™ toolkit, the Intel NCS 2 affordably speeds the development of deep neural network inference applications while delivering a performance boost over the previous-generation neural compute stick. The Intel NCS 2 enables deep neural network testing, tuning, and prototyping, so developers can move from prototyping into production, leveraging a range of Intel vision accelerator form factors in real-world applications.

“The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn’t exist before. We’re excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2.” –Naveen Rao, Intel corporate vice president and general manager of the AI Products Group

A new artificial synapse is faster and more efficient than ones in your brain

Biologically inspired circuitry could help build future low-power AI chips—if some obstacles are overcome.

The news: Researchers at the US National Institute of Standards and Technology built a new magnetically controlled electronic synapse, an artificial equivalent of the ones that link neurons. These synapses fire millions of times faster than the ones in your brain while using one-thousandth as much energy — less than any other artificial synapse to date.

Why it matters: Synthetic synapses, which gather multiple signals and fire electronic pulses at a threshold, may be an alternative to transistors in regular processors. They can be assembled to create so-called neuromorphic chips that work more like a brain. Such devices can run artificial neural networks, which underpin modern AI, more efficiently than regular chips. This new synapse could make them even more energy-efficient.
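The threshold-and-fire behavior described above can be sketched in a few lines of Python. Note this is a conceptual illustration only; the weights, threshold, and inputs are made up, not taken from the NIST device.

```python
# Minimal sketch of a threshold-firing artificial synapse/neuron:
# it accumulates weighted input signals and emits a pulse (1) once
# the accumulated value crosses a firing threshold, then resets.
# All numbers here are illustrative.

class ThresholdUnit:
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold
        self.potential = 0.0

    def step(self, inputs):
        """Accumulate weighted inputs; fire and reset at threshold."""
        self.potential += sum(w * x for w, x in zip(self.weights, inputs))
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1  # fire a pulse
        return 0

unit = ThresholdUnit(weights=[0.5, 0.3], threshold=1.0)
spikes = [unit.step([1, 1]) for _ in range(4)]
print(spikes)  # → [0, 1, 0, 1]
```

A neuromorphic chip wires many such units together so that the pulses of one feed the inputs of others, which is what lets it run neural networks natively rather than simulating them on a conventional processor.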

Introduction to Machine Learning

Machine Learning Crash Course — this module introduces Machine Learning (ML). Estimated time: 3 minutes. Learning objectives: recognize the practical benefits of mastering machine learning; understand the philosophy behind machine learning.

NASA Will Flight Test a Nuclear Rocket by 2024 and Other High Tech NASA Projects

A portion of NASA’s $21.5 billion 2019 budget is for developing advanced space power and propulsion technology. NASA will spend $176 to $217 million on maturing new technology. Some of these projects are already underway, while others NASA will start and try to complete; they span propulsion, robotics, materials, and other capabilities. Space technology as a whole received $926.9 million in NASA’s 2019 budget.

NASA’s space technology projects look interesting, but ten times more resources could be devoted to advancing technological capability if NASA’s budget and priorities were changed.

NASA is spending only about 1% of its budget on advanced space power and propulsion technology. By contrast, NASA will spend $3.5 billion in 2019 on the Space Launch System and Orion capsule. SLS will be a heavy rocket that starts off at around SpaceX Falcon Heavy capacity and later grows to about SpaceX Super Heavy Starship payload capacity. However, the SLS will cost about $1 billion per launch, roughly ten times more than a SpaceX launch costs. NASA is looking at a 2021–2022 first launch and a 2024 second launch. That would be $19+ billion from 2019–2024 to get two heavy launches — and only if there are no delays.
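The budget arithmetic above can be checked quickly. The figures below are the ones quoted in the text, rounded; the year count is an approximation of the 2019–2024 window.

```python
# Rough check of the budget figures quoted above (billions of USD).
nasa_budget_2019 = 21.5
power_propulsion = 0.2        # midpoint of the $176M-$217M range
sls_orion_per_year = 3.5
years = 5.5                   # roughly 2019 through a 2024 second launch

share = power_propulsion / nasa_budget_2019
print(f"advanced power/propulsion share: {share:.1%}")   # → 0.9%

sls_program_cost = sls_orion_per_year * years
print(f"SLS/Orion spend 2019-2024: ${sls_program_cost:.2f}B")  # → $19.25B
```

The roughly 1% share and the $19+ billion total for two launches both check out against the quoted numbers.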

NVIDIA Transfer Learning Toolkit

The NVIDIA Transfer Learning Toolkit is ideal for deep learning application developers and data scientists seeking a faster, more efficient deep learning training workflow for industry verticals such as Intelligent Video Analytics (IVA) and Medical Imaging. The Transfer Learning Toolkit abstracts and accelerates deep learning training by allowing developers to fine-tune domain-specific pre-trained models provided by NVIDIA instead of going through the time-consuming process of building Deep Neural Networks (DNNs) from scratch. The pre-trained models accelerate the developer’s training process and eliminate the higher costs associated with large-scale data collection, labeling, and training models from scratch.

The term “transfer learning” means extracting learned features from an existing neural network and reusing them in a new one by transferring the existing network’s weights. The Transfer Learning Toolkit is a Python-based toolkit that lets developers take advantage of NVIDIA’s pre-trained models and add their own data, making the networks smarter by retraining them to adapt to new conditions. The ability to simply add, prune, and retrain networks improves the efficiency and accuracy of the deep learning training workflow.
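The weight-transfer idea can be illustrated with a toy NumPy example — this is a conceptual sketch, not the Transfer Learning Toolkit API: copy the feature-extraction weights from an “existing” network, freeze them, and train only a new task-specific output layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned on a large source dataset:
# they map 4 raw inputs to 3 learned features.
pretrained_features = rng.normal(size=(4, 3))

# Transfer: copy the learned feature weights into a new network and
# freeze them; only the new output layer will be trained.
W_frozen = pretrained_features.copy()
w_out = np.zeros(3)                  # new task-specific layer

# Toy target task: y is the sum of the learned features,
# so the ideal output weights are exactly [1, 1, 1].
X = rng.normal(size=(64, 4))
features = np.tanh(X @ W_frozen)     # frozen feature extractor
y = features.sum(axis=1)

# Fine-tune only w_out by gradient descent on mean squared error.
for _ in range(2000):
    pred = features @ w_out
    grad = features.T @ (pred - y) / len(y)
    w_out -= 0.1 * grad              # W_frozen is never updated

print(np.round(w_out, 2))            # → close to [1. 1. 1.]
```

Because the feature extractor is reused rather than relearned, only a 3-parameter layer needs training here — the same economy that lets the toolkit fine-tune large pre-trained DNNs on small, domain-specific datasets.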
