Lightweight, soft robots that people can wear all day, every day, to help them regain use of their upper extremities.
A big announcement from NASA about landing on the moon is coming on Thursday.
If NASA’s stunning landing of a car-sized robot on Mars didn’t already whet your appetite for space exploration this week, mark your calendar for 2 p.m. EST on Thursday.
That’s when NASA plans to give an update about a program that aims to land privately developed spacecraft on the moon.
It’s finally the last month of the year and you know what that means: the holiday flood known as Christmas. Celebrated by many, it’s a time of year when presents are exchanged and songs are sung. Only, this year, one of those songs won’t be sung (let alone written) by a human being. Nope, this time an artificial intelligence is giving it a go!
In the spirit of Christmas, listen to the carolling tune of an artificial intelligence as it attempts to capture the very essence of what makes this holiday so beloved.
Deep learning has been making it possible for powerful machines to approximate and imitate abilities and techniques once thought to be uniquely human. Mathematicians have struggled to explain why deep neural networks work so well, and may now get some answers by looking outside mathematics and into the nature of the universe.
Microsoft said Thursday it was adopting a set of ethical principles for the use of its facial recognition technology, and urged the government to follow its lead with regulations barring unlawful discrimination and focusing on transparency.
In a blog post, Microsoft president Brad Smith pushed for the government, as well as tech companies, to regulate facial recognition technology and ensure it “creates broad societal benefits while curbing the risk of abuse.”
“The facial recognition genie, so to speak, is just emerging from the bottle,” Smith said in the post. “Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues.”
Over the past few years, classical convolutional neural networks (cCNNs) have led to remarkable advances in computer vision. Many of these algorithms can now categorize objects in good quality images with high accuracy.
However, in real-world applications such as autonomous driving or robotics, imaging data rarely consists of pictures taken under ideal lighting conditions. Often, the images that cCNNs would need to process feature occluded objects, motion distortion, or low signal-to-noise ratios (SNRs), as a result of either poor image quality or low light levels.
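To make the low-SNR point concrete, here is a minimal sketch (not drawn from any of the studies mentioned here) that degrades a synthetic image with additive Gaussian noise and reports the resulting signal-to-noise ratio in decibels; the image and noise levels are made up purely for illustration:

```python
import numpy as np

def add_gaussian_noise(image, sigma):
    """Corrupt an image (float array in [0, 1]) with additive Gaussian noise."""
    noisy = image + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def snr_db(clean, noisy):
    """Signal-to-noise ratio in decibels: 10 * log10(signal power / noise power)."""
    noise = noisy - clean
    return 10.0 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

# Toy example: a synthetic 64x64 gradient "image" degraded at two noise levels.
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
for sigma in (0.05, 0.5):
    noisy = add_gaussian_noise(clean, sigma)
    print(f"sigma={sigma}: SNR = {snr_db(clean, noisy):.1f} dB")
```

The larger the noise level sigma, the lower the SNR, and the harder it becomes for a single-frame classifier to pick out the underlying structure.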
Although cCNNs have also been successfully used to denoise images and enhance their quality, these networks cannot combine information from multiple frames or video sequences and are hence easily outperformed by humans on low-quality images. Till S. Hartmann, a neuroscience researcher at Harvard Medical School, has recently carried out a study that addresses these limitations, introducing a new CNN approach for analyzing noisy images.
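The post does not detail the study's architecture, so what follows is only a hedged sketch of the general idea, assuming the key ingredient is a recurrent hidden state that lets a convolutional network accumulate evidence across successive noisy frames; the class and parameter names (RecurrentDenoisingCNN, channels, num_classes) are hypothetical and not taken from Hartmann's paper:

```python
import torch
import torch.nn as nn

class RecurrentDenoisingCNN(nn.Module):
    """Illustrative sketch: a convolutional encoder whose features are carried
    across frames by a hidden state, so evidence accumulates over a noisy sequence."""

    def __init__(self, channels=16, num_classes=10):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Recurrent update: mix current-frame features with the running hidden state.
        self.update = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.classify = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_classes)
        )

    def forward(self, frames):
        # frames: (batch, time, 1, H, W) -- a sequence of noisy views of the same scene.
        hidden = None
        for t in range(frames.shape[1]):
            features = self.encode(frames[:, t])
            if hidden is None:
                hidden = features
            else:
                hidden = torch.relu(self.update(torch.cat([hidden, features], dim=1)))
        return self.classify(hidden)

# Toy usage: a batch of 2 sequences, each with 4 noisy 32x32 frames.
model = RecurrentDenoisingCNN()
frames = torch.rand(2, 4, 1, 32, 32)
logits = model(frames)
print(logits.shape)  # torch.Size([2, 10])
```

Because the hidden state is carried from frame to frame, each additional noisy view can sharpen the network's estimate of the underlying scene, which is exactly what a single-frame cCNN cannot do.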