Casper the Robot is making a difference at a hospital in Lisbon: http://cnnmon.ie/2y2YRSk
Imagine using cheap supplies from a local hardware store to print a wristband that charges your smartphone or electric car.
That’s the direction materials research is heading at Brunel University London, where scientists have become the first to simply and affordably 3D print a flexible, wearable ‘battery’.
The technique opens the way for novel designs for super-efficient, wearable power for phones, electric cars, medical implants like pacemakers and more.
LONDON: A humanoid robot took the stage at the Future Investment Initiative yesterday and had an amusing exchange with the host to the delight of hundreds of delegates.
Smartphones were held aloft as Sophia, a robot designed by Hong Kong company Hanson Robotics, gave a presentation that demonstrated her capacity for human expression.
Sophia made global headlines when she was granted Saudi citizenship, making the kingdom the first country in the world to extend citizenship to a robot.
Since emerging as a species, we have seen the world only through human eyes. Over the last few decades, we have added satellite imagery to that terrestrial viewpoint. Now, with recent advances in Artificial Intelligence (AI), we are not only able to see more from space but to see the world in new ways too.
One example is “Penny”, a new AI platform that can predict the median income of an area on Earth from space. It may even help us make cities smarter than is humanly possible. We’re already using machines to make sense of the world as it is; the possibility before us is that machines help us create the world as it should be, and have us question the thinking behind its design.
Penny is a free tool built using high-resolution imagery from DigitalGlobe, income data from the US census, neural network expertise from Carnegie Mellon and intuitive visualizations from Stamen Design. It’s a virtual cityscape (for New York City and St. Louis, so far), where AI has been trained to recognize, with uncanny accuracy, patterns of neighbourhood wealth (trees, parking lots, brownstones and freeways) by correlating census data with satellite imagery.
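The article does not publish Penny’s code, but the recipe it describes, a neural network trained on satellite image tiles labelled with census income data, can be sketched in a few lines. Everything below (class names, tile size, number of income brackets) is an illustrative assumption, not the DigitalGlobe/Carnegie Mellon implementation.

```python
# Minimal sketch of the general approach the article describes: train a
# convolutional network to predict a census-derived income bracket from a
# satellite image tile. Architecture and labels are placeholders.
import torch
import torch.nn as nn

class IncomeFromImagery(nn.Module):
    """Small CNN mapping a 3-channel satellite tile to an income bracket."""
    def __init__(self, num_income_brackets: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_income_brackets)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = IncomeFromImagery()
tiles = torch.randn(8, 3, 256, 256)         # stand-in batch of satellite tiles
income_bracket = torch.randint(0, 5, (8,))  # census-derived label per tile
loss = nn.CrossEntropyLoss()(model(tiles), income_bracket)
loss.backward()  # gradients for one step; a real run loops over a tile dataset
```

In a real pipeline the random tensors would be replaced by georeferenced tiles paired with the income statistics of the census tract each tile falls in.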
A group of astronomers from the universities of Groningen, Naples and Bonn has developed a method that finds gravitational lenses in enormous piles of observations. The method is based on the same artificial intelligence algorithm that Google, Facebook and Tesla have been using in recent years. The researchers published their method and 56 new gravitational lens candidates in the November issue of Monthly Notices of the Royal Astronomical Society.
When a galaxy is hidden behind another galaxy, we can sometimes see light from the hidden galaxy bent around the foreground system. This phenomenon is called a gravitational lens because it follows from Einstein’s theory of general relativity, which says that mass can bend light. Astronomers search for gravitational lenses because they help in the study of dark matter.
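For readers who want the quantitative version: the standard textbook formula for the Einstein radius (general relativity background, not taken from the article) gives the angular scale on which a foreground mass bends a perfectly aligned background source into a ring.

```latex
% Einstein radius of a point-mass lens (textbook result):
\theta_E = \sqrt{\frac{4 G M}{c^2}\,\frac{D_{LS}}{D_L D_S}}
% M: lens mass, G: gravitational constant, c: speed of light,
% D_L, D_S, D_{LS}: distances to the lens, to the source,
% and between lens and source.
```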
The hunt for gravitational lenses is painstaking. Astronomers have to sort through thousands of images, assisted by enthusiastic volunteers around the world. Until now, the search has more or less kept pace with the supply of new images. But new observations with special telescopes that survey large sections of the sky are adding millions of images, and humans cannot keep up with that pace.
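The “same algorithm Google, Facebook and Tesla use” is a convolutional neural network. As a rough illustration of how such a lens finder scales to survey volumes, not the researchers’ actual pipeline (the architecture, cutout size and shortlist size below are placeholders), a classifier scores every image cutout and only the top-ranked candidates go to human inspection.

```python
# Hedged sketch of a convolutional lens finder: score survey cutouts and keep
# the highest-scoring ones as candidates for visual follow-up.
import torch
import torch.nn as nn

class LensFinder(nn.Module):
    """Binary classifier: does this cutout contain a gravitational lens?"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, cutout: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(cutout))  # probability of "lens"

finder = LensFinder().eval()                 # assume weights already trained
survey = torch.randn(2048, 1, 101, 101)      # stand-in single-band cutouts
scores = []
with torch.no_grad():
    for batch in survey.split(256):          # score the survey in batches
        scores.append(finder(batch).squeeze(1))
scores = torch.cat(scores)
candidates = torch.topk(scores, k=20).indices  # shortlist for human inspection
```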
EPFL scientists from the Center for Neuroprosthetics have used functional MRI to show how the brain re-maps motor and sensory pathways following targeted motor and sensory reinnervation (TMSR), a neuroprosthetic approach where residual limb nerves are rerouted towards intact muscles and skin regions to control a robotic limb.
Targeted motor and sensory reinnervation (TMSR) is a surgical procedure for patients with amputations that reroutes residual limb nerves towards intact muscles and skin in order to fit them with a limb prosthesis allowing unprecedented control. By its nature, TMSR changes the way the brain processes motor control and somatosensory input; however, the detailed brain mechanisms have never been investigated, and the success of TMSR prostheses will depend on our ability to understand how the brain re-maps these pathways. Now, EPFL scientists have used ultra-high-field 7 Tesla fMRI to show how TMSR affects upper-limb representations in the brains of patients with amputations, in particular in the primary motor cortex, the somatosensory cortex, and regions processing more complex brain functions. The findings are published in Brain.
Targeted motor and sensory reinnervation (TMSR) is used to improve the control of upper-limb prostheses. Residual nerves from the amputated limb are transferred to reinnervate and activate new muscle targets. This way, a patient fitted with a TMSR prosthetic “sends” motor commands to the re-innervated muscles, where his or her movement intentions are decoded and sent to the prosthetic limb. In the other direction, direct stimulation of the skin over the re-innervated muscles is relayed back to the brain, inducing touch perception on the missing limb.
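As a rough illustration of the “decode movement intentions” step, not the clinical decoder used with TMSR patients (the channel count, features and command set below are assumptions), a simple classifier can map windows of muscle activity recorded over the re-innervated sites to prosthetic commands.

```python
# Hedged sketch: classify short windows of EMG from the re-innervated muscles
# into prosthetic commands using two classic time-domain features and a
# linear discriminant classifier. All data here is synthetic stand-in data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

COMMANDS = ["rest", "hand_open", "hand_close", "wrist_rotate"]  # illustrative

def emg_features(window: np.ndarray) -> np.ndarray:
    """Mean absolute value and waveform length per channel for one EMG window
    (shape: samples x channels)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

rng = np.random.default_rng(0)
# Stand-in training set: 200 windows of 200 samples x 6 EMG channels.
windows = rng.standard_normal((200, 200, 6))
labels = rng.integers(0, len(COMMANDS), size=200)

X = np.stack([emg_features(w) for w in windows])
decoder = LinearDiscriminantAnalysis().fit(X, labels)

new_window = rng.standard_normal((200, 6))
command = COMMANDS[decoder.predict(emg_features(new_window)[None, :])[0]]
print(command)  # prosthetic command inferred from the muscle signals
```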