Sony Corp. is planning next spring to roll out a dog-shaped pet robot similar to its discontinued Aibo with updated components that could allow it to control home appliances, people familiar with the matter said.

Sony is preparing for a media event in November to show off the product, the people said. It is unclear whether the new product will use the Aibo name and how much it will cost.

Read more

Argo AI LLC, a driverless-car developer controlled by Ford Motor Co., has purchased a 17-year-old company that makes laser systems needed to operate cars without human intervention, an important step for a conventional Detroit auto maker looking to boost its role in shaping the industry’s transformation.

Argo AI said Friday it is buying New Jersey-based Princeton Lightwave Inc. for an undisclosed price, a move that provides Ford with more immediate access to so-called lidar systems that use lasers to create a 3D view of the…

Read more

LONDON: A humanoid robot took the stage at the Future Investment Initiative yesterday and had an amusing exchange with the host to the delight of hundreds of delegates.

Smartphones were held aloft as Sophia, a robot designed by Hong Kong company Hanson Robotics, gave a presentation that demonstrated her capacity for human expression.

Sophia made global headlines when she was granted Saudi citizenship, making the kingdom the first country in the world to offer its citizenship to a robot.

Read more

Since emerging as a species we have seen the world through only human eyes. Over the last few decades, we have added satellite imagery to that terrestrial viewpoint. Now, with recent advances in Artificial Intelligence (AI), we are not only able to see more from space but to see the world in new ways too.

One example is “Penny”, a new AI platform that can predict the median income of an area on Earth from space. It may even help us make cities smarter than is humanly possible. We’re already using machines to make sense of the world as it is; the possibility before us is that machines help us create the world as it should be, and have us question the thinking behind its design.

Penny is a free tool built using high-resolution imagery from DigitalGlobe, income data from the US census, neural network expertise from Carnegie Mellon and intuitive visualizations from Stamen Design. It’s a virtual cityscape (for New York City and St. Louis, so far), where AI has been trained to recognize, with uncanny accuracy, patterns of neighbourhood wealth (trees, parking lots, brownstones and freeways) by correlating census data with satellite imagery.
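The pattern behind Penny — correlating image-derived features with census income — can be sketched in a few lines. This is a toy illustration, not Penny's actual model: the feature names, weights, and data below are all invented, and a simple least-squares fit stands in for the trained deep neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each satellite image tile is reduced to a feature
# vector (e.g. fractions of pixels classified as trees, parking lots,
# rooftops). Penny's real pipeline learns these features with a deep
# network; here we generate them synthetically.
n_tiles = 200
features = rng.random((n_tiles, 3))            # [tree, parking, rooftop] fractions
true_weights = np.array([40_000, -15_000, 10_000])   # invented ground truth
income = 50_000 + features @ true_weights + rng.normal(0, 2_000, n_tiles)

# Fit by ordinary least squares: X w ≈ y, with an intercept column.
X = np.hstack([np.ones((n_tiles, 1)), features])
w, *_ = np.linalg.lstsq(X, income, rcond=None)

# Predict median income for a new tile that is mostly tree cover.
new_tile = np.array([1.0, 0.8, 0.1, 0.1])       # [intercept, tree, parking, rooftop]
predicted = new_tile @ w
```

Even this linear stand-in recovers the invented weights from noisy data, which is the core idea: visible land-use patterns carry a learnable statistical signal about neighbourhood wealth.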

Read more

A group of astronomers from the universities of Groningen, Naples and Bonn has developed a method that finds gravitational lenses in enormous piles of observations. The method is based on the same artificial intelligence algorithm that Google, Facebook and Tesla have been using in recent years. The researchers published their method, along with 56 new gravitational lens candidates, in the November issue of Monthly Notices of the Royal Astronomical Society.

When a galaxy is hidden behind another galaxy, we can sometimes see the hidden one around the front system. This phenomenon is called a gravitational lens, because it follows from Einstein’s theory of general relativity, which says that mass can bend light. Astronomers search for gravitational lenses because they aid the study of dark matter.

The hunt for gravitational lenses is painstaking. Astronomers have to sort through thousands of images, assisted by enthusiastic volunteers around the world. Until now, the search more or less kept pace with the availability of new images. But new observations with special telescopes that survey large sections of the sky are adding millions of images, and humans cannot keep up with that pace.
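The idea of replacing human sorters with an automated detector can be illustrated with a toy example. The sketch below is not the researchers' pipeline (they trained a convolutional neural network on survey images); it uses a hand-made ring-shaped matched filter on synthetic images as the simplest stand-in for the features such a network would learn.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_image(with_ring):
    """Synthetic 21x21 'sky' cutout; a bright ring mimics an Einstein ring."""
    img = rng.normal(0, 0.1, (21, 21))          # background noise
    if with_ring:
        y, x = np.mgrid[-10:11, -10:11]
        r = np.hypot(x, y)
        img += np.exp(-((r - 6) ** 2) / 2.0)    # ring of radius ~6 px
    return img

# A ring-shaped, zero-mean matched filter: the simplest stand-in for the
# convolutional features a trained CNN would extract.
y, x = np.mgrid[-10:11, -10:11]
r = np.hypot(x, y)
template = np.exp(-((r - 6) ** 2) / 2.0)
template -= template.mean()

def score(img):
    """Correlation of the cutout with the ring template."""
    return float((img * template).sum())

# Even-indexed images contain a ring; odd-indexed ones are pure noise.
images = [make_image(i % 2 == 0) for i in range(10)]
scores = [score(im) for im in images]
candidates = [i for i, s in enumerate(scores) if s > 10]   # -> [0, 2, 4, 6, 8]
```

The point of the sketch: a fixed numerical criterion can scan millions of cutouts at machine speed and hand astronomers a short candidate list, which is exactly the bottleneck the CNN approach removes.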

Read more

EPFL scientists from the Center for Neuroprosthetics have used functional MRI to show how the brain re-maps motor and sensory pathways following targeted motor and sensory reinnervation (TMSR), a neuroprosthetic approach where residual limb nerves are rerouted towards intact muscles and skin regions to control a robotic limb.

Targeted motor and sensory reinnervation (TMSR) is a surgical procedure for patients with amputations that reroutes residual limb nerves towards intact muscles and skin in order to fit them with a limb prosthesis allowing unprecedented control. By its nature, TMSR changes the way the brain processes motor control and somatosensory input; however, the detailed brain mechanisms have never been investigated before, and the success of TMSR prostheses will depend on our ability to understand how the brain re-maps these pathways. Now, EPFL scientists have used ultra-high field 7 Tesla fMRI to show how TMSR affects upper-limb representations in the brains of patients with amputations, in particular in the primary motor and somatosensory cortices and in regions processing more complex brain functions. The findings are published in Brain.

Targeted motor and sensory reinnervation (TMSR) is used to improve the control of upper-limb prostheses. Residual nerves from the amputated limb are transferred to reinnervate and activate new muscle targets. This way, a patient fitted with a TMSR prosthesis “sends” motor commands to the re-innervated muscles, where his or her movement intentions are decoded and sent to the prosthetic limb. Conversely, direct stimulation of the skin over the re-innervated muscles is relayed back to the brain, inducing touch perception of the missing limb.
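The decoding step described above — mapping activation patterns across re-innervated muscle sites to movement intentions — can be sketched with a toy example. This is not the clinical decoder: the electrode count, the template values, and the nearest-centroid rule are all invented for illustration.

```python
import numpy as np

# Toy setup: each "recording" is the mean rectified EMG amplitude at four
# hypothetical electrode sites over re-innervated muscles. Each intended
# movement is assumed to produce a characteristic activation pattern.
templates = {
    "open_hand":  np.array([0.9, 0.1, 0.2, 0.1]),
    "close_hand": np.array([0.1, 0.8, 0.1, 0.2]),
    "rotate":     np.array([0.2, 0.1, 0.9, 0.1]),
}

def decode(emg):
    """Return the movement whose template pattern is closest to the EMG reading."""
    return min(templates, key=lambda k: np.linalg.norm(emg - templates[k]))

reading = np.array([0.85, 0.15, 0.25, 0.05])   # strong activity at site 1
command = decode(reading)                       # -> "open_hand"
```

Real systems use richer features and trained classifiers, but the principle is the same: the prosthesis acts on whichever movement class best explains the measured muscle activity.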

Read more