
Perceiving an object only visually (e.g. on a screen) or only by touching it can sometimes limit what we are able to infer about it. Human beings, however, have the innate ability to integrate visual and tactile stimuli, leveraging whatever sensory data is available to complete their daily tasks.

Researchers at the University of Liverpool have recently proposed a new framework to generate cross-modal sensory data, which could help to replicate both visual and tactile sensations in situations in which one of the two is not directly accessible. Their framework could, for instance, allow people to perceive objects on a screen (e.g. clothing items on e-commerce sites) both visually and tactually.

“In our daily experience, we can cognitively create a visualization of an object based on a tactile response, or a tactile response from viewing a surface’s texture,” Dr. Shan Luo, one of the researchers who carried out the study, told TechXplore. “This perceptual phenomenon, called synesthesia, in which the stimulation of one sense causes an involuntary reaction in one or more of the other senses, can be employed to make up an inaccessible sense. For instance, when one grasps an object, our vision will be obstructed by the hand, but a touch response will be generated to ‘see’ the corresponding features.”
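
The team's model itself isn't excerpted here, but cross-modal generation of this kind is commonly framed as conditional image-to-image translation: encode a patch of one modality and decode a patch of the other. Below is a minimal, hypothetical PyTorch sketch of a visual-to-tactile generator; the architecture, input sizes, and loss are illustrative assumptions, not the Liverpool team's published model.

```python
# Hypothetical sketch: visual -> tactile cross-modal generation.
# Shapes, layers, and loss are illustrative assumptions, not the
# published Liverpool architecture.
import torch
import torch.nn as nn

class VisualToTactile(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode a 3-channel 64x64 visual patch into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        # Decode the features into a 1-channel tactile "image".
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, visual):
        return self.decoder(self.encoder(visual))

model = VisualToTactile()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # simple pixel-wise reconstruction; GAN losses are also common

# One illustrative training step on random stand-in data.
visual = torch.rand(8, 3, 64, 64)   # batch of visual patches
tactile = torch.rand(8, 1, 64, 64)  # paired tactile readings (stand-in)
loss = loss_fn(model(visual), tactile)
opt.zero_grad()
loss.backward()
opt.step()
```

The same encoder-decoder shape, mirrored, would sketch the reverse tactile-to-visual direction the researchers describe.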


Quartz’s Lyft story isn’t the most groundbreaking work of journalism in the world, but it’s an interesting proof of concept about how reporters can leverage new tools to pull interesting takeaways from otherwise dry public records — and, perhaps, a preview of things to come.

“This is taking [data journalism] to the next level where we’re trying to get journalists comfortable using computers to do some of this pattern matching, sorting, grouping, anomaly detection — really working with especially large data sets,” John Keefe, Quartz’s technical architect for bots and machine learning, told Digiday back when the Quartz AI Studio first launched.
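
Quartz's code isn't shown in the story, but the core idea, flagging passages in one filing that look unlike everything else in a corpus, can be sketched in a few lines. A hedged example using scikit-learn TF-IDF vectors; the documents and scoring threshold are toy placeholders:

```python
# Hypothetical sketch of the "find the unusual passages" idea:
# rank sentences by how similar they are to the rest of a corpus
# of filings. The sentences below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Competition could harm our business and operating results.",
    "Economic conditions could harm our business and operating results.",
    "Failure to retain drivers could harm our business and operating results.",
    "Our efforts to reduce greenhouse gas emissions may not succeed.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)
sims = cosine_similarity(vectors)
n = len(corpus)

# Lowest average similarity to the others = most unusual passage.
ranked = sorted(range(n), key=lambda i: (sims[i].sum() - 1.0) / (n - 1))
for i in ranked:
    avg = (sims[i].sum() - 1.0) / (n - 1)  # exclude self-similarity (1.0)
    print(f"{avg:.3f}  {corpus[i]}")
```

On this toy corpus the greenhouse-gas sentence scores lowest because it shares no content words with the boilerplate risk language around it, which is essentially the "risk factors that other companies don't mention" angle.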

READ MORE: Here’s what Lyft talks about as risk factors that other companies don’t [Quartz].


An Israeli spacecraft on its maiden mission to the moon has sent its first selfie back to Earth, mission chiefs said on Tuesday.

The image showing part of the Beresheet spacecraft with Earth in the background was beamed to mission control in Yehud, Israel – 23,360 miles (37,600 km) away, the project’s lead partners said.

The partners, NGO SpaceIL and state-owned Israel Aerospace Industries, launched the unmanned Beresheet – Hebrew for Genesis – from Cape Canaveral in Florida on 22 February.


The Atacama Desert in Chile has been a hotbed of astronomical activity of late. Not only is it the site of Martian environmental simulations to test rover capabilities, it is also home to a project called SPECULOOS (Search for habitable Planets EClipsing ULtra-cOOl Stars).

SPECULOOS is an ESO (European Southern Observatory) project that uses four robotic telescopes for planet hunting. The telescopes observe nearby ultracool stars and brown dwarfs, searching for Earth-sized exoplanets that can then be investigated in more detail by other instruments such as ESO’s forthcoming Extremely Large Telescope (ELT).

The four telescopes of SPECULOOS are named after Jupiter’s Galilean moons: Io, Europa, Ganymede, and Callisto. Each has a one-metre primary mirror fitted with a camera sensitive to near-infrared wavelengths, matching the light given off by the ultracool stars and brown dwarfs that are the telescopes’ targets.
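
The appeal of ultracool stars is simple geometry: a transit dims a star by roughly (R_planet / R_star)², so a small star gives a much deeper, easier-to-detect dip. A quick back-of-the-envelope check; the stellar radii below are round-number assumptions:

```python
# Back-of-the-envelope transit depths: why small, cool stars are
# good hunting grounds for Earth-sized planets. Radii are
# round-number assumptions, in units of Earth radii.
R_EARTH = 1.0
R_SUN = 109.0       # the Sun is ~109 Earth radii
R_ULTRACOOL = 11.0  # an ultracool dwarf can be roughly Jupiter-sized

def transit_depth(r_planet, r_star):
    """Fractional dimming when the planet crosses the stellar disk."""
    return (r_planet / r_star) ** 2

sun_depth = transit_depth(R_EARTH, R_SUN)
dwarf_depth = transit_depth(R_EARTH, R_ULTRACOOL)

print(f"Earth transiting a Sun-like star:    {sun_depth*1e6:.0f} ppm")
print(f"Earth transiting an ultracool dwarf: {dwarf_depth*100:.2f}%")
print(f"Signal is ~{dwarf_depth / sun_depth:.0f}x stronger around the dwarf")
```

The roughly hundredfold stronger signal is what makes Earth-sized transits detectable from one-metre-class ground telescopes at all.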


Spider silk, already known as one of the strongest materials for its weight and tougher than steel, can be used to create artificial muscles or robotic actuators, scientists say.

According to researchers from the Massachusetts Institute of Technology (MIT) in the US, the resilient fibres respond very strongly to changes in humidity.
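
To picture how that humidity response could drive an actuator, here is a toy model of a control curve. The threshold and gain below are invented purely for illustration; they are not numbers from the MIT study.

```python
# Toy illustration of a humidity-driven silk actuator. The threshold
# and gain are invented for illustration and are NOT values reported
# by the MIT researchers.
def silk_twist(rel_humidity, threshold=70.0, gain=2.5):
    """Hypothetical twist rate (deg per mm of fiber) vs. relative humidity (%)."""
    if rel_humidity <= threshold:
        return 0.0  # below the assumed threshold, no actuation
    return gain * (rel_humidity - threshold)

for rh in (40, 60, 70, 80, 90, 100):
    print(f"RH {rh:3d}%: twist ~{silk_twist(rh):5.1f} deg/mm (toy numbers)")
```

The point of the sketch is the shape of the response, a sharp onset above some humidity level, which is what makes the fibres attractive as simple, electronics-free actuators.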


It’s yet another historic moment for the Crew Dragon mission, as the docking procedure this time is quite different from previous Dragon missions: “Dragon was basically hovering under the ISS,” said Hans Koenigsmann, vice president of mission assurance at SpaceX, during a pre-launch briefing on Thursday. “You can see how it moves back and forth and then the [Canadarm] takes it to a berthing bay.”

In contrast, the Crew Dragon’s docking system is active, he said: “it will plant itself in front of the station and use a docking port on its own, no docking arm required.”
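
To make the berthing-vs-docking contrast concrete: berthing is passive (the arm does the work), while active docking means the vehicle runs its own approach sequence. Below is a purely illustrative state machine for such a sequence; the phases and hold points are generic rendezvous concepts, not SpaceX’s actual guidance software.

```python
# Purely illustrative: a generic autonomous-docking phase sequence.
# These are textbook rendezvous concepts, not SpaceX's actual GNC logic.
from enum import Enum, auto

class Phase(Enum):
    FAR_APPROACH = auto()    # close the distance under ground supervision
    HOLD_POINT = auto()      # pause until ground/crew give a "go"
    FINAL_APPROACH = auto()  # slow closure along the docking axis
    SOFT_CAPTURE = auto()    # docking-ring latches engage
    HARD_MATE = auto()       # hooks close, forming a pressurized seal

def next_phase(phase, go_for_docking=True):
    """Advance through the sequence; hold position if no 'go' is given."""
    if phase is Phase.HOLD_POINT and not go_for_docking:
        return Phase.HOLD_POINT  # wait for approval
    order = list(Phase)
    i = order.index(phase)
    return order[min(i + 1, len(order) - 1)]

phase = Phase.FAR_APPROACH
while phase is not Phase.HARD_MATE:
    phase = next_phase(phase)
    print(phase.name)
```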

Five days from now, Crew Dragon will undock and make its long way back to Earth. This time around, it will splash down in the Atlantic Ocean; previous (cargo) Dragon missions have touched down in the Pacific.



What’s New: Intel is hosting its first artificial intelligence (AI) developer conference in Beijing on Nov. 14 and 15. The company kicked off the event with the introduction of the Intel® Neural Compute Stick 2 (Intel NCS 2), designed for building smarter AI algorithms and prototyping computer vision applications at the network edge. Based on the Intel® Movidius™ Myriad™ X vision processing unit (VPU) and supported by the Intel® Distribution of OpenVINO™ toolkit, the Intel NCS 2 affordably speeds the development of deep neural network inference applications while delivering a performance boost over the previous-generation neural compute stick. The Intel NCS 2 enables deep neural network testing, tuning and prototyping, so developers can move from prototyping into production, leveraging a range of Intel vision accelerator form factors in real-world applications.
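
On the developer side, using the stick amounts to loading an OpenVINO-converted model onto the “MYRIAD” device. A rough sketch with the OpenVINO Python inference API of that era follows; the model paths are placeholders, and exact call names vary between toolkit releases.

```python
# Rough sketch: running inference on the NCS 2 ("MYRIAD" device)
# via the OpenVINO Python API. Model paths are placeholders, and
# exact API names differ across toolkit releases.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# A model previously converted to OpenVINO IR format (.xml + .bin).
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 224, 224]

# Stand-in input; a real app would feed a preprocessed camera frame.
frame = np.random.rand(*shape).astype(np.float32)
result = exec_net.infer(inputs={input_name: frame})
print({name: out.shape for name, out in result.items()})
```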

“The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn’t exist before. We’re excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2.” –Naveen Rao, Intel corporate vice president and general manager of the AI Products Group
