
Neat!


Grabity: a wearable haptic interface for simulating weight and grasping in VR (credit: UIST 2017)

Drawing in air, touchless control of virtual objects, and a modular mobile phone with snap-in sections (for lending to friends, family members, or even strangers) are among the innovative user-interface concepts to be introduced at the 30th ACM User Interface Software and Technology Symposium (UIST 2017) on October 22–25 in Quebec City, Canada.

Driverless cars need superhuman senses, and for the most part they seem to have them, in the form of lidar, radar, ultrasound, near-infrared, and other sensors. But regular cameras, often overlooked in favor of more exotic technologies, are incredibly important: they collect the data used to, say, read the messages on road signs. So Sony’s new image sensor is designed to give that everyday camera vision a boost.

The new $90 IMX324 has an effective resolution of only 7.42 megapixels, which sounds small compared to your smartphone camera. But with about three times the vertical resolution of most car camera sensors, it packs a punch. It can see road signs from 160 meters away, has low-light sensitivity that allows it to see pedestrians in dark situations, and offers a trick that captures dark sections at high sensitivity but bright sections at high resolution in order to max out image recognition. The image above shows how much sharper the new sensor is than its predecessor from the same distance.
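Sony hasn’t published the details of that dual-capture mode, but the general idea resembles a two-exposure merge: take dark regions from a high-sensitivity frame and bright regions from a sharper, lower-sensitivity frame. Here is a minimal sketch of that concept, assuming two already-aligned frames of equal size and an arbitrary luminance threshold; the function names and numbers are illustrative, not Sony’s implementation.

```python
import numpy as np

def merge_dual_capture(high_sensitivity, high_resolution, threshold=0.25):
    """Toy merge of two aligned frames: dark regions come from the
    high-sensitivity capture, bright regions from the sharper capture.

    Both inputs are float RGB arrays in [0, 1] with the same shape; the
    threshold separating "dark" from "bright" is arbitrary here.
    """
    # Rough per-pixel brightness from the sharper frame.
    luminance = high_resolution.mean(axis=-1, keepdims=True)
    dark_mask = luminance < threshold
    return np.where(dark_mask, high_sensitivity, high_resolution)

# Usage with two fake 4x4 RGB frames standing in for real captures.
bright_noisy = np.random.rand(4, 4, 3) * 0.5 + 0.3   # high-sensitivity stand-in
dark_sharp = np.random.rand(4, 4, 3) * 0.4            # high-resolution stand-in
merged = merge_dual_capture(bright_noisy, dark_sharp)
print(merged.shape)
```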

Don’t expect a beefed-up camera to eliminate the need for other sensors, though: even with strong low-light performance, cameras don’t work well in the dark, and they can’t offer the precise ranging abilities of other sensors. That means lidar and radar will remain crucial complements to humble optical cameras, however fancy they get.

Read more

White-collar automation has become a common buzzword in debates about the growing power of computers, as software shows potential to take over some work of accountants and lawyers. Artificial-intelligence researchers at Google are trying to automate the tasks of highly paid workers more likely to wear a hoodie than a coat and tie—themselves.

In a project called AutoML, Google’s researchers have taught machine-learning software to build machine-learning software. In some instances, what it comes up with is more powerful and efficient than the best systems the researchers themselves can design. Google says the system recently scored a record 82 percent at categorizing images by their content. On the harder task of marking the location of multiple objects in an image, an important task for augmented reality and autonomous robots, the auto-generated system scored 43 percent. The best human-built system scored 39 percent.

Such results are significant because the expertise needed to build cutting-edge AI systems is scarce—even at Google. “Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” said Google CEO Sundar Pichai last week, briefly namechecking AutoML at a launch event for new smartphones and other gadgets. “We want to enable hundreds of thousands of developers to be able to do it.”
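Google hasn’t released the AutoML system described here, and its actual approach is far more sophisticated, but the underlying idea—one program searching over candidate model designs and keeping the best performer—can be sketched in a few lines. Below is a purely illustrative random search over a toy search space; the `evaluate` function is a stand-in for actually training and scoring a model.

```python
import random

# Candidate design choices for a small feed-forward network (illustrative only).
SEARCH_SPACE = {
    "num_layers": [2, 3, 4],
    "layer_width": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture():
    """Draw one random architecture from the search space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the candidate and measuring validation accuracy.
    A real system would train and test a model here; this toy version just
    returns a made-up score so the loop runs end to end."""
    return random.random()

def architecture_search(trials=20):
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = architecture_search()
    print(f"best candidate: {arch} (score {score:.3f})")
```

Google’s researchers use far stronger search strategies than random sampling, but the loop above captures the core of “machine-learning software building machine-learning software.”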

Read more

Have you ever had the perfect photo ruined by someone with their eyes closed in the shot? You could fix the problem with a bit of cloning from an alternate shot using a photo editing app—but Adobe is making the process much easier in the new 2018 version of Photoshop Elements with a dedicated ‘Open Closed Eyes’ feature.

You can spend an entire career using Photoshop and still not master the software’s every last feature, but that complexity can be intimidating to the millions of amateur photographers created by the advent of affordable digital SLRs and even smartphones. That’s where Photoshop Elements comes in. It’s a lighter version of Photoshop with training wheels that simplifies many popular photo editing techniques. A better way to describe it might be as a version of Photoshop your parents could stumble their way through with minimal phone calls to you.

Read more

Researchers from the University of Nebraska-Lincoln, Harvard Medical School and MIT have designed a smart bandage that could eventually heal chronic wounds or battlefield injuries with every fiber of its being.

The bandage consists of electrically conductive fibers coated in a gel that can be individually loaded with infection-fighting antibiotics, tissue-regenerating growth factors, painkillers or other medications.

A microcontroller no larger than a postage stamp, which could be triggered by a smartphone or other wireless device, sends small amounts of voltage through a chosen fiber. That voltage heats the fiber and its hydrogel, releasing whatever cargo it contains.
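The researchers’ firmware isn’t described here, but the behavior above—a wireless trigger selecting a fiber and heating it so its hydrogel releases its payload—maps onto a very small piece of control logic. A hedged sketch follows, with entirely hypothetical fiber assignments, voltage levels, and dosing times, and a mock driver standing in for real microcontroller hardware.

```python
# Hypothetical mapping of fiber channels to the drug loaded in each hydrogel coating.
FIBER_PAYLOADS = {
    0: "antibiotic",
    1: "growth factor",
    2: "painkiller",
}

def drive_fiber(channel, volts, seconds):
    """Mock stand-in for the microcontroller's voltage driver. Real firmware
    would toggle a DAC or PWM output here; this just logs what it would do."""
    print(f"applying {volts} V to fiber {channel} for {seconds} s")

def release_dose(drug, volts=0.5, seconds=10):
    """Heat the fiber carrying the requested drug so its hydrogel releases it."""
    for channel, payload in FIBER_PAYLOADS.items():
        if payload == drug:
            drive_fiber(channel, volts, seconds)
            return True
    return False  # no fiber loaded with that drug

# Example: a smartphone command asks the bandage to release a painkiller dose.
release_dose("painkiller")
```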

Read more

Payne and another Google employee demonstrated a conversation between someone speaking Swedish and another person responding in English.

During the demonstration, one employee, speaking Swedish, had Pixel Buds and the Pixel phone. When the phone was addressed in English, the earbuds translated the phrase into Swedish in her ear. The Swedish speaker then spoke back in Swedish through the earbuds by pressing the right bud to summon Google Assistant, which translated that Swedish reply into an English phrase and played it through the phone’s speakers so the English speaker could hear it.
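Google hasn’t published the Pixel Buds pipeline, but the back-and-forth in that demo can be modeled as a pair of translate-and-play steps. The rough sketch below uses a placeholder `translate` function standing in for a real translation service and print calls standing in for audio playback; all names are illustrative.

```python
def translate(text, source, target):
    """Placeholder for a real translation service; here it just labels the
    text so the conversational flow is visible."""
    return f"[{source}->{target}] {text}"

def phone_hears_english(english_phrase):
    """English speech picked up by the phone is translated and played in the
    Swedish speaker's earbuds."""
    swedish = translate(english_phrase, "en", "sv")
    print("earbuds play:", swedish)

def earbuds_hear_swedish(swedish_reply):
    """Pressing the right earbud summons the assistant, which translates the
    Swedish reply and plays it through the phone's speaker."""
    english = translate(swedish_reply, "sv", "en")
    print("phone speaker plays:", english)

# One round of the demo conversation.
phone_hears_english("Where is the nearest coffee shop?")
earbuds_hear_swedish("Det ligger runt hörnet.")
```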

While this idea might sound far-fetched, Google CEO Sundar Pichai told investors in January that Google Translate was set to make big leaps this year.

Read more