
A team of Stanford researchers has developed a novel means of teaching artificial intelligence systems to predict how humans will respond to their actions. They’ve given their knowledge base, dubbed Augur, access to the online writing community Wattpad and its archive of more than 600,000 stories. This information will enable support vector machines (a class of learning algorithms) to better predict what people do in the face of various stimuli.

“Over many millions of words, these mundane patterns [of people’s reactions] are far more common than their dramatic counterparts,” the team wrote in their study. “Characters in modern fiction turn on the lights after entering rooms; they react to compliments by blushing; they do not answer their phones when they are in meetings.”

In its initial field tests, using an Augur-powered wearable camera, the system correctly identified objects and people 91 percent of the time. It correctly predicted their next move 71 percent of the time.
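
The basic recipe is easy to picture: pair contexts mined from fiction with the reactions that tend to follow them, then train a classifier on those pairs. Here is a toy sketch using a scikit-learn support vector machine, the same family of classifier the article mentions; the training pairs below are invented for illustration and bear no relation to Augur’s actual corpus or pipeline.

```python
# Toy illustration (not Augur's actual pipeline): predict a likely human
# reaction to a context sentence with a bag-of-words SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical (context, reaction) pairs of the kind mined from fiction.
contexts = [
    "she walked into the dark room",
    "he received a compliment on his work",
    "her phone rang during the meeting",
    "he entered the unlit hallway",
    "they praised her presentation",
    "the phone buzzed while she was in a meeting",
]
reactions = [
    "turn on the lights",
    "blush",
    "ignore the phone",
    "turn on the lights",
    "blush",
    "ignore the phone",
]

model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(contexts, reactions)

# Word overlap with the "dark room" examples should favor the lights.
print(model.predict(["he stepped into a dark kitchen"]))
```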

Read more

K-Glass, the augmented reality (AR) smart glasses first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014, with a second version released in 2015, is back with an even stronger model. The latest version, which KAIST researchers are calling K-Glass 3, lets users text a message or type in keywords for Internet surfing by offering a virtual keyboard for text, and even one for a piano.

Currently, most wearable head-mounted displays (HMDs) suffer from a lack of rich user interfaces, short battery life, and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and are not optimized for wearable smart glasses. Recently, gaze recognition was proposed for HMDs, including K-Glass 2, but gaze alone is insufficient to realize a natural user interface (UI) and experience (UX), such as gesture recognition, because of its limited interactivity and its lengthy gaze-calibration time, which can run to several minutes.

As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs with just bare hands. This processor is composed of a pre-processing core to implement stereo vision, seven deep-learning cores to accelerate real-time scene recognition within 33 milliseconds, and one rendering engine for the display.
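
To make that 33-millisecond figure concrete: it is roughly one frame at 30 frames per second, within which all three stages must finish. The sketch below illustrates the shape of such a per-frame pipeline; the stage functions are hypothetical placeholders, not KAIST’s actual processor interface.

```python
# Minimal sketch of a stereo -> recognition -> render frame pipeline
# with a real-time deadline. All stage functions are placeholders.
import time

FRAME_BUDGET_S = 0.033  # 33 ms, i.e. ~1/30 s per frame at 30 fps

def preprocess_stereo(left, right):
    """Placeholder for the stereo-vision pre-processing core."""
    return {"depth": None}

def recognize_scene(frame):
    """Placeholder for the seven deep-learning recognition cores."""
    return {"hands": [], "keys_pressed": []}

def render_overlay(result):
    """Placeholder for the rendering engine (e.g. virtual keyboard)."""
    pass

def process_frame(left, right):
    start = time.perf_counter()
    frame = preprocess_stereo(left, right)
    result = recognize_scene(frame)
    render_overlay(result)
    elapsed = time.perf_counter() - start
    assert elapsed < FRAME_BUDGET_S, "missed the 33 ms real-time deadline"

process_frame(left=None, right=None)  # dummy camera inputs
```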

Read more

And this will only be the beginning: with the lighter-weight materials that have been developed, we will see some amazing VR suits coming.


Virtual reality could one day incorporate all the senses, creating a rich and immersive experience, but existing virtual reality headsets simulate only what you can see and hear. Now, a group of engineers wants to help people “touch” virtual environments in a more natural way, and they have built a wearable suit to do just that.

Designed by Lucian Copeland, Morgan Sinko and Jordan Brooks while they were students at the University of Rochester, in New York, the suit looks something like a bulletproof vest or light armor. Each section of the suit has a small motor in it, not unlike the one that makes a mobile phone vibrate to signal incoming messages. In addition, there are small accelerometers embedded in the suit’s arms.

The vibrations provide a sense of touch when a virtual object hits that part of the body, and the accelerometers help orient the suit’s limbs in space, the researchers said.
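
The control loop implied here is simple to sketch: the VR engine reports a collision against a body region, and the suit drives that region’s motor in response. A minimal illustration follows; the `Motor` class, region names, and `on_virtual_collision` hook are all invented for the example, since the students’ actual firmware isn’t public.

```python
# Hypothetical sketch: map virtual collisions to per-region vibration.
from dataclasses import dataclass

@dataclass
class Motor:
    region: str  # e.g. "left_forearm", "chest"

    def vibrate(self, intensity: float, duration_ms: int) -> None:
        # Real hardware would send a PWM duty cycle to the motor here.
        print(f"{self.region}: {intensity:.0%} for {duration_ms} ms")

MOTORS = {r: Motor(r) for r in ["chest", "back", "left_forearm", "right_forearm"]}

def on_virtual_collision(region: str, impact_force: float) -> None:
    """Called by the VR engine when an object hits a body region."""
    motor = MOTORS.get(region)
    if motor:
        # Scale normalized force (0..1) to vibration intensity, clamped.
        motor.vibrate(min(impact_force, 1.0), duration_ms=120)

on_virtual_collision("chest", 0.8)
```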

Read more

Your smartwatch screen may soon be rather more impressive: this 4.7-inch organic LCD is flexible enough to wrap right around a wrist.

Produced by UK-based FlexEnable, the screen squeezes a full-color organic LCD onto a sheet just one hundredth of an inch thick, which makes it highly conformable. The company claims it can easily run vivid color and smooth video content, which is a sight better than most wearables manage.

It’s not the first flexible display, of course. LG already has an 18-inch OLED panel that has enough flexibility to roll into a tube that’s an inch across. But this concept—which, sadly, is all it is right now—is the first large, conformable OLCD designed for wearables that we’ve seen.

Read more

The jury may still be out on the usefulness of the Internet of Things, but payments giant Visa is 100 percent sure that it doesn’t want to miss out. Today, it announced plans to push Visa payments into numerous fields. We’re talking “wearables, automobiles, appliances, public transportation services, clothing, and almost any other connected device” — basically anything that can or will soon connect to the internet.

Visa imagines a future where you’ll be able to pay for parking from your car dashboard or order a grocery delivery from your fridge. It makes sense, then, that Samsung is one of the first companies to sign up to the Visa Ready Program, alongside Accenture, universal payment card company Coin and Fit Pay. Chronos and Pebble are also working to integrate secure payments inside their devices.

To show off the technology, which works with any credit card, Visa or otherwise, the company has teamed up with Honda to develop an in-car app that helps automate payments. Right now they have two demos, the first of which concerns refueling. It warns the driver when their fuel level is low and directs them to the nearest gas station. Once the car arrives at the pump, the app calculates the expected cost and allows the driver to pay for the fuel without having to leave the vehicle.
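
The refueling demo boils down to a short event-driven sequence: detect low fuel, route to a station, estimate the cost, and authorize payment from the dashboard. Here is a rough Python sketch of that flow; every function name below is a hypothetical placeholder, not any real Visa or Honda API.

```python
# Illustrative flow for the in-car refueling demo; all helpers are stubs.
LOW_FUEL_THRESHOLD = 0.15  # warn below 15% of tank

def find_nearest_station() -> str:
    return "Station A"  # stand-in for a real navigation query

def navigate_to(station: str) -> None:
    print(f"Routing to {station}")

def driver_confirms(prompt: str) -> bool:
    print(prompt)
    return True  # stand-in for a dashboard confirmation tap

def authorize_payment(station: str, amount: float) -> None:
    print(f"Authorized ${amount:.2f} to {station}")  # stand-in for a tokenized charge

def on_fuel_reading(fuel_level: float, tank_liters: float, price_per_liter: float) -> None:
    if fuel_level >= LOW_FUEL_THRESHOLD:
        return
    station = find_nearest_station()
    navigate_to(station)
    liters_needed = (1.0 - fuel_level) * tank_liters
    expected_cost = liters_needed * price_per_liter
    if driver_confirms(f"Pay ~${expected_cost:.2f} at {station}?"):
        authorize_payment(station, expected_cost)

on_fuel_reading(fuel_level=0.10, tank_liters=50.0, price_per_liter=1.50)
```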

Read more

For now, wearable tech clothing seems effective only for outdoor/sports wear and occupational wear. I believe more needs to be done to work with design houses such as Marc Jacobs, Versace, and Dior. What I have learned is that the major design houses are not fully bought into wearable tech clothing; the tech industry will need to learn how to make fabric technology more attractive to the name-brand design houses if wearable tech clothing is to achieve wider adoption.


“If anyone could pull it off, it would be you” is not the most affirmative compliment.

Read more

Seeking to “push the limits of what humans can do,” researchers at Georgia Tech have developed a wearable robotic limb that transforms drummers into three-armed cyborgs.

The remarkable thing about this wearable arm, developed at GT’s Center for Music Technology, is that it’s doing a lot more than just mirroring the movements of the drummer. It’s a “smart arm” that actually responds to the music, performing in a way that complements what the human player is doing.

The two-foot-long arm monitors the music in the room so it can improvise based on the beat and rhythm. If the drummer is playing slowly, for example, the arm will match that slower tempo.
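
Georgia Tech hasn’t published the arm’s control software, but the tempo-following behavior it describes can be approximated with off-the-shelf beat tracking. A minimal sketch using the librosa library (an assumption on my part; the team may use something entirely different):

```python
# Estimate the room's tempo from audio and derive a strike interval.
import librosa
import numpy as np

def strike_interval_seconds(audio_path: str) -> float:
    """Return the seconds between arm strikes at the detected tempo."""
    y, sr = librosa.load(audio_path)
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)
    bpm = float(np.atleast_1d(tempo)[0])  # librosa may return a 1-element array
    return 60.0 / bpm

# e.g. at 120 BPM the arm would strike every 0.5 s:
# print(strike_interval_seconds("drums.wav"))
```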

Read more

Fujitsu Laboratories today announced that it has developed deep learning technology that can analyze time-series data with a high degree of accuracy. Time-series data shows promise for Internet-of-Things applications, but it can be subject to severe volatility, making it difficult for people to discern patterns in it. Deep learning technology, which is attracting attention as a breakthrough in the advance of artificial intelligence, has achieved extremely high recognition accuracy with images and speech, but the types of data to which it can be applied are still limited. In particular, it has been difficult to accurately and automatically classify volatile time-series data, such as that taken from IoT devices, in which people have difficulty discerning patterns.

Now Fujitsu Laboratories has developed an approach to deep learning that uses advanced techniques to extract geometric features from time-series data, enabling highly accurate classification of volatile time-series. In benchmark tests that classified time-series data from gyroscopes in wearable devices, drawn from the UC Irvine Machine Learning Repository, the new technology achieved roughly 85% accuracy, about a 25% improvement over existing technology. It will be used in Fujitsu’s Human Centric AI Zinrai artificial intelligence technology. Details will be presented at the Fujitsu North America Technology Forum (NAFT 2016), to be held on Tuesday, February 16, in Santa Clara, California.
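
Fujitsu hasn’t disclosed its implementation, and its geometric features are certainly more sophisticated, but the pipeline it describes (extract features from volatile time-series windows, then classify them) can be illustrated generically. A minimal sketch with invented features and synthetic gyroscope-like data:

```python
# Generic illustration only, not Fujitsu's method: summarize each window
# of a volatile signal as a feature vector, then train a small neural net.
import numpy as np
from sklearn.neural_network import MLPClassifier

def features(window: np.ndarray) -> np.ndarray:
    """Summarize one window of sensor samples as a small feature vector."""
    return np.array([
        window.mean(),
        window.std(),
        np.abs(np.diff(window)).mean(),  # average sample-to-sample change
        window.max() - window.min(),     # overall range
    ])

rng = np.random.default_rng(0)
# Two synthetic "activities": low-volatility vs. high-volatility motion.
X = np.array([features(rng.normal(0, s, 100)) for s in [0.5] * 50 + [2.0] * 50])
y = np.array([0] * 50 + [1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print(clf.predict([features(rng.normal(0, 2.0, 100))]))  # expect class 1
```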

Background

In recent years, in the field of machine learning, which is a central technology in artificial intelligence, deep learning has been attracting attention as a way to automatically extract the feature values needed to interpret and assess phenomena, without rules being taught manually. In the IoT era especially, massive volumes of time-series data are being accumulated from devices. Applying deep learning to this data and classifying it with a high degree of accuracy enables further analyses, holding the prospect of creating new value and opening new business areas.

Read more