
A cheaper way for consumers to get into VR: are buyers of pricey headsets potentially being ripped off?


The Oculus Rift finally went on sale, but that $600 price tag is a bit too steep for some to justify. Fortunately, VR doesn’t have to be expensive. Take this virtual reality cycling rig that someone created for $40.

It’s the work of Paul Yan, who’s the animation director at Toys for Bob — the studio that developed Skylanders and kicked off the toys-to-life revolution. He previously figured out how to build an “Arduino thing” that could talk to a smartphone via Bluetooth LE and he wanted to put his contraption to good use.

He whipped up a cityscape in Unity, set his mountain bike up on an indoor trainer, and marked the rear tire with a small piece of paper. The Arduino tracks pedaling speed by keeping tabs on how long it takes the paper to make a revolution, and it relays that information back to the phone, which is then clipped into a VR headset. The phone tells the virtual bike to keep pace: pedal faster, and the street scene moves by more quickly. If you’d rather cruise around the neighborhood, pedal more slowly.
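The control loop described above is simple enough to sketch. The following Python snippet is a toy model of that logic, assuming the Arduino reports the time for one rear-tire revolution; the wheel circumference and the speed scaling are made-up values for illustration, not details from the article.

```python
# Toy sketch of the rig's speed logic. Assumption: the Arduino reports the
# time, in seconds, for one revolution of the marked rear tire. The wheel
# circumference and scale factor below are invented values.

WHEEL_CIRCUMFERENCE_M = 2.1  # roughly a 26" mountain-bike tire (assumed)

def wheel_speed_mps(revolution_time_s: float) -> float:
    """Ground speed implied by one revolution of the marked tire."""
    if revolution_time_s <= 0:
        return 0.0
    return WHEEL_CIRCUMFERENCE_M / revolution_time_s

def virtual_speed(revolution_time_s: float, scale: float = 1.0) -> float:
    """Speed fed to the Unity street scene: pedal faster, the paper marker
    comes around sooner, and the virtual bike speeds up."""
    return wheel_speed_mps(revolution_time_s) * scale

# One revolution per second is about 2.1 m/s; halve the revolution
# time and the virtual speed doubles.
```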

I can see it now, come November: I walk into my local Verizon store, and Pepper the robot greets me, takes my name, and either gets me in line for the next service tech or shows me the latest devices.


Pepper, the lovable humanoid robot, is preparing to take a step into entrepreneurship and staff its own smartphone shop in Japan.

Creator company SoftBank said it planned to open the pop-up mobile store employing only Pepper robots by the end of March, according to Engadget.

The four-foot-tall robots will be on hand to answer questions, provide directions and guide customers in taking out phone contracts until early April. It’s currently unknown what brands of phone Pepper will be selling.


Cannot wait for the new AR contacts.


NEW YORK, Jan. 21, 2016 /PRNewswire/ — This new IDTechEx report focuses on how the market for smart glasses and contact lenses will evolve over the next decade, based on the exciting research and development efforts of recent years, along with the high visibility some projects and collaborations have enjoyed. That visibility is spurring developers of a range of allied technologies to fast-track and focus their efforts, and to create devices and components designed specifically to serve this emerging industry.

Some of the newest devices that have ignited significant interest in smart eyewear go above and beyond the conventional definition of a smart object; they are, in effect, portable wearable computers with a host of functionalities and specially designed apps that add new ways for the wearer to interact with the world, along with smartphone capabilities, health-tracking options and many other features. The more advanced devices have sparked worldwide innovation efforts aiming to create an ecosystem of components that will enable what is bound to be a revolution in wearable form factors.

User interface is probably one of the most significant features in this revolution. As interfacing with computers undergoes a constant evolution, allowing for wider adoption as interaction becomes more “natural”, smartglasses are bringing about the next big step in this ever-changing space. From keyboards to touchscreens to cameras & positioning/location/infrared sensors, a new wave of innovation is making interfacing with computers gesture-based, and nowhere else is that more obvious than in eye-worn computing.

Go Hubo


The so-called ‘fourth industrial revolution’ will bring ever faster cycles of innovation, posing huge challenges to companies, workers, governments and societies alike. Implantable mobile phones. 3D-printed organs for transplant. Clothes and reading glasses connected to the Internet.

Such things may be science fiction today but they will be scientific fact by 2025 as the world enters an era of advanced robotics, artificial intelligence and gene editing, according to executives surveyed by the World Economic Forum (WEF).

Nearly half of those questioned also expect an artificial intelligence machine to be sitting on a corporate board of directors within the next decade.

Virginia Tech’s Professor Doug Bowman comes to Apple to work on VR. This should be very interesting, since he recently won a research grant to work on HoloLens.


According to a report in the Financial Times, Apple has hired one of the leading experts on virtual and augmented reality — Virginia Tech computer science professor Doug Bowman. He was recently listed among grant winners for HoloLens research projects and is skilled in creating 3D user interfaces, reports Engadget. He has also co-authored a book called 3D User Interfaces: Theory and Practice.

He’s been working on technologies such as wearable displays and full surround display prototypes at Virginia Tech.

Apple has recently been building up its VR arsenal with a string of acquisitions in the domain, along with reported patents and other significant hires. While much has been happening behind closed doors, analysts predict that 2016 is the year that changes. Apple will become “very aggressive on the virtual/augmented reality front through organic as well as acquisitive means in 2016 as this represents a natural next generation consumer technology that plays well into its unrivaled iPhone ecosystem,” FBR & Co analyst Daniel Ives said in an earlier report.

Amazing stuff!


Image-analyzing software has been around for a while now. It’s how Google’s reverse image search works, and how you are able to deposit a check via ATM or even smartphone. Image creation is a newer development. Google’s Deep Dream, released last year, recreates images fed to it by compositing other images, shapes, and colors into a twisted version of the original. The obvious next step is software that can create an image from a description, and WordsEye has gotten there first.

WordsEye is new software that converts language into 3-D images. In its current beta state, WordsEye’s images are constructed from pre-existing, manipulable 3-D models, textures, and light sources. The results are surreal, cartoonish, and a little unsettling, but that shouldn’t detract from a real advance in artificial intelligence.
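The core idea — mapping a declarative sentence onto stock 3-D assets plus a spatial relation — can be illustrated with a toy sketch. This is not WordsEye’s actual parser or asset library; every name here is invented for illustration.

```python
import re

# Toy illustration of the idea behind text-to-scene systems: parse a simple
# declarative sentence into an object, a spatial relation, and a supporting
# object, each backed by a pre-existing 3-D model. (Hypothetical sketch,
# not WordsEye's real pipeline.)

KNOWN_MODELS = {"cube", "sphere", "table", "cat", "vase"}  # assumed asset set

def parse_scene(sentence: str) -> dict:
    """Parse 'the <object> is on the <object>.' into a scene description."""
    m = re.match(r"the (\w+) is on the (\w+)\.?$", sentence.strip().lower())
    if not m:
        raise ValueError("unsupported sentence structure")
    subject, support = m.groups()
    for noun in (subject, support):
        if noun not in KNOWN_MODELS:
            raise ValueError(f"no 3-D model for {noun!r}")
    # A real system would now place the models, apply textures and lights,
    # and render; here we just return the structured scene.
    return {"object": subject, "relation": "on", "support": support}
```

The hard part WordsEye actually solves — handling open-ended vocabulary, many relations, and plausible placement — is exactly what this sketch leaves out.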

A basic description for WordsEye to interpret might be as simple as a short declarative scene, something along the lines of “the cat is on the table.”

I’m standing on the corner of 15th Street and Third Avenue in New York City, and I’m freezing. But my iPhone is on fire. After connecting to one of LinkNYC’s gigabit wireless hotspots, the futuristic payphone replacements that went live for beta testing this morning, I’m seeing download speeds of 280 Mbps and upload speeds of 317 Mbps (based on Speedtest’s benchmark). To put it in perspective, that’s around ten times the speed of the average American home internet connection (which now sits at 31 Mbps). And to top it all off, LinkNYC doesn’t cost you a thing.
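The “around ten times” claim is easy to sanity-check against the article’s own figures:

```python
# Checking the article's claim using its own numbers.
link_nyc_down_mbps = 280   # measured LinkNYC download speed
avg_us_home_mbps = 31      # stated average US home connection

ratio = link_nyc_down_mbps / avg_us_home_mbps
print(f"LinkNYC is about {ratio:.1f}x the average US home connection")
# 280 / 31 is roughly 9.0, so "around ten times" holds up.
```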
