
Dear readers,

My son Ethan Kurzweil, a partner at Bessemer Venture Partners, tracks the future of web innovation, social and legal concerns about privacy, and start-ups that have an edge with their business or consumer applications, like team sourcing or software-as-a-service.

He appeared on the CNBC business affairs show Power Lunch. The episode debated the recent news about the US government and law enforcement asking Apple to release private data on an iPhone used by terrorists.

Read more

A realistic article on AI, especially on the danger of AI being manipulated by others for their own gain, which I have also identified as one of the real risks of AI.


Artificial intelligence (AI), once the seeming red-headed stepchild of the scientific community, has come a long way in the past two decades. Most of us have reconciled ourselves to the fact that we can’t live without our smartphones and Siri, and AI’s seemingly omnipotent nature has infiltrated the nearest and farthest corners of our lives, from robo-advisors on Wall Street and crime-spotting security cameras, to big data analysis by Google’s BigQuery and Watson’s entry into diagnostics in the medical field.

In many unforeseen ways, AI is helping to improve our lives and make them more efficient, though the reverse, a degeneration of human economic and cultural structures, is also a potential reality. The Future of Life Institute’s tagline sums it up in succinct fashion: “Technology is giving life the potential to flourish like never before…or to self-destruct.” Humans are the creators, but will we always have control of our revolutionary inventions?

To much of the general public, AI is AI is AI, but this is only partly true. Today, there are two primary strands of AI development: ANI (Artificial Narrow Intelligence) and AGI (Artificial General Intelligence). ANI is often termed “weak AI” and is “the expert” of the pair, using its intelligence to perform specific functions. Most of the technology with which we surround ourselves (including Siri) falls into the ANI bucket. AGI is the next generation of ANI, and it’s the type of AI behind dreams of building a machine that achieves human levels of consciousness.

Read more

AT&T is going “over the top” with television.

In the fourth quarter of this year, AT&T will start selling cable-like bundles of TV to people across the country through a new app. Subscribers won’t need an AT&T wireless phone or an AT&T broadband connection at home.

It’ll be like Netflix — download the app, sign up, type in a credit card number, and start streaming a TV show.

Read more

What will the world look like when we move beyond the keyboard and mouse? Interaction designer Sean Follmer is building a future with machines that bring information to life under your fingers as you work with it. In this talk, check out prototypes for a 3D shape-shifting table, a phone that turns into a wristband, a deformable game controller and more that may change the way we live and work.


A team of Stanford researchers has developed a novel means of teaching artificial intelligence systems how to predict a human’s response to their actions. They’ve given their knowledge base, dubbed Augur, access to the online writing community Wattpad and its archive of more than 600,000 stories. This information will enable support vector machines (basically, learning algorithms) to better predict what people do in the face of various stimuli.

“Over many millions of words, these mundane patterns [of people’s reactions] are far more common than their dramatic counterparts,” the team wrote in their study. “Characters in modern fiction turn on the lights after entering rooms; they react to compliments by blushing; they do not answer their phones when they are in meetings.”

In its initial field tests, using an Augur-powered wearable camera, the system correctly identified objects and people 91 percent of the time. It correctly predicted their next move 71 percent of the time.
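To ground the idea, here is a minimal, hypothetical sketch of the core technique the article describes: a support vector machine learning to associate short passages of text with the human activity they imply. The toy training pairs and label names below are invented for illustration, not drawn from Augur itself.

```python
# Minimal sketch: train a linear SVM to predict a human activity
# from a short text context, in the spirit of Augur.
# The tiny training set and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# (context sentence, activity label) pairs, as might be mined from fiction
examples = [
    ("She walked into the dark room and reached for the wall", "turn_on_lights"),
    ("He entered the kitchen at night, squinting", "turn_on_lights"),
    ("Her phone buzzed in the middle of the board meeting", "ignore_phone"),
    ("His cell rang while the client was mid-sentence", "ignore_phone"),
    ("You look wonderful tonight, he said softly", "blush"),
    ("Everyone praised her painting at the gallery", "blush"),
]
texts, labels = zip(*examples)

# TF-IDF text features feeding a linear support vector classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

# Predict the likely next human action for an unseen context
print(model.predict(["He stepped into the unlit hallway"])[0])
```

At Augur’s scale, the same idea runs over millions of sentences, which is why the mundane patterns the researchers describe dominate the dramatic ones.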

Read more

I love high capacity things. So when Samsung announced it’s producing 256 GB flash storage that can be used in mobile devices, I swooned. The memory is two times faster than the previous generation of Universal Flash Storage (UFS) memory, meaning that phones will not only have greater storage capacities, but will also breeze through reading and writing operations.

Nonetheless, there are probably still a lot of you thinking this isn’t a huge deal. You might say that the most popular Android phones already support microSD expandable memory, or that Android 6.0 Marshmallow supports adoptable storage, making it easier for your phone to read and write to expandable storage. But that would be missing the point.

Expandable storage has always been a bandage on a much greater problem plaguing Android phones: high capacity flash memory was too costly and too bulky to include in older smartphones. Plus, expandable memory has never performed nearly as well as internal UFS memory. Although Android 6.0 Marshmallow supports a new adoptable storage feature that basically treats external memory as internal memory, neither of Android’s two biggest vendors, LG nor Samsung, supports the feature in its new smartphones.
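For the curious, Marshmallow’s adoptable storage can still be triggered from a developer machine over adb even where the vendor hides the option; below is a rough Python-driven sketch. The `sm` storage-manager shell commands ship with Android 6.0, but the disk ID shown is a hypothetical placeholder that varies per device, and the operation erases the card.

```python
# Hypothetical sketch: adopting a microSD card as internal storage on an
# Android 6.0 device via adb, driven from Python. The disk ID below is
# device-specific (read it from `sm list-disks` output), not a fixed value.
import subprocess

def adb(*args):
    """Run an adb shell command and return its trimmed output."""
    result = subprocess.run(["adb", "shell", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# List candidate disks; prints something like "disk:179,64"
print(adb("sm", "list-disks"))

# Format the whole card as adopted ("private") storage so Android
# treats it as internal memory. WARNING: this erases the card.
# Replace disk:179,64 with the ID reported above for your device.
adb("sm", "partition", "disk:179,64", "private")
```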

Read more

K-Glass, the augmented reality (AR) smart glasses first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014 and updated with a second version in 2015, is back with an even stronger model. The latest version, which KAIST researchers are calling K-Glass 3, allows users to text a message or type in keywords for Internet surfing by offering a virtual keyboard for text and even one for a piano.

Currently, most wearable head-mounted displays (HMDs) suffer from a lack of rich user interfaces, short battery lives, and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and are not optimized for wearable smart glasses. Recently, gaze recognition was proposed for HMDs, including K-Glass 2, but gaze alone is insufficient to realize a natural user interface (UI) and user experience (UX), such as gesture recognition, due to its limited interactivity and lengthy gaze-calibration time, which can take up to several minutes.

As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs with just bare hands. This processor is composed of a pre-processing core to implement stereo vision, seven deep-learning cores to accelerate real-time scene recognition within 33 milliseconds (roughly a 30-frames-per-second rate), and one rendering engine for the display.
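To make that division of labor concrete, here is a hypothetical sketch of the three-stage, per-frame pipeline the description implies. The stage functions and their contents are illustrative stand-ins under assumed names, not KAIST’s implementation; only the structure and the 33 ms budget come from the article.

```python
# Hypothetical sketch of a K-Glass-3-style per-frame pipeline:
# stereo pre-processing -> deep-learning scene recognition -> rendering.
# Stage internals are stand-ins; only the structure mirrors the description.
import time

FRAME_BUDGET_S = 0.033  # 33 ms per frame, i.e. roughly 30 fps

def preprocess_stereo(left_frame, right_frame):
    """Stand-in for the stereo-vision pre-processing core (e.g., depth)."""
    return {"depth": "depth-map-placeholder"}

def recognize_scene(depth_data):
    """Stand-in for the seven deep-learning scene-recognition cores."""
    return {"objects": ["keyboard_surface"], "hands": ["index_fingertip"]}

def render_overlay(scene):
    """Stand-in for the rendering engine drawing the virtual keyboard."""
    return f"overlay for {scene['objects']}"

def process_frame(left_frame, right_frame):
    start = time.perf_counter()
    scene = recognize_scene(preprocess_stereo(left_frame, right_frame))
    overlay = render_overlay(scene)
    elapsed = time.perf_counter() - start
    # A real pipeline would drop or simplify work when over budget
    if elapsed > FRAME_BUDGET_S:
        print(f"over budget: {elapsed * 1000:.1f} ms")
    return overlay

print(process_frame("L", "R"))
```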

Read more