
Four virtual reality (VR) veterans from Discovery Digital, Oculus Story Studio and Lightshed officially launched their new company out of stealth mode in San Francisco this week. Dubbed Tomorrow Never Knows, the studio aims to use virtual and augmented reality, as well as other emerging technologies including artificial intelligence, for groundbreaking storytelling projects, co-founder and CEO Nathan Brown said in an interview with Variety.

“The thesis behind the company is to consistently violate the limits of storytelling, forcing the creation of new tools, methodologies and workflow and to do this intentionally so we create original creative and technology IP,” he said.

Before founding Tomorrow Never Knows, Brown co-founded Discovery VR, which has become one of the most ambitious network-backed VR outlets. Also hailing from Discovery VR is Tomorrow Never Knows co-founder Tom Lofthouse. They are joined by Gabo Arora, whose previous work as the founder of Lightshed included VR documentaries like “Clouds Over Sidra” and “Waves of Grace,” as well as Oculus Story Studio co-founder Saschka Unseld, the director of the Emmy Award-winning VR animation short “Henry” and the Emmy-nominated VR film “Dear Angelica.”

Read more

Many large cities (Seoul, Tokyo, Shenzhen, Singapore, Dubai, London, San Francisco) serve as test beds for autonomous vehicle trials in a competitive race to develop “self-driving” cars. Ports and warehouses are increasingly automated and robotized. Testing of delivery robots and drones is gathering pace beyond the warehouse gates. Automated control systems are monitoring, regulating and optimizing traffic flows. Automated vertical farms are reinventing food production in “non-agricultural” urban areas around the world. New mobile health technologies carry the promise of healthcare “beyond the hospital.” Social robots in many guises – from police officers to restaurant waiters – are appearing in urban public and commercial spaces.


Tokyo, Singapore and Dubai are becoming prototype ‘robot cities,’ as governments start to see automation as the key to urban living.

Read more

In a talk given today at the American Association for Cancer Research’s annual meeting, Google researchers described a prototype of an augmented reality microscope that could be used to help physicians diagnose patients. When pathologists are analyzing biological tissue to see if there are signs of cancer — and if so, how much and what kind — the process can be quite time-consuming. And it’s a practice that Google thinks could benefit from deep learning tools. But in many places, adopting AI technology isn’t feasible. The company, however, believes this microscope could allow groups with limited funds, such as small labs and clinics, or developing countries to benefit from these tools in a simple, easy-to-use manner. Google says the scope could “possibly help accelerate and democratize the adoption of deep learning tools for pathologists around the world.”

The microscope is an ordinary light microscope, the kind used by pathologists worldwide. Google just tweaked it a little in order to introduce AI technology and augmented reality. First, neural networks are trained to detect cancer cells in images of human tissue. Then, after a slide with human tissue is placed under the modified microscope, the same image a person sees through the scope’s eyepieces is fed into a computer. AI algorithms detect cancer cells in the tissue, and the system outlines them in the image seen through the eyepieces. It’s all done in real time and works quickly enough that it’s still effective when a pathologist moves a slide to look at a new section of tissue.
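The loop Google describes (capture a frame, run per-pixel inference, overlay an outline) can be sketched in a few lines of Python. This is a minimal illustration, not Google’s system: the tiny random-weight network and the grab_frame helper are stand-ins for the trained cancer detector and the camera that shares the eyepiece’s optical path.

```python
# A minimal sketch of the real-time overlay loop described above, not
# Google's actual system. The random-weight ConvNet stands in for a
# trained cancer-cell detector; grab_frame stands in for the camera feed.
import numpy as np
import torch
import torch.nn as nn

# Stand-in for a trained per-pixel tumor classifier (assumption: the real
# model is a large CNN trained on labeled pathology slides).
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 1), nn.Sigmoid(),
).eval()

def grab_frame() -> np.ndarray:
    """Stand-in for the camera that shares the eyepiece's optical path."""
    return np.random.rand(512, 512, 3).astype(np.float32)

def outline(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels of a boolean mask (mask minus its 4-neighbour erosion)."""
    eroded = mask.copy()
    eroded[1:] &= mask[:-1]; eroded[:-1] &= mask[1:]
    eroded[:, 1:] &= mask[:, :-1]; eroded[:, :-1] &= mask[:, 1:]
    return mask & ~eroded

with torch.no_grad():
    frame = grab_frame()                                # H x W x 3 in [0, 1]
    x = torch.from_numpy(frame).permute(2, 0, 1)[None]  # 1 x 3 x H x W
    prob = detector(x)[0, 0].numpy()                    # per-pixel tumor score
    edge = outline(prob > 0.5)
    frame[edge] = (0.0, 1.0, 0.0)  # green outline fed back into the eyepiece
```

In the real device this runs continuously, so the outline tracks the tissue as the pathologist moves the slide.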

Read more

Army researchers have developed an artificial intelligence and machine learning technique that produces a visible face image from a thermal image of a person’s face captured in low-light or nighttime conditions. This development could lead to enhanced real-time biometrics and post-mission forensic analysis for covert nighttime operations.
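One common way to frame this kind of thermal-to-visible synthesis is as image-to-image translation with an encoder-decoder network. The sketch below is an assumption in that spirit, not the ARL team’s published architecture; random tensors stand in for a dataset of aligned thermal/visible face pairs.

```python
# A toy thermal-to-visible translation model; an assumption of one common
# approach, not the ARL team's exact method. Random tensors stand in for
# aligned thermal/visible face pairs.
import torch
import torch.nn as nn

class Thermal2Visible(nn.Module):
    """Tiny encoder-decoder: 1-channel thermal in, 3-channel visible out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = Thermal2Visible()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

thermal = torch.rand(8, 1, 128, 128)  # stand-in thermal face crops
visible = torch.rand(8, 3, 128, 128)  # stand-in aligned visible faces

for step in range(20):                # toy training loop
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(thermal), visible)
    loss.backward()
    opt.step()
```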

Thermal cameras, such as FLIR (forward-looking infrared) sensors, are actively deployed on aerial and ground vehicles, in watch towers and at checkpoints for surveillance purposes. More recently, thermal cameras have also become available as body-worn cameras. The ability to perform automatic face recognition at night using such thermal cameras would be beneficial for informing a Soldier that an individual is someone of interest, such as someone on a watch list.
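The downstream matching step, comparing a face representation against a watch list, is typically done with embedding vectors and a similarity score. The sketch below is purely illustrative: the gallery names, the 128-dimensional embeddings and the acceptance threshold are all invented.

```python
# A hedged sketch of watch-list matching by cosine similarity between face
# embeddings. The embedding size, names and threshold are illustrative
# assumptions, not part of the ARL system.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
gallery = {f"subject_{i}": rng.normal(size=128) for i in range(5)}  # watch list
probe = rng.normal(size=128)  # embedding of the synthesized visible face

best = max(gallery, key=lambda name: cosine(probe, gallery[name]))
score = cosine(probe, gallery[best])
if score > 0.5:  # illustrative acceptance threshold
    print(f"possible match: {best} (similarity {score:.2f})")
```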

The motivations for this technology—developed by Drs. Benjamin S. Riggan, Nathaniel J. Short and Shuowen “Sean” Hu, from the U.S. Army Research Laboratory—are to enhance both automatic and human-matching capabilities.

Read more

Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
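As a concrete, deliberately simplistic illustration of such a bolted-on module, the sketch below scores percepts with an affective ‘salience’ value and lets that score decide what gets processed first. Every name and weighting here is an assumption for illustration only.

```python
# A toy 'emotion module': appraisals are collapsed into one urgency score
# that reorders which percepts the agent processes first. All names and
# weightings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    threat: float   # 0..1, how dangerous the object looks
    novelty: float  # 0..1, how unfamiliar it is

def affective_salience(p: Percept) -> float:
    """Emotion module: collapse appraisals into a single urgency score."""
    return 0.7 * p.threat + 0.3 * p.novelty

percepts = [
    Percept("rock", threat=0.0, novelty=0.1),
    Percept("fast-approaching rover", threat=0.9, novelty=0.4),
    Percept("unusual mineral vein", threat=0.0, novelty=0.8),
]

# Perception/cognition consume percepts in emotion-weighted order.
for p in sorted(percepts, key=affective_salience, reverse=True):
    print(f"{p.label}: salience {affective_salience(p):.2f}")
```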


For more than two millennia, Western thinkers have separated emotion from cognition – emotion being the poorer sibling of the two. Cognition helps to explain the nature of space-time and sends humans to the Moon. Emotion might save the lioness in the savannah, but it also makes humans act irrationally with disconcerting frequency.

In the quest to create intelligent robots, designers tend to focus on purely rational, cognitive capacities. It’s tempting to disregard emotion entirely, or include only as much as necessary. But without emotion to help determine the personal significance of objects and actions, I doubt that true intelligence can exist – not the kind that beats human opponents at chess or the game of Go, but the sort of smarts that we humans recognise as such. Although we can refer to certain behaviours as either ‘emotional’ or ‘cognitive’, this is really a linguistic short-cut. The two can’t be teased apart.

What counts as sophisticated, intelligent behaviour in the first place? Consider a crew of robots on a mission to Mars. To act intelligently, the robots can’t just scuttle about taking pictures of the environment and collecting dirt and mineral samples. They’d need to be able to figure out how to reach a target destination, and come up with alternative tactics if the most direct path is blocked. If pressed for time, the team of robots would have to know which materials are more important and should be prioritised as part of the expedition.
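That kind of time-pressured prioritisation can be caricatured as a greedy value-per-minute budget. The sketch below is a toy; all task names, values and durations are invented.

```python
# A toy prioritisation under a time budget: pick sampling tasks greedily by
# scientific value per minute. Entirely illustrative numbers.
tasks = [
    ("drill core", 9.0, 40),        # (name, value, minutes)
    ("soil scoop", 3.0, 10),
    ("panorama", 2.0, 5),
    ("spectrometer scan", 6.0, 25),
]
budget = 60
plan, used = [], 0
for name, value, minutes in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
    if used + minutes <= budget:
        plan.append(name)
        used += minutes
print(plan, f"({used} min of {budget})")
```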

My article for the Cato Institute via Cato Unbound is out. Cato is one of the leading think tanks in the world, so I’m excited they are covering transhumanism:


Zoltan Istvan describes a complicated future when humans aren’t the only sapients around anymore. Citizenship for “Sophia” was a publicity stunt, but it won’t always be so. Istvan insists that if technology continues on the path it has traveled, then there is only one viable option ahead for humanity: We must merge with our creations and “go full cyborg.” If we do not, then machines may easily replace us.

Read more