
I'm still not sold on robotics at this point: most robots aren't yet at the multi-functional capability level they need to reach, they're still too jerky, and most are more like a CPU on wheels.


TORONTO, ON (Marketwired, February 23, 2016). Astro Boy may be a fictional character, but Pepper the Robot is its real-world incarnation. Pepper, the world's first humanoid robot, will join exhibitors like MasterCard, Fluid, Vizera and Eyris as they interact with industry experts as part of The Retail Collective Lab, sponsored by MasterCard, at this year's Dx3 Trade Show and Conference.

Read more

Sometimes, it seems like the tech world is inexorably bending towards a future full of curved devices. At MWC in Barcelona, we saw yet another prototype display, this time from English firm FlexEnable. Now, this isn’t a working device of any kind — it’s essentially just a screen running a demo — and neither is FlexEnable a consumer electronics company. But the firm says its technology is ready to go, and it’s apparently in talks with unnamed hardware partners who want to make this sort of device a reality. How long until we see fully-fledged wristbands like this on the market? Eighteen months is the optimistic guess from FlexEnable’s Paul Cain.

The prototype uses plastic transistors to achieve its flexibility, creating what the company calls OLCD (organic liquid crystal display) screens. FlexEnable says these can achieve the same resolutions as regular LCD using the same amount of power, but, of course, they have that added flexibility. These transistors can be wrapped around pretty much anything, and also have uses outside of display technology. FlexEnable was also showing off thin flexible fingerprint sensors, suggesting they could be wrapped around a door handle to add security without it being inconvenient to the user.

The prototype we saw at MWC was encased in a stiff metal frame, like a lot of flexible displays, and although OLCD can flex a little, it's not the sort of material you can endlessly bend and crease. That, says Cain, will have to wait for flexible OLED displays, a technology that is going to need more development. Still, we are seeing truly flexible OLED prototypes popping up here and there, such as this device from Queen's University, which lets you flex a screen to flick through the pages of a digital book. The future bends ever closer.

Read more

I can't wait to hear McKenna's perspective on BMIs for connecting the brain to all things digital, on microbots used to extend life, and on bionic body parts.


Famed psychonaut Terence McKenna envisioned a radical approach: bridging psychedelics with virtual reality to create a supercharged version of consciousness in which language, or rather the meaning behind what we speak, could be made visual before our very eyes.

In McKenna's "cyberdelic" future of virtual reality, artists and a revival of art would be at the forefront of innovation, according to a talk he gave to a German audience in 1991.

Does this not still ring true 25 years on? Are not computer programmers and virtual cartographers the modern artists who are pushing the boundaries of our understanding of reality and ourselves?

Read more

The job advertisement was highly specific: applicants had to be passionate about computer games and live in the UK. Oh, and they also had to be amputees who were interested in wearing a futuristic prosthetic limb.

James Young knew straight away he had a better shot than most. After losing an arm and a leg in a rail accident in 2012, the 25-year-old Londoner had taught himself to use a video-game controller with one hand and his teeth. “How many amputee gamers can there be?” he asked himself.

In the end, more than 60 people replied to the ad, which was looking for a games-mad amputee to become the recipient of a bespoke high-tech prosthetic arm inspired by Metal Gear Solid, one of the world’s best-selling computer games. Designed and built by a team of 10 experts led by London-based prosthetic sculptor Sophie de Oliveira Barata, the £60,000 carbon-fibre limb is part art project, part engineering marvel.

Read more

Whether in the brain or in code, neural networks are shaping up to be one of the most critical areas of research in both neuroscience and computer science. An increasing amount of attention, funding, and development has been pushed toward technologies that mimic the brain in both hardware and software to create more efficient, high performance systems capable of advanced, fast learning.

One aspect of all the efforts toward more scalable, efficient, and practical neural networks and deep learning frameworks that we have been tracking here at The Next Platform is how such systems might be implemented in research and enterprise over the next ten years. Based on the conversations that make their way into various pieces here, one of the missing elements for such eventual end users is a reduction in the complexity of the training process for neural networks, so they become practically useful without all of the computational overhead and specialized systems that training requires now. Crucial, then, is a whittling down of how neural networks are trained and implemented. Not surprisingly, the key answers lie in the brain, and specifically in how the brain "trains" its own network, functions that are still not completely understood, even by top neuroscientists.
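To make concrete why training is the expensive part, here is a minimal, hypothetical sketch (not from the article) of the standard gradient-descent training loop on a tiny two-layer network; the network, data, and hyperparameters are all illustrative. Real deep learning systems repeat exactly this forward/backward/update cycle over millions of parameters and examples, which is where the computational overhead and need for specialized hardware come from.

```python
# Illustrative sketch only: a tiny two-layer network trained by gradient
# descent on the XOR problem. All names and values here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)               # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)               # output layer
lr = 0.5                                                      # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute predictions and loss
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)

    # Backward pass: hand-derived gradients for this tiny architecture
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # Parameter update: nudge every weight against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
print("predictions:", p.round(2).ravel())
```

Even this toy example needs thousands of passes over the data to learn a four-example problem; scaling the same loop to modern networks is what drives the training cost the article describes.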

In many senses, neural networks, cognitive hardware and software, and advances in new chip architectures are shaping up to be the next important platform. But there are still some fundamental gaps in knowledge about our own brains versus what has been developed in software to mimic them that are holding research at bay. Accordingly, the Intelligence Advanced Research Projects Activity (IARPA) in the U.S. is getting behind an effort spearheaded by Tai Sing Lee, a computer science professor at Carnegie Mellon University's Center for the Neural Basis of Cognition, and researchers at Johns Hopkins University, among others, to make new connections between the brain's neural function and how those same processes might map to neural networks and other computational frameworks. The project is called Machine Intelligence from Cortical Networks (MICRONS).

Read more