
When’s the last time you chirped, “Hey Google” (or Siri for that matter), and asked your phone for a recommendation for good sushi in the area, or perhaps asked what time sunset would be? Most folks these days perform these tasks on a regular basis on their phones, but you may not have realized there were multiple AI (Artificial Intelligence) engines involved in quickly delivering the results for your request.

In these examples, AI neural network models handled natural language recognition and then inferred what you were looking for, delivering relevant search results from internet databases around the globe while also targeting the most appropriate results based on your location and a number of other factors. These are just a couple of examples, but in short, AI or machine learning processing underpins much of the smartphone experience these days, from recommendation engines to translation, computational photography and more.

As such, benchmarking tools are becoming more prevalent in an effort to measure mobile platform performance. MLPerf is one such tool that nicely covers the gamut of AI workloads, and today Qualcomm is highlighting some fairly impressive results in a recent major update to the MLCommons database. MLCommons is an open consortium of chip manufacturers and OEMs, with founding members including Intel, NVIDIA, Arm, AMD, Google, Qualcomm and many others. The consortium’s MLPerf benchmark measures AI workloads like image classification, natural language processing and object detection. Today Qualcomm has tabulated benchmark results from its Snapdragon 888+ Mobile Platform (a slightly goosed-up version of its Snapdragon 888) against a myriad of competitive mobile chipsets from Samsung and MediaTek, and even Intel’s 11th Gen Core series laptop chips.

We are living in a time when we can see what needs to be done, but the industrial legacy of the last century still holds enormous power, politically and in the media, and has vast sums of money at its disposal. Its investors have too much to lose to walk away, so they throw good money after bad in a desperate attempt to save their stranded assets.

Well, the next decade will bring new technologies that rupture the business models of the old guard, tipping the balance on their huge economies of scale, quickly disintegrating their advantage before consigning them to history. These new ways of doing things will be better for us and the environment, and cheaper than ever before. Just look at how the internet and the smartphone destroyed everything from cameras to video shops to taxis and the high street itself.

The rest is not far behind and it all holds the opportunity to mend the damage we have done.



It might sound like science fiction, but we are approaching an era where everything will be fundamentally disrupted. From the energy that fuels our modern lifestyles to the food on our plates, from transportation to medicine to production, the changes the smartphone forced on everything it touched, from phones to video cameras to personal music players and information portals, are set to happen to everything else. And if you want to know more about how autonomous vehicles could change the world, check this out: https://youtu.be/uFRSf_vD-nw

AI startups can rake in investment by hiding how their systems are powered by humans. But such secrecy can be exploitative.

The nifty app CamFind has come a long way with its artificial intelligence. It uses image recognition to identify an object when you point your smartphone camera at it. But back in 2015 its algorithms were less advanced: The app mostly used contract workers in the Philippines to quickly type what they saw through a user’s phone camera, CamFind’s co-founder confirmed to me recently. You wouldn’t have guessed that from a press release it put out that year which touted industry-leading “deep learning technology,” but didn’t mention any human labelers.

The practice of hiding human input in AI systems remains an open secret among those who work in machine learning and AI. A 2019 analysis of tech startups in Europe by London-based MMC Ventures even found that 40% of purported AI startups showed no evidence of actually using artificial intelligence in their products.

Have you ever wondered how much water is needed to charge an iPhone? Probably not, because it takes electricity to charge a phone, not water. But if you had a hydraulic generator, you could generate some electricity using only your garden hose. That is precisely what is demonstrated in a video by the YouTube channel The Action Lab.

The owner of the channel, James Orgill, demonstrates the power output of his setup and shows how the voltage rises as he increases the water flow. The generator’s raw output is AC, so he connects a full-bridge rectifier to convert it to DC. By adjusting the flow, he caps the generated voltage at 12V to keep the iPhone from frying.

But if you ever decide to try this at home, you should probably add a voltage regulator, just to be safe. He then proceeds to charge his phone to figure out how much water it would take to fully charge it, and calculates that he would need 528 gallons (about 2,000 liters)! If you want to see the demonstration, watch the video above.
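The video’s 528-gallon figure comes from measuring the actual charge delivered. As a rough sanity check, the arithmetic can be sketched with back-of-the-envelope numbers. Every value below (battery capacity, charging efficiency, hose pressure, generator efficiency) is an illustrative assumption, not a figure from the video, yet the estimate lands in the same ballpark:

```python
# Back-of-the-envelope estimate of water needed to charge a phone
# with a small hydraulic generator. All constants are assumptions
# chosen for illustration, not measurements from the video.

BATTERY_WH = 12.4          # assumed iPhone battery capacity, watt-hours
CHARGE_EFFICIENCY = 0.8    # assumed charging efficiency
charge_energy_j = BATTERY_WH * 3600 / CHARGE_EFFICIENCY  # joules needed

# Hydraulic power is pressure drop times flow rate: P = dp * Q * eff.
# Energy extracted per litre of water is therefore dp * eff / 1000 J/L.
DP_PA = 300_000            # assumed ~3 bar garden-hose pressure drop
GEN_EFFICIENCY = 0.1       # assumed overall turbine + rectifier efficiency

energy_per_litre_j = DP_PA * GEN_EFFICIENCY / 1000
litres_needed = charge_energy_j / energy_per_litre_j
gallons_needed = litres_needed / 3.785   # litres per US gallon

print(f"~{litres_needed:.0f} L (~{gallons_needed:.0f} gal) to charge the phone")
```

With these assumed numbers the estimate comes out to roughly 1,900 liters (about 490 gallons), which is consistent in scale with the figure measured in the video; the exact answer depends heavily on the real pressure drop and generator efficiency.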

Not all who wander are lost – but sometimes their cell phone reception is. That might change soon if a plan to project basic cell phone coverage to all parts of the globe comes to fruition. Lynk has already proven it can use a typical smartphone to bounce a standard SMS text message off a low-Earth-orbit satellite, and they don’t plan to stop there.

Formerly known as Ubiquitilink, Lynk was founded a few years ago by Nanoracks founder Charles Miller and his partners, and came out of “stealth mode” as a start-up in 2019. In 2020 they used a satellite to send an SMS message from a typical smartphone, without requiring the fancy GPS locators and antennas needed by other, specially made satellite phones.

The company continued its success this week by demonstrating a “two-way” link using its newly launched fifth satellite, called “Shannon.” They’ve also proven the capability with multiple phones in numerous areas, including the UK, America, and the Bahamas.

As 5G is deployed over the next several years, engineers and policymakers must start thinking about 6G in the decade ahead. For example, the Center for Tech Diplomacy at Purdue is launching a “Roadmap to 6G” task force in October, with participation from Cisco, Dell, Ericsson, Intel, Nokia, Qualcomm, and other partners. We don’t know exactly how 6G will turn out, but we get to shape it today.

With 5G phones not yet delivering the speeds advertised (and it will take some time to get even close), some may wonder why we need to think about the next generation already.

But first, Facebook is going to have to navigate the territory of privacy: not just for those who might have photos taken of them, but for the wearers of these microphone- and camera-equipped glasses. VR headsets are one thing (and they come off your face after a session). Glasses you wear around every day are the start of Facebook’s much larger ambition to be an always-connected maker of wearables, and that’s a lot harder for most people to get comfortable with.

Walking down my quiet suburban street, I’m looking up at the sky. Recording the sky. Around my ears, I hear ABBA’s new song, “I Still Have Faith In You.” It’s a melancholic end to the summer. I’m taking my new Ray-Ban smart glasses for a walk.

The Ray-Ban Stories feel like a conservative start. They lack some features that have appeared in similar products already. The glasses, which act as earbud-free headphones, don’t have 3D spatial audio like the Bose Frames and Apple’s AirPods Pro do. The stereo cameras, on either side of the lenses, don’t work with AR effects, either. Facebook has a few sort-of-AR tricks in a brand-new companion app on your phone called View that pairs with these glasses, but they’re mostly ways of using depth data for a few quick social effects.