When’s the last time you chirped, “Hey Google” (or “Hey Siri,” for that matter), and asked your phone for a good sushi recommendation in the area, or perhaps what time sunset would be? Most folks perform these tasks on their phones regularly, but you may not have realized there were multiple AI (Artificial Intelligence) engines involved in quickly delivering the results for your request.
In these examples, AI neural network models processed your speech with natural language recognition, then inferred what you were looking for in order to deliver relevant search results from databases around the globe, targeting the most appropriate results based on your location and a number of other factors as well. These are just a couple of examples, but in short, AI and machine learning processing is a core part of the smartphone experience these days, from recommendation engines to translation, computational photography and more.
As such, benchmarking tools that measure mobile AI performance are becoming more prevalent. MLPerf is one such tool that nicely covers the gamut of AI workloads, and today Qualcomm is highlighting some fairly impressive results in a recent major update to the MLCommons database. MLCommons is an open consortium of various chip manufacturers and OEMs, with founding members including Intel, NVIDIA, Arm, AMD, Google, Qualcomm and many others. The consortium’s MLPerf benchmark measures AI workloads like image classification, natural language processing and object detection. Today Qualcomm has tabulated benchmark results from its Snapdragon 888+ Mobile Platform (a slightly goosed-up version of its Snapdragon 888) against a myriad of competitive mobile chipsets from Samsung and MediaTek, and even Intel’s 11th Gen Core series laptop chips.
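To give a rough idea of what an AI inference benchmark like this is actually measuring, the sketch below times repeated image-classification inferences on a small neural network and reports average latency and throughput. This is a simplified illustration only, not MLPerf itself or Qualcomm's test setup; the MobileNetV2 model, input size, and run counts are assumptions chosen for the example.

```python
# Simplified sketch of an image-classification inference benchmark
# (illustrative only; NOT the MLPerf suite or Qualcomm's methodology).
import time
import torch
import torchvision.models as models

# A lightweight model of the sort typically used for mobile AI benchmarks
model = models.mobilenet_v2(weights=None)
model.eval()

# One 224x224 RGB image, the standard ImageNet-style input shape
dummy_input = torch.randn(1, 3, 224, 224)

# Warm up so one-time setup costs don't skew the measurement
with torch.no_grad():
    for _ in range(10):
        model(dummy_input)

# Time a fixed number of inferences and report the averages
runs = 100
start = time.perf_counter()
with torch.no_grad():
    for _ in range(runs):
        model(dummy_input)
elapsed = time.perf_counter() - start

print(f"Average latency: {elapsed / runs * 1000:.2f} ms "
      f"({runs / elapsed:.1f} inferences/sec)")
```

Real mobile AI benchmarks run workloads like this across a device's CPU, GPU, and dedicated AI accelerators, which is where platforms such as the Snapdragon 888+ aim to differentiate themselves.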