
After six years of peace, the two tech giants are on course to butt heads again over the future of artificial intelligence.

Microsoft is about to go head-to-head with Google in a battle for the future of search. At a press event later today, Microsoft is widely expected to detail plans to bring OpenAI’s ChatGPT chatbot to its Bing search engine. Google has already tried to preempt the news, making a rushed announcement yesterday to introduce Bard, its rival to ChatGPT, and promising more details on its AI future in a press event on Wednesday.

The announcements put the two tech behemoths, known for their previous skirmishes with each other, on a collision course as they compete to define the next generation of search.

Both companies are chasing a revolutionary new future for search engines: one where the results look more like short, simple answers generated by AI than a collection of links and boxes to click on. Google teased its Bard chatbot yesterday, demonstrating queries answered in a style similar to OpenAI’s ChatGPT. And today, Microsoft is expected to boost its Bing search ambitions with the addition of a ChatGPT-like interface that will answer questions in a way no search engine has before.

The more humanlike answers could be revolutionary for search. ChatGPT, built by AI company OpenAI, brought conversational AI to the mainstream last year, and if the Bing integration works as intended, it could genuinely shave hours off research, spreadsheet work, coding, and much more.

If a leak last week is accurate, Microsoft might not only be close to demonstrating ChatGPT inside Bing but also close to making it available publicly for Bing users to test. It’s an ambitious move that, if executed well, could put some serious pressure on Google after years of search dominance. While Microsoft’s rapid commercialization of OpenAI models will unnerve bitter rival Google, just how powerful Bing’s chat functionality is will be top of mind. Despite Google flexing its AI muscles for years, nothing has wowed the web quite like ChatGPT — even if it’s not perfect.

Microsoft might have an edge on ChatGPT as we know it today. While ChatGPT is based on GPT-3.5, a large language model released last year, Bing’s chat functionality is rumored to be based on the as-yet-unannounced GPT-4 model. The AI community continues to collectively speculate about exactly how powerful GPT-4 will be, with several entertaining guesses over the model’s number of parameters that have turned into memes.

Amazon is engaging with Times Internet to explore the acquisition of MX Player, one of the largest on-demand video streaming services in India, according to four sources familiar with the matter, as the American e-commerce group eyes expanding its entertainment ambitions in the key overseas market.

The deliberations are ongoing and may not materialize into a deal, three of the sources cautioned. The terms of the deal are yet to be finalized. Times Internet and Amazon did not immediately respond to a request for comment. The sources requested anonymity because they were discussing private matters.

At least two more players — including Zee-Sony — have expressed interest in acquiring the Times Internet-owned app, two sources said.

A team of researchers has come up with a machine learning-assisted way to detect the position of shapes, including the poses of humans, to an astonishing degree, using only WiFi signals.

In a yet-to-be-peer-reviewed paper, first spotted by Vice, researchers at Carnegie Mellon University came up with a deep learning method of mapping the position of multiple human subjects by analyzing the phase and amplitude of WiFi signals, and processing them using computer vision algorithms.

“The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input,” the team concluded in their paper.
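To make the pipeline concrete, here is a minimal, hypothetical sketch of the general approach described above: WiFi channel readings (amplitude and phase) go in, cleaned features come out, and a learned model maps them to human poses. The function names (`sanitize_csi`, `estimate_poses`) and the exact preprocessing steps are illustrative assumptions, not the researchers’ code.

```python
import numpy as np

def sanitize_csi(amplitude: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Clean raw WiFi channel state information (CSI) before pose estimation.

    amplitude, phase: arrays of shape (antennas, subcarriers, time_samples).
    Returns one feature tensor stacking normalized amplitude and unwrapped
    phase. This mirrors the generic preprocessing used in WiFi-sensing work;
    the paper's exact steps may differ.
    """
    # Phase wraps around at +/- pi; unwrap along the subcarrier axis so the
    # model sees smooth, physically meaningful phase curves.
    phase_unwrapped = np.unwrap(phase, axis=1)

    # Normalize amplitude per antenna to reduce hardware gain differences.
    amp_norm = (amplitude - amplitude.mean(axis=(1, 2), keepdims=True)) / (
        amplitude.std(axis=(1, 2), keepdims=True) + 1e-8
    )
    return np.stack([amp_norm, phase_unwrapped], axis=0)


def estimate_poses(features: np.ndarray) -> np.ndarray:
    """Placeholder for the learned WiFi-to-pose model.

    A real system would feed `features` to a trained neural network that
    outputs dense body-surface coordinates for each detected person; here we
    just return a dummy array shaped (num_people, num_keypoints, 2).
    """
    return np.zeros((1, 17, 2))


if __name__ == "__main__":
    # Fake CSI capture: 3 antennas, 30 subcarriers, 100 time samples.
    rng = np.random.default_rng(0)
    amp = rng.random((3, 30, 100))
    ph = rng.uniform(-np.pi, np.pi, size=(3, 30, 100))

    poses = estimate_poses(sanitize_csi(amp, ph))
    print(poses.shape)  # (1, 17, 2)
```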

Game on!


Google PR:

Introducing Bard.
It’s a really exciting time to be working on these technologies as we translate deep research and breakthroughs into products that truly help people. That’s the journey we’ve been on with large language models. Two years ago we unveiled next-generation language and conversation capabilities powered by our Language Model for Dialogue Applications (or LaMDA for short).

We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.

In my chat with Serene, an internet freedom activist and former Google Ideas engineer, I ask: “Am I allowed to speak with you right now? Legally?”

“We’re both in the U.S., so yes, I think we’re good,” she answers.

As one of the few tools for accessing blocked and censored information on the web, Serene’s Snowflake is widely used by citizens of oppressive regimes. It works primarily with Tor, the open-source browser that enables secure, private, and anonymous internet browsing.


She is now unveiling Snowstorm, an upgraded version of Snowflake that Serene claims will be faster, more generalized, and more fully featured. Snowstorm is fast enough to stream YouTube videos, something previous versions could not do.

The software has been rewritten and reimagined in Rust as a system-wide client, and it is not Tor-based. As a result, users will have more choice and agency.

Furthermore, Snowstorm is now backed by its own company, which will maintain the code and support a full-time team of core developers.

The ERNIE bot could be China’s most notable entry in the race to create lifelike AI bots.

Baidu, one of the world’s biggest artificial intelligence and internet firms, saw its shares skyrocket more than 14 percent on Tuesday after the Beijing-based search engine titan announced it would launch its own ChatGPT-style service.

What is ERNIE?


The company’s “ERNIE Bot” AI chatbot, known as “Wenxin Yiyan” in Chinese, will complete internal testing and launch in March. Much excitement has built around what may be China’s most notable entry in the race to create lifelike AI bots.

Google announced today that it will enable a new SafeSearch blurring setting by default for all users in the coming months. The filter is designed to help people protect themselves and their families from inadvertently encountering explicit imagery on Search. The search giant says it’s announcing the feature today to mark Safer Internet Day.

The setting will soon be the new default for people who don’t already have the SafeSearch filter turned on. As a result, Google will blur explicit imagery if it appears in Search results. Explicit results include sexually explicit content like pornography, as well as violence and gore. Google notes that users have the option to adjust the setting at any time. Prior to this expansion, the filter was already on by default for signed-in users under 18.

Once the setting becomes the default, Google will notify you that it has turned on SafeSearch blurring. If you come across an explicit image, you can choose to see it by clicking on the “view image” button. Or, you can select the “manage setting” button to adjust the filter or turn it off altogether. For instance, you can choose the “filter” option, which helps filter explicit images, text and links. Or, you can select the “off” option, which means that you will see all of the relevant results for your query, even if they’re explicit.

A 3D mesh is a three-dimensional object representation made up of vertices and polygons. These representations can be very useful for numerous technological applications, including computer vision, virtual reality (VR) and augmented reality (AR) systems.
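For readers unfamiliar with the format, a mesh boils down to a list of 3D vertex positions plus a list of polygons (typically triangles) that index into it. The snippet below is a generic, minimal illustration of that data structure, not the researchers’ own representation; the `face_normals` helper is an assumption added to show a typical operation on such a mesh.

```python
import numpy as np

# A tiny triangle mesh (a tetrahedron): vertex positions and the triangles
# that index into them.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])

def face_normals(v: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Unit normal of each triangle, a quantity most mesh pipelines need."""
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

print(face_normals(vertices, faces))
```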

Researchers at Florida State University and Rutgers University have recently developed Wi-Mesh, a system that can create reliable 3D human meshes, representations of humans that can then be used by different computational models and applications. Their system was presented at the Twentieth ACM Conference on Embedded Networked Sensor Systems (ACM SenSys ‘22), a conference focusing on computer science research.

“Our research group specializes in cutting-edge wi-fi sensing research,” Professor Jie Yang at Florida State University, one of the researchers who carried out the study, told Tech Xplore. “In previous work, we have developed systems that use wi-fi to sense a range of human activities and objects, including large-scale human body movements, small-scale finger movements, sleep monitoring, and daily objects. Our E-eyes and WiFinger systems were among the first to use wi-fi sensing to classify various types of daily activities and finger gestures, with a focus on predefined activities using a trained model.”

With ChatGPT becoming the fastest-growing app in the history of the web, it is no wonder that Sundar Pichai, CEO of Google, feels the need to enter with a challenger.

Google plans to release its latest and most powerful language model, LaMDA, as a companion to its search engine within weeks or months. It will be interesting to see how the trajectories of the two emerging chat titans compare.

On Google’s fourth-quarter earnings call this week, Pichai said, “AI is the most profound technology we are working on today.” It is a precursor to an announcement that will come shortly, driven by the momentum of OpenAI’s GPT-3.