Copilot is more than just a chatbot. Microsoft is gradually building an AI assistant that it has dreamed about for years.

Microsoft’s new AI-powered Copilot summarized my meeting instantly yesterday (the meeting was with Microsoft to discuss Copilot, of course), then listed the questions I’d asked just seconds earlier. I’ve watched Microsoft demo the future of work for years with concepts of virtual assistants, but Copilot is the closest thing I’ve ever seen to them coming true.


Microsoft is in an AI race with Google for the future of work.

Shares of billionaire Robin Li’s Baidu, which tumbled 6.4% on Thursday on disappointment over the launch of its ChatGPT-like service, surged almost 14% Friday as some analysts who tried Ernie Bot gave favourable reviews.

Hong Kong-listed Baidu rose HK$17.10 to close at HK$142.20.

Thursday’s market reaction stemmed from the fact that the highly anticipated launch consisted of a series of pre-recorded videos rather than any real-time demonstration.

Built on OpenAI’s generative AI technology and one of the largest datasets comprising trillions of data points, Copilot can write emails, business proposals and meeting minutes.

On Thursday, Microsoft announced a natural language-based AI tool called Copilot that will be embedded across its Office suite of applications, including Word, Teams, Excel, Outlook and PowerPoint. The tool is currently in testing and has been rolled out to 20 select enterprise customers, the company said.

Copilot combines large language models with Microsoft Graph, a dataset of human workplace activity that includes trillions of data points collected from the suite of Microsoft applications.
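The combination described above is essentially retrieval-grounded prompting: workplace data is retrieved and placed in the model's context before the user's request. The sketch below is a hypothetical illustration of that pattern, not Microsoft's actual implementation; the function and field names are made up.

```python
# Hypothetical sketch of grounding an LLM prompt in workplace data,
# in the spirit of Copilot + Microsoft Graph. Illustrative names only.

def build_grounded_prompt(user_request, graph_context):
    """Combine a user request with retrieved workplace context
    (e.g., recent emails, meeting notes) into a single prompt."""
    context_block = "\n".join(
        f"- [{item['source']}] {item['text']}" for item in graph_context
    )
    return (
        "You are a workplace assistant. Use only the context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Task: {user_request}"
    )

# Toy stand-ins for documents a system like Microsoft Graph might surface.
context = [
    {"source": "meeting notes", "text": "Q2 launch moved to May 12."},
    {"source": "email", "text": "Budget review scheduled for Friday."},
]
prompt = build_grounded_prompt("Summarize this week's updates.", context)
print(prompt)
```

The assembled prompt would then be sent to the language model, which answers from the supplied context rather than from its training data alone.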

The United Kingdom has announced plans to develop its own version of ChatGPT, dubbed “BritGPT,” as part of a new artificial intelligence (AI) strategy.

The strategy, which includes the construction of a supercomputer, will cost the UK £900 million ($1.2 billion).

“These investments will provide scientists with access to cutting-edge computing power and bring a significant uplift in computing capacity to the AI community,” reads the Spring Budget 2023 plan.


The Turing Test, developed in 1950, has become largely obsolete.

Chris Saad, the former head of product development at Uber, has designed a new framework for benchmarking the intelligence of AI systems, a field currently undergoing a sea change. The framework, based on the theory that intelligence is not a monolithic construct, was recently published on TechCrunch.

AI has been the trending topic for the past few months, ever since OpenAI made its conversational chatbot, ChatGPT, public. Users have tested the chatbot in areas ranging from writing poetry and code to sales pitches, and the bot hasn’t disappointed.

Is generative AI the beginning of the end for humans… or the end of the beginning?

And, did you know generative AI has been around since 1972?

In this TechFirst we chat with Ilke Demir, a research scientist at Intel working on ethical generative AI applications: a speech synthesis project that aims to enable people who have lost their voice to talk again; an open urban driving simulator built to support the development, training, and validation of autonomous driving systems; and a privacy-focused face generator that allows researchers to mix and match facial regions (the nose of person A, the mouth of person B, the eyes of person C, and so on) to create an entirely new face that does not exist in any dataset, so that people can request anonymization in public photos.
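The mix-and-match idea can be pictured as region-level compositing: each facial region of the output comes from a different source face. The toy sketch below illustrates only that compositing step; real systems blend regions with a generative model rather than copying pixel bands, and all names here are made up.

```python
# Toy sketch of region-level mixing behind privacy-preserving face synthesis:
# compose a new "face" by taking each facial region from a different source.
# Faces here are tiny 2-D grids split into horizontal region bands.

def mix_regions(faces, assignment, bands):
    """faces: person -> 2-D grid; assignment: region -> person;
    bands: region -> (row_start, row_end) within the grid."""
    height = len(next(iter(faces.values())))
    out = [None] * height
    for region, person in assignment.items():
        start, end = bands[region]
        # Copy the rows for this region from the assigned person.
        out[start:end] = [row[:] for row in faces[person][start:end]]
    return out

# Three placeholder "faces", filled with the person's numeric id.
faces = {
    "A": [[1] * 4 for _ in range(6)],
    "B": [[2] * 4 for _ in range(6)],
    "C": [[3] * 4 for _ in range(6)],
}
bands = {"eyes": (0, 2), "nose": (2, 4), "mouth": (4, 6)}
new_face = mix_regions(faces, {"eyes": "C", "nose": "A", "mouth": "B"}, bands)
print([row[0] for row in new_face])  # [3, 3, 1, 1, 2, 2]
```

Because every region comes from a different identity, the composite matches no single person in the dataset, which is what makes the approach useful for anonymization.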

8 years of cost reduction in 5 weeks: how Stanford’s Alpaca model changes everything, including the economics of OpenAI and GPT-4. The breakthrough, using self-instruct, has big implications for Apple’s secret large language model, Baidu’s Ernie Bot, Amazon’s attempts and even governmental efforts, like the newly announced BritGPT.

I will go through how Stanford put the model together, why it costs so little, and demonstrate it in action against ChatGPT and GPT-4. And what are the implications of short-circuiting human annotation like this? With analysis of a tweet by Eliezer Yudkowsky, I delve into the workings of the model and the questions it raises.
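The cost saving comes from the self-instruct recipe: instead of paying human annotators, a strong "teacher" model expands a small pool of seed instructions into a large instruction/response dataset used to fine-tune a cheaper model. The sketch below illustrates the generation loop only; `query_teacher` is a stand-in, not a real API, and the prompting format is simplified from the actual self-instruct pipeline.

```python
# Minimal sketch of the self-instruct idea behind Alpaca: a strong teacher
# model bootstraps training data from a handful of seed instructions.
# `query_teacher` is a placeholder for a call to a strong model's API.

import random

def query_teacher(prompt):
    # Stand-in for an API call to a strong model; returns canned text here.
    return f"(teacher response to: {prompt})"

def self_instruct(seed_instructions, n_new, rng=random.Random(0)):
    pool = list(seed_instructions)
    dataset = []
    for _ in range(n_new):
        # Show the teacher a few in-context examples, ask for a new task.
        examples = rng.sample(pool, k=min(3, len(pool)))
        new_instruction = query_teacher(
            "Write one new task similar to: " + "; ".join(examples)
        )
        # Ask the teacher to answer its own generated instruction.
        response = query_teacher(new_instruction)
        dataset.append({"instruction": new_instruction, "output": response})
        pool.append(new_instruction)  # grow the pool for diversity
    return dataset

data = self_instruct(
    ["Summarize a news article.", "Translate to French.",
     "Write a haiku about spring."],
    n_new=5,
)
print(len(data))  # 5 instruction/response pairs ready for fine-tuning
```

The resulting pairs are then used for supervised fine-tuning of a smaller open model, which is why the marginal cost is dominated by teacher API calls rather than human labeling.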

Web Demo: https://alpaca-ai0.ngrok.io/

Alpaca: https://crfm.stanford.edu/2023/03/13/alpaca.html

To effectively tackle everyday tasks, robots should be able to detect the properties and characteristics of objects in their surroundings, so that they can grasp and manipulate them accordingly. Humans achieve this naturally using their sense of touch, and roboticists have thus been trying to give robots similar tactile sensing capabilities.

A team of researchers at the University of Hong Kong recently developed a new soft tactile sensor that could allow robots to detect different properties of the objects they are grasping. The sensor, presented in a paper pre-published on arXiv, consists of two layers of woven optical fibers paired with a self-calibration algorithm.

“Although there exist many soft and conformable tactile sensors on robotic applications able to decouple the normal force and [shear force], the impact of the size of object in contact on the force calibration model has been commonly ignored,” Wentao Chen, Youcan Yan, and their colleagues wrote in their paper.
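Decoupling here means recovering separate normal and shear force components from raw sensor channels that each respond to a mixture of both. The toy example below is not the paper's method; it only illustrates the simple linear-calibration picture, with made-up sensitivity coefficients. The authors' point is that such a calibration matrix can itself depend on the size of the contacting object, which simple calibrations ignore.

```python
# Toy illustration of decoupling normal and shear force from two raw
# sensor channels via a fixed linear calibration matrix.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Assumed linear sensor model: each channel mixes normal and shear force.
# Coefficients are invented for illustration.
A = [[0.8, 0.3],   # channel 1 sensitivity to (normal, shear)
     [0.2, 0.9]]   # channel 2 sensitivity to (normal, shear)

# Simulate raw readings for a known applied force.
true_normal, true_shear = 2.0, 1.0
reading1 = A[0][0] * true_normal + A[0][1] * true_shear
reading2 = A[1][0] * true_normal + A[1][1] * true_shear

# Invert the calibration to recover the decoupled force components.
normal, shear = solve_2x2(A[0][0], A[0][1], A[1][0], A[1][1],
                          reading1, reading2)
print(round(normal, 6), round(shear, 6))  # recovers 2.0 and 1.0
```

In the scenario the paper describes, the matrix `A` would shift with contact-object size, so a single fixed calibration like this one would misestimate the forces; that is the gap their self-calibration algorithm targets.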