
Get your helmet on and be ready for the fallout from an emerging battle royale in AI. Here’s the deal. In one corner stands Microsoft with its business partner OpenAI and ChatGPT. Leering anxiously in the other corner is Google, which has announced that it will be making available a similar type of AI based on its long-standing in-house AI app known as LaMDA. “LaMDA” sounds kind of techie, a stark contrast to “ChatGPT” (which seems kind of light and airy). Google, perhaps realizing that a name embellishment was needed, has opted to put forth its variant of LaMDA anointed with a new name: “Bard”.

I’ll say more about Bard in a moment; hang in there.


Google has announced it will be releasing a generative AI app called Bard, based on its LaMDA AI app. Microsoft is going to incorporate OpenAI’s ChatGPT into Bing. The AI wars are getting underway in earnest. Here’s the scoop.

In case you haven’t heard, artificial intelligence is the hot new thing. Generative AI seems to be on the lips of every venture capitalist, entrepreneur, Fortune 500 CEO and journalist these days, from Silicon Valley to Davos.

To those who started paying real attention to AI in 2022, it may seem that technologies like ChatGPT and Stable Diffusion came out of nowhere to take the world by storm. They didn’t.


Since at least the release of GPT-2 in 2019, it has been clear to those working in the field that generative language models were poised to unleash vast economic and societal transformation. Similarly, while text-to-image models only captured the public’s attention last summer, the technology’s ascendance has appeared inevitable since OpenAI released the original DALL-E in January 2021. (We wrote an article making this argument days after the release of the original DALL-E.)

By this same token, it is important to remember that the current state of the art in AI is far from an end state for AI’s capabilities. On the contrary, the frontiers of artificial intelligence have never advanced more rapidly than they are right now. As amazing as ChatGPT seems to us at the moment, it is a mere stepping stone to what comes next.

What will the next generation of large language models (LLMs) look like? The answer to this question is already out there, under development at AI startups and research groups at this very moment.

In every downturn, we tend to measure the pain by counting layoffs. (Dell is the latest, announcing it will cut 6,650 jobs or 5% of its workforce.) According to Layoffs.fyi, a smart if incomplete tracker of job cuts, tech companies laid off almost 95,000 workers in the first five weeks of this year, which is already about 60% of the layoffs it reported for all of 2022.

While job cuts are normal, there’s something different about this economic dip. To start, as Jena McGregor reports, the advent of remote work has cemented the digital pink slip.


We may measure the pain by counting layoffs but how we hire determines the recovery — and generative AI like ChatGPT changes the game.

In the ever-evolving landscape of test preparation, a new player has emerged on the scene: artificial intelligence.

At the forefront of this movement is a Korean start-up, Riiid, founded by YJ Jang, a graduate of the Haas School of Business at the University of California, Berkeley. Riiid has already made a name for itself in the Asian test-prep market for the TOEIC, a measure of English proficiency in the business world. Now, the company has set its sights on the American market with an SAT and ACT prep system called R.Test.

A.I. technology, with its mimicry of the networks of neurons in the human brain, has the potential to revolutionize the way educators approach their craft.

Though Bill Gates recently declared the current moment in technology as important as the advent of the PC, storied Silicon Valley firm Benchmark is taking a relatively cautious view of AI’s gold rush for now. A “big majority” of startups pitching to partner Chetan Puttagunta claim to be working on machine learning, he told Forbes.


Benchmark is leading the investment with a bet that MindsDB can replace the need to hire hundreds of machine learning engineers.

If your work involves analyzing and reporting on data, then it’s understandable that you might feel a bit concerned by the rapid advances being made by artificial intelligence (AI), in particular by the viral ChatGPT.


AI, particularly ChatGPT, has raised job-security concerns among data analysts. Here we look at the potential impact and discuss how, despite limitations such as frequent mistakes and limited data-upload capabilities, ChatGPT could automate data-gathering and analysis tasks in the future.

After six years of peace, the two tech giants are on course to butt heads again over the future of artificial intelligence.

Microsoft is about to go head-to-head with Google in a battle for the future of search. At a press event later today, Microsoft is widely expected to detail plans to bring OpenAI’s ChatGPT chatbot to its Bing search engine. Google has already tried to preempt the news, making a rushed announcement yesterday to introduce Bard, its rival to ChatGPT, and promising more details on its AI future in a press event on Wednesday.



Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair. “There’s loads of law out there,” she says, “and it’s just a matter of applying it or tweaking it very slightly.”

At the same time, there is a push for LLM use to be transparently disclosed. Scholarly publishers (including the publisher of Nature) have said that scientists should disclose the use of LLMs in research papers (see also Nature 613, 612; 2023); and teachers have said they expect similar behaviour from their students. The journal Science has gone further, saying that no text generated by ChatGPT or any other AI tool can be used in a paper [5].

One key technical question is whether AI-generated content can be spotted easily. Many researchers are working on this, with the central idea being to use LLMs themselves to spot AI-generated text.
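To give a flavour of why language models can help spot machine-written text, here is a deliberately tiny sketch. It is not any researcher’s actual detector: it stands in for a large LLM with a character-bigram model and uses one common signal, how predictable (low-perplexity) a passage is under the model. The corpus and test strings are invented for illustration.

```python
import math
from collections import Counter

def bigram_model(corpus):
    """Train a character-bigram model with add-one smoothing
    and return a log-probability scorer for new text."""
    pairs = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus[:-1])
    vocab = len(set(corpus))
    def logprob(text):
        lp = 0.0
        for a, b in zip(text, text[1:]):
            lp += math.log((pairs[(a, b)] + 1) / (unigrams[a] + vocab))
        return lp
    return logprob

def perplexity(logprob, text):
    """Per-bigram perplexity: low means the model finds the text predictable."""
    return math.exp(-logprob(text) / max(len(text) - 1, 1))

# Toy stand-in for "model-generated" training text vs. out-of-distribution text.
corpus = "the cat sat on the mat and the cat ate the rat " * 20
lp = bigram_model(corpus)
fluent = "the cat sat on the mat"
gibberish = "xq zvj kpw qzx mvb trj"
print(perplexity(lp, fluent), perplexity(lp, gibberish))
```

Real detectors apply the same intuition with far stronger models: text that a large LLM scores as unusually predictable is more likely to have been produced by a similar model.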

Diffusion models have recently produced outstanding results on various generative tasks, including the creation of images, 3D point clouds, and molecular conformers. Itô stochastic differential equations (SDEs) provide a unified framework that incorporates these models: the models learn time-dependent score fields through score matching, which then guide the reverse SDE during generative sampling. Variance-exploding (VE) and variance-preserving (VP) SDEs are common diffusion formulations, and EDM, which builds on them, offers the best performance to date. Despite these outstanding empirical results, the training method for diffusion models can still be improved.
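The VE/VP distinction can be made concrete with a toy Euler–Maruyama simulation of the two forward processes. This is a sketch under assumed constant coefficients (`beta`, `sigma_rate` are arbitrary choices, not the schedules used in practice): the VP process keeps the sample variance near 1, while the VE process lets it grow with time.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 10_000, 1_000, 1e-3  # 10k particles, total time T = 1

# Start both processes from unit-variance data.
x_vp = rng.normal(0.0, 1.0, n)
x_ve = rng.normal(0.0, 1.0, n)

beta = 1.0          # VP drift/diffusion rate (assumed constant here)
sigma_rate = 5.0    # VE diffusion strength (arbitrary illustrative value)

for _ in range(steps):
    dw_vp = rng.normal(0.0, np.sqrt(dt), n)
    dw_ve = rng.normal(0.0, np.sqrt(dt), n)
    # VP SDE: dx = -(1/2) beta x dt + sqrt(beta) dW  -> variance stays ~1
    x_vp += -0.5 * beta * x_vp * dt + np.sqrt(beta) * dw_vp
    # VE SDE: dx = sigma_rate dW                     -> variance grows linearly
    x_ve += sigma_rate * dw_ve

print(x_vp.var(), x_ve.var())
```

After the simulation the VP variance remains close to 1 while the VE variance has grown to roughly 1 + sigma_rate² · T, which is the “exploding” behaviour the name refers to.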

The Stable Target Field (STF) objective is a generalized version of the denoising score-matching (DSM) objective. In particular, the high variance of the DSM objective’s training targets can result in subpar performance. The authors divide the score field into three regimes to better understand the cause of this variance. According to their investigation, the phenomenon mostly occurs in the intermediate regime, where multiple modes or data points have a similar influence on the scores; in other words, in this regime it is ambiguous which data point a given noisy sample produced during the forward process originated from. Figure 1(a) illustrates the differences between the DSM objective and the proposed STF objective.

Figure 1: Examples contrasting the DSM objective and the proposed STF objective.
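The variance problem described above can be seen in a tiny 1-D sketch. This is not the paper’s implementation: the two-mode dataset, the noise level, and the use of the whole dataset as the reference batch are illustrative assumptions. The DSM target points back at whichever single clean sample happened to generate the noisy point, so at a point between two modes it flips sign from batch to batch; an STF-style target averages over many candidate origins, weighted by how likely each is to have produced the noisy sample, and is far more stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset with two modes at -2 and +2 (illustrative, not from the paper).
data = np.concatenate([rng.normal(-2.0, 0.1, 500), rng.normal(2.0, 0.1, 500)])
sigma = 1.0  # noise level placing x_t in the ambiguous intermediate regime

def dsm_target(x0, xt, sigma):
    """DSM target: score of the Gaussian kernel centred at the single
    clean sample x0 that generated the noisy sample xt."""
    return (x0 - xt) / sigma**2

def stf_target(xt, ref_batch, sigma):
    """STF-style target: self-normalised average of per-sample targets
    over a reference batch, weighted by each point's likelihood of
    having generated xt under the forward noising kernel."""
    logw = -((xt - ref_batch) ** 2) / (2 * sigma**2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return np.sum(w * (ref_batch - xt)) / sigma**2

# Midpoint between the modes: DSM targets disagree wildly depending on
# which clean sample was drawn, while the STF target nearly cancels out.
xt = 0.0
t_left = dsm_target(-2.0, xt, sigma)   # clean sample from the left mode
t_right = dsm_target(2.0, xt, sigma)   # clean sample from the right mode
t_stf = stf_target(xt, data, sigma)
print(t_left, t_right, t_stf)
```

The two DSM targets at the midpoint are equal and opposite, so their average over training is near zero but their variance is large; the STF target lands near that average directly, which is the stabilisation the objective is named for.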