50K WordPress sites exposed to RCE attacks by critical bug in backup plugin

A critical severity vulnerability in a WordPress plugin with more than 90,000 installs can let attackers gain remote code execution to fully compromise vulnerable websites.

Known as Backup Migration, the plugin helps admins automate site backups to local storage or a Google Drive account.

The security bug (tracked as CVE-2023-6553 and rated with a 9.8/10 severity score) was discovered by a team of bug hunters known as Nex Team, who reported it to WordPress security firm Wordfence under a recently launched bug bounty program.

What To Know About Where ChatGPT Is Going In 2024

It’s hard to believe, but generative AI — the seemingly ubiquitous technology behind ChatGPT — was launched just one year ago, in late November 2022.


Still, as technologists discover more and more use cases for saving time and money, enterprises, schools, and businesses the world over are struggling to find the technology's rightful balance in the "real world."

As the year has progressed, the technology's rapid onset and proliferation have led not only to rapid innovation and competitive leapfrogging, but also to a continued wave of moral and ethical debates. They have even prompted early regulation and executive orders on the implementation of AI around the world, as well as global alliances, like the recent Meta + IBM AI Alliance, that aim to develop open frameworks and greater standards for implementing safe and economically sustainable AI.

Nevertheless, it has been a transformative year, with almost daily shifts in this exciting technology story. The following is a brief history of the year in generative AI, and what it means for us moving forward.

Phi-2: The surprising power of small language models

Microsoft Research releases Phi-2 and promptbase.

Phi-2 outperforms other existing small language models, yet it’s small enough to run on a laptop or mobile device.


Over the past few months, our Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called "Phi" that achieve remarkable performance on a variety of benchmarks. Our first model, the 1.3 billion parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks). We then extended our focus to common sense reasoning and language understanding and created a new 1.3 billion parameter model named Phi-1.5, with performance comparable to models 5x larger.

We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.

With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
