
It’s hard to believe, but generative AI — the seemingly ubiquitous technology behind ChatGPT — was launched just one year ago, in late November 2022.


Still, even as technologists discover more and more use cases for saving time and money, enterprises, schools, and businesses the world over are struggling to find the technology’s rightful place in the “real world.”

As the year progressed, the technology’s rapid onset and proliferation led not only to rapid innovation and competitive leapfrogging, but also to a continuing wave of moral and ethical debate. It has even prompted early regulation and executive orders on the implementation of AI around the world, along with global alliances, such as the recent Meta and IBM AI Alliance, formed to develop open frameworks and stronger standards for safe and economically sustainable AI.

Nevertheless, it has been a transformative year, with almost daily shifts in this exciting technology story. The following is a brief history of the year in generative AI, and what it means for us moving forward.

Fusion-powered engines might drastically reduce travel time to the Moon and Mars.


California-based startup Helicity Space has raised $5 million in a recent seed funding round.

Prominent space-focused investors Airbus Ventures, TRE Ventures, Voyager Space Holdings, E2MC Space, Urania Ventures, and Gaingels have all backed Helicity, according to a press release.

The startup focuses on developing nuclear fusion propulsion technology for deep space missions. Unlike traditional chemical propulsion systems, fusion propulsion offers the potential for significantly higher energy efficiency and speed.
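To see why, consider the Tsiolkovsky rocket equation: delta-v scales linearly with exhaust velocity, so a fusion drive with a far higher specific impulse extracts dramatically more velocity change from the same propellant fraction. The sketch below runs the numbers with illustrative values; the specific-impulse figures are assumptions chosen for comparison, not Helicity’s published specs.

```python
# Back-of-the-envelope comparison of chemical vs. fusion propulsion using
# the Tsiolkovsky rocket equation. All figures are illustrative assumptions.
import math

G0 = 9.81  # standard gravity, m/s^2


def delta_v(isp_seconds: float, mass_ratio: float) -> float:
    """Delta-v in m/s for a given specific impulse and wet-to-dry mass ratio."""
    return isp_seconds * G0 * math.log(mass_ratio)


mass_ratio = 4.0                        # assumed wet-to-dry mass ratio, same for both
chemical = delta_v(450, mass_ratio)     # roughly hydrolox upper-stage territory
fusion = delta_v(20_000, mass_ratio)    # hypothetical fusion-drive specific impulse

print(f"Chemical: {chemical / 1000:.1f} km/s")  # ~6.1 km/s
print(f"Fusion:   {fusion / 1000:.1f} km/s")    # ~272 km/s
```

Even with generous rounding, a gap of that size is what makes drastically shorter Moon and Mars transit times plausible.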

Microsoft Research releases Phi-2 and promptbase.

Phi-2 outperforms other existing small language models, yet it’s small enough to run on a laptop or mobile device.


Over the past few months, our Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called “Phi” that achieve remarkable performance on a variety of benchmarks. Our first model, the 1.3 billion-parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks). We then extended our focus to common sense reasoning and language understanding and created a new 1.3 billion-parameter model named Phi-1.5, with performance comparable to models 5x larger.

We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with fewer than 13 billion parameters. On complex benchmarks, Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.

With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
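For readers who want to test the “small enough to run on a laptop” claim, the sketch below shows one way to load and prompt Phi-2 with the Hugging Face transformers library. The “microsoft/phi-2” hub ID, the prompt format, and the generation settings here are illustrative assumptions rather than an official Microsoft quickstart.

```python
# A minimal sketch of running Phi-2 locally via Hugging Face transformers.
# Model ID, prompt style, and settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # full precision for CPU; use float16 on a GPU
)

prompt = "Instruct: Write a Python function that checks whether a number is prime.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At 2.7 billion parameters in full precision, the weights occupy roughly 11 GB of memory, which is why the model fits on a well-equipped laptop where 13 billion-parameter and larger models typically do not.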