
DeepSeek-R1 on Azure Container Apps Serverless GPUs

The world of AI is evolving at a breakneck pace, with new models appearing constantly. Amid such rapid innovation, it is essential to be able to adapt applications to the latest models quickly. This is where Azure Container Apps serverless GPUs come in.

Azure Container Apps is a managed serverless container platform that enables you to deploy and run containerized applications while reducing infrastructure management and saving costs.

With serverless GPU support, you get the flexibility to bring any containerized workload, including new language models, and deploy it to a platform that automatically scales with customer demand. You also get optimized cold starts, per-second billing, and reduced operational overhead, so you can focus on the core components of your application when using GPUs. All the while, your AI applications run alongside your non-AI apps on the same platform, within the same environment, sharing networking, observability, and security capabilities.
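To make this concrete, here is a minimal sketch of calling a DeepSeek-R1 container once it is deployed. It assumes the image serves the standard Ollama REST API (/api/generate); the app URL and model tag below are placeholders, and your deployment may expose a different serving interface.

```python
# Minimal sketch: query a DeepSeek-R1 container running on Azure Container
# Apps serverless GPUs. Assumes the image serves the standard Ollama REST API
# (/api/generate); the app URL and model tag are placeholders.
import requests

APP_URL = "https://<your-container-app>.azurecontainerapps.io"  # placeholder

def ask(prompt: str) -> str:
    """Send a prompt to the deployed model and return the generated text."""
    resp = requests.post(
        f"{APP_URL}/api/generate",
        json={"model": "deepseek-r1", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Explain serverless GPUs in one sentence."))
```

Because the platform can scale to zero, the first request after an idle period may incur a cold start; per-second billing means you pay only while a replica is actually running.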

OpenAI says it has proof DeepSeek used its technology to develop its AI model

OpenAI, the company behind ChatGPT, says it has proof that the Chinese start-up DeepSeek used its technology to create a competing artificial intelligence model — fueling concerns about intellectual property theft in the fast-growing industry.

OpenAI believes DeepSeek, which was founded by math whiz Liang Wenfeng, used a process called “distillation,” which helps make smaller AI models perform better by learning from larger ones.

While this is common in AI development, OpenAI says DeepSeek may have broken its rules by using the technique to create its own AI system.
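Distillation itself is straightforward to express in code. The sketch below shows the textbook formulation in PyTorch, where a small "student" model is trained to match a larger teacher's softened output distribution; the logits and shapes are illustrative stand-ins, not any lab's actual setup.

```python
# Minimal sketch of knowledge distillation: a small "student" model is trained
# to match the softened output distribution of a larger, frozen "teacher".
# Logits here are random stand-ins for real model outputs.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

teacher_logits = torch.randn(4, 10)                      # frozen teacher outputs
student_logits = torch.randn(4, 10, requires_grad=True)  # student outputs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                          # gradients reach only the student
```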

Hold off on your panic — until AI passes this test

While DeepSeek makes AI cheaper, seemingly without cutting corners on quality, one group is trying to devise tests hard enough that AI models can't pass them. It's called ‘Humanity's Last Exam.'

If you’re looking for a new reason to be nervous about artificial intelligence, try this: Some of the smartest humans in the world are struggling to create tests that AI systems can’t pass.

For years, AI systems were measured by giving new models a variety of standardized benchmark tests. Many of these tests consisted of challenging, SAT-calibre problems in areas like math, science and logic. Comparing the models’ scores over time served as a rough measure of AI progress.
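The benchmarking approach itself is simple to sketch: grade a model against a fixed question set and track the score over time. The toy example below, with placeholder questions and a placeholder model_answer function, only illustrates the idea.

```python
# Toy sketch of benchmark scoring: grade a model on a fixed question set.
# The questions and the model_answer function are illustrative placeholders.
def score(model_answer, benchmark):
    """Fraction of benchmark items the model answers exactly right."""
    correct = sum(model_answer(q) == a for q, a in benchmark)
    return correct / len(benchmark)

benchmark = [("2 + 2 = ?", "4"), ("Capital of France?", "Paris")]
model_answer = lambda q: "4" if "2 + 2" in q else "Paris"
print(score(model_answer, benchmark))  # 1.0 -- a saturated benchmark
```

Once a model reliably scores near the top, as in this toy case, the benchmark stops being informative; that saturation problem is exactly what ‘Humanity's Last Exam' is meant to address.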

AI-driven multi-modal framework improves protein editing for science and medicine

Researchers from Zhejiang University and HKUST (Guangzhou) have developed a cutting-edge AI model, ProtET, that leverages multi-modal learning to enable controllable protein editing through text-based instructions. This innovative approach, published in Health Data Science, bridges the gap between biological language and protein sequence manipulation, enhancing functional protein design across domains like enzyme activity, stability, and antibody binding.

Proteins are the cornerstone of biological function, and their precise modification holds immense potential for medical therapies and biotechnology. While traditional protein editing methods rely on labor-intensive laboratory experiments and single-task optimization models, ProtET introduces a transformer-based encoder architecture and a hierarchical training paradigm. The model aligns protein sequences with natural language descriptions using contrastive learning, enabling intuitive, text-guided protein modifications.
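While the paper's exact architecture is not reproduced here, the contrastive alignment idea can be sketched in a few lines of PyTorch: embeddings of paired protein sequences and text descriptions are pulled together while mismatched pairs are pushed apart. The encoders are omitted, and the embeddings below are random placeholders.

```python
# Minimal sketch of contrastive alignment between protein sequences and text,
# the kind of objective the article describes. Encoders are omitted: the
# inputs are already-embedded (protein, text) pairs, randomly generated here.
import torch
import torch.nn.functional as F

def contrastive_loss(protein_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matching pairs sit on the diagonal."""
    p = F.normalize(protein_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = p @ t.T / temperature            # pairwise similarity matrix
    labels = torch.arange(len(p))             # i-th protein matches i-th text
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

# Toy batch: 8 paired embeddings of dimension 128.
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```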

The research team, led by Mingze Yin from Zhejiang University and Jintai Chen from HKUST (Guangzhou), trained ProtET on a dataset of over 67 million protein–biotext pairs, extracted from Swiss-Prot and TrEMBL databases. The model demonstrated exceptional performance across key benchmarks, improving protein stability by up to 16.9% and optimizing catalytic activities and antibody-specific binding.

AI model learns generalized ‘language’ of regulatory genomics, predicts cellular stories

A team of investigators from Dana-Farber Cancer Institute, The Broad Institute of MIT and Harvard, Google, and Columbia University has created an artificial intelligence model that can predict which genes are expressed in any type of human cell. The model, called EpiBERT, was inspired by BERT, a deep learning model designed to understand and generate human-like language.

The work appears in Cell Genomics.

Every cell in the body has the same genome, so the difference between two types of cells is not the genes in the genome, but which genes are turned on, when, and by how much. Approximately 20% of the genome codes for regulatory instructions that determine which genes are turned on, but very little is known about where those codes are in the genome, what their instructions look like, or how mutations affect function in a cell.
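EpiBERT's full architecture operates on much richer genomic inputs, but the BERT-style pretraining idea behind it, masking parts of a sequence and training a transformer to reconstruct them, can be sketched compactly. Everything below (vocabulary, masking rate, model size) is an illustrative toy, not the published model.

```python
# Toy sketch of BERT-style masked modeling on genomic tokens, the pretraining
# idea that inspired EpiBERT. Vocabulary, masking rate, and model size are
# illustrative; the published model works on much richer inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB = ["[MASK]", "A", "C", "G", "T"]   # toy nucleotide vocabulary
MASK_ID = 0

class TinyMaskedModel(nn.Module):
    def __init__(self, vocab_size=len(VOCAB), dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

tokens = torch.randint(1, len(VOCAB), (2, 16))   # two sequences, length 16
mask = torch.rand(tokens.shape) < 0.15           # hide ~15% of positions
inputs = tokens.masked_fill(mask, MASK_ID)

model = TinyMaskedModel()
logits = model(inputs)
# As in BERT, the loss is computed only at the masked positions.
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
```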