Artificial intelligence (AI) is infamous for its resource-heavy training, but a new study may have found a solution: a novel communication system, called ZEN, that markedly improves how large language models (LLMs) train.
The research team at Rice University was helmed by doctoral graduate Zhuang Wang and computer science professor T.S. Eugene Ng, with contributions from two other computer science faculty members: assistant professor Yuke Wang and professor Anshumali Shrivastava. Zhaozhuo Xu of the Stevens Institute of Technology and Jingyi Xi of Zhejiang University also contributed to the project.