In contemporary artificial intelligence, transformers are everywhere, reshaping everything from natural language processing to computer vision. People have rushed to experiment with GPT-4 and other text models built on transformer architectures because these systems can solve problems machines previously couldn't: riffing on stories, code, or poetry; generating images from sentences; even conversing like Turing-test-worthy humans. But as the field advances, researchers have found that grid-based and sequential representations of data impose increasingly stringent constraints. A different architecture holds the potential to capture the complex, interconnected structure of the world around us: Graph Neural Networks (GNNs).
