AutoCrawler

A progressive understanding web agent for web crawler generation.

Web automation is a significant technique that accomplishes complicated web tasks by automating common web actions, enhancing operational efficiency, and reducing the need for…
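AutoCrawler is described as generating crawlers through "progressive understanding": narrowing in on a target element and stepping back to a broader rule when a specific one fails. The sketch below is our own illustration of that idea, not the paper's code; the sample page, element classes, and function names are invented, and Python's stdlib `ElementTree` stands in for a real HTML parser.

```python
# Illustrative sketch only: top-down extraction with a "step-back" fallback,
# loosely modelled on the progressive approach AutoCrawler describes.
import xml.etree.ElementTree as ET

# A tiny, invented stand-in for a product page (well-formed XML so that
# the stdlib parser can handle it).
PAGE = """
<html><body>
  <div class="listing">
    <span class="price">$19.99</span>
  </div>
</body></html>
"""

def progressive_extract(page, candidate_paths):
    """Try the most specific generated path first; when it fails on this
    page, step back to progressively broader fallback paths."""
    root = ET.fromstring(page.strip())
    for path in candidate_paths:
        node = root.find(path)
        if node is not None and node.text:
            return node.text, path
    return None, None

value, used_path = progressive_extract(
    PAGE,
    [
        ".//div[@class='listing']/span[@class='price']",  # most specific rule
        ".//span[@class='price']",                        # step-back fallback
        ".//span",                                        # broadest fallback
    ],
)
# value → "$19.99", extracted by the first (most specific) rule
```

The fallback list is the crux: a rule that is too page-specific breaks on sibling pages, so keeping broader alternatives makes the generated crawler more robust.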


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

In the spotlight today: Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She’s also a senior research fellow at Churchill College, a fellow at the Association for Computational Linguistics, and a fellow at the European Laboratory for Learning and Intelligent Systems.

Korhonen previously served as a fellow at the Alan Turing Institute, and she holds a PhD in computer science and master's degrees in both computer science and linguistics. She researches NLP and how to develop, adapt, and apply computational techniques to meet the needs of AI. She has a particular interest in responsible and "human-centric" NLP that — in her own words — "draws on the understanding of human cognitive, social and creative intelligence."

LLMs forget. Everyone knows that. The primary culprit is the models' finite context length. Some even call it the biggest bottleneck on the road to AGI.
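The forgetting mechanism the paragraph alludes to can be pictured as a sliding window: once the context is full, the oldest tokens are simply dropped. This is a toy sketch of that behavior (class and parameter names are ours, not from any particular model):

```python
from collections import deque

class SlidingContext:
    """Toy model of a finite context window: only the most recent
    max_tokens tokens are retained; anything older is 'forgotten'."""

    def __init__(self, max_tokens):
        # deque with maxlen silently evicts the oldest items when full
        self.buf = deque(maxlen=max_tokens)

    def add(self, tokens):
        self.buf.extend(tokens)

    def window(self):
        return list(self.buf)

ctx = SlidingContext(max_tokens=4)
ctx.add(["a", "b", "c"])
ctx.add(["d", "e"])
print(ctx.window())  # → ['b', 'c', 'd', 'e']  ('a' has been forgotten)
```

Real models may instead truncate, summarize, or refuse input past the limit, but the core constraint is the same: information outside the window is unavailable to the model.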

It appears the debate over which model boasts the largest context length will soon become irrelevant. Microsoft, Google, and Meta have all been taking strides in this direction, making context length effectively infinite.

While today's LLMs all run on Transformers, the architecture might soon become a thing of the past. For example, Meta has introduced MEGALODON, a neural architecture designed for efficient sequence modelling with unlimited context length.
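The general trick behind such architectures is to process the sequence in fixed-size chunks while a small recurrent state carries information across chunk boundaries, so memory cost stays bounded no matter how long the input grows. The sketch below is a deliberately simplified stand-in (a scalar exponential moving average, not MEGALODON's actual CEMA/attention design) just to show the shape of the computation:

```python
def process_stream(tokens, chunk_size, state=0.0, decay=0.9):
    """Toy chunk-wise recurrence: expensive (attention-like) work would be
    confined to each fixed-size chunk, while an exponential moving average
    carries a summary of everything seen so far across chunks. This is an
    illustration of the pattern, not MEGALODON's real architecture."""
    for i in range(0, len(tokens), chunk_size):
        chunk = tokens[i:i + chunk_size]
        # within a real model, full attention would run over `chunk` here;
        # we only update the carried summary state
        for t in chunk:
            state = decay * state + (1 - decay) * t
    return state

final_state = process_stream([1.0, 2.0, 3.0, 4.0], chunk_size=2)
```

Because the state is fixed-size, the sequence can in principle be extended indefinitely; the trade-off is that distant tokens survive only as a compressed summary rather than being individually attendable.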

“I’m starting to see these companies and startups that are, ‘How do you optimize your cloud, and how do you manage your cloud?’ There’s a lot of people focused on questions like, ‘You’ve got a lot of data, can I store it better for you?’ Or, ‘You’ve got a lot of new applications; can I help you monitor them better?’ Because all the tools you used to have don’t work anymore,” he said. Maybe the age of digital transformation is over, he said, and we’re now in the age of cloud optimization.

United itself has bet heavily on the cloud, specifically AWS as its preferred cloud provider. Unsurprisingly, United, too, is looking at how it can optimize its cloud usage, from both a cost and a reliability perspective. As for so many companies going through this process, that also means looking at developer productivity and adding automation and DevOps practices into the mix. "We're there. We have an established presence [in the cloud], but now we're kind of in the market to try to continue to optimize as well," Birnbaum said.

But that also comes back to reliability. Like all airlines, United still operates a lot of legacy systems — and they still work. “Frankly, we are extra careful as we move through this journey, to make sure we don’t disrupt the operation or create self-inflicted wounds,” he said.