Apr 21, 2024
US seeks alliance with Abu Dhabi on artificial intelligence
Posted by Gemechu Taye in category: robotics/AI
Biden administration brokers talks between Microsoft, OpenAI, Google and the UAE, as Washington seeks edge over China.
Japan is testing whether artificial intelligence can tackle its growing shortage of workers.
LLMs forget. Everyone knows that. The primary culprit is the models' finite context length. Some even call it the biggest bottleneck on the road to AGI.
Soon, the debate over which model boasts the largest context length may become irrelevant. Microsoft, Google, and Meta have all been taking strides in this direction, working to make context length effectively infinite.
While today's LLMs all run on Transformers, that architecture may soon become a thing of the past. Meta, for example, has introduced MEGALODON, a neural architecture designed for efficient sequence modelling with unlimited context length.
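The forgetting described above comes down to a sliding window: once a model's fixed context fills up, the oldest tokens have to be evicted before new ones can be attended to. A minimal sketch (the function name and token representation are illustrative, not from any real library):

```python
def fit_to_context(tokens, max_context):
    """Keep only the most recent tokens that fit in the window.

    Once the conversation exceeds max_context, everything older
    is dropped -- this is why long chats "forget" early details.
    """
    if len(tokens) <= max_context:
        return tokens
    return tokens[-max_context:]


# Stand-in for 10 tokens of conversation history, window of 4:
history = list(range(10))
print(fit_to_context(history, 4))  # only the 4 most recent survive
```

Architectures like MEGALODON aim to remove this hard cutoff entirely, so the model never has to discard its history in the first place.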
“I’m starting to see these companies and startups that are, ‘How do you optimize your cloud, and how do you manage your cloud?’ There’s a lot of people focused on questions like, ‘You’ve got a lot of data, can I store it better for you?’ Or, ‘You’ve got a lot of new applications; can I help you monitor them better?’ Because all the tools you used to have don’t work anymore,” he said. Maybe the age of digital transformation is over, he said, and we’re now in the age of cloud optimization.
United itself has bet heavily on the cloud, specifically AWS as its preferred cloud provider. Unsurprisingly, United, too, is looking at how the company can optimize its cloud usage, from both a cost and reliability perspective. Like for so many companies that are going through this process, that also means looking at developer productivity and adding automation and DevOps practices into the mix. “We’re there. We have an established presence [in the cloud], but now we’re kind of in the market to try to continue to optimize as well,” Birnbaum said.
But that also comes back to reliability. Like all airlines, United still operates a lot of legacy systems — and they still work. “Frankly, we are extra careful as we move through this journey, to make sure we don’t disrupt the operation or create self-inflicted wounds,” he said.
In this edition of TechCrunch’s Week in Review (WiR) newsletter, we cover Boston Dynamics’ new humanoid robot and more.
Concern about a generative AI bubble is growing. To defend against disillusionment, measure its concrete value.
FineWeb: 15 trillion tokens of the highest-quality data the web has to offer.
The 🍷 dataset consists of more than 15T tokens of cleaned and deduplicated English web data from CommonCrawl.
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
The LASSIE project is preparing for a time when people and robots explore space together.
Learn more about how the #space economy can improve life on #Earth from our new insight report, ‘Space: The $1.8 Trillion Opportunity for Global Economic Growth’:
Space is approaching a new frontier. The space economy is expected to be worth $1.8 trillion by 2035 as satellite and rocket-enabled technologies become increasingly prevalent, according to a new report.
Everyone is in a big hurry to get the latest and greatest GPU accelerators to build generative AI platforms. Those who can’t get GPUs, or have custom devices that are better suited to their workloads than GPUs, deploy other kinds of accelerators.
The companies designing these AI compute engines have two things in common. First, they are all using Taiwan Semiconductor Manufacturing Co as their chip-etching foundry, and many are using TSMC as their socket packager as well. And second, they have not lost their minds. With the devices launched so far this year, AI compute engine designers are hanging back a bit rather than trying to be on the bleeding edge of process and packaging technology, so they can make a little money on products and processes that were very expensive to develop.
Nothing shows this better than the fact that the future "Blackwell" B100 and B200 GPU accelerators from Nvidia, which are not even going to start shipping until later this year, are based on TSMC's N4P process. This is a refined variant of the N4 process, also a 4 nanometer node, that the prior generation of "Hopper" H100 and H200 GPUs used.