
IBM Launches watsonx: Paving A Path To Faster Enterprise AI Adoption

Last week, around 4,000 IBM employees, customers, and partners attended IBM Think, the company’s annual conference, to hear the latest innovations, updates, and news from IBM. This year’s event came with many announcements, but with AI in focus, the unveiling of watsonx drew significant attention, with the market zeroing in on the substantial opportunities around AI.

After attending the event, hearing from IBM executives, and following the broad swath of recent AI and generative AI announcements, I believe IBM’s announcement of watsonx is a significant milestone in the advancement of enterprise AI. Built on top of the Red Hat OpenShift platform, watsonx offers a full tech stack for training, deploying, and supporting AI capabilities across any cloud environment. The move underscores the growing importance of supporting generative AI and the potential for businesses to benefit from the ease and reliability of this technology. As I see it, this is one of the more important announcements because it ties the exciting generative AI news and analysis to the practical connective tissue that will drive meaningful adoption in the enterprise.

Watsonx features three components: watsonx.ai, watsonx.data, and watsonx.governance. The first component, watsonx.ai, is a studio for foundation models, machine learning, and generative AI. It can be used to train, tune, and deploy AI models, including IBM-supplied models, open-source models, and client-provided models. The studio is currently in preview with select IBM clients and partners and is expected to become generally available in July.
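To make the watsonx.ai description above concrete, here is a minimal sketch of the kind of pick-a-model, adapt, and serve workflow such a studio targets. It uses the open-source Hugging Face transformers library purely for illustration and is not IBM’s watsonx SDK; the model name and prompt are placeholder assumptions.

```python
# Illustrative sketch of a "choose a model, adapt it, serve it" workflow,
# the pattern a studio like watsonx.ai is built around. This uses the
# open-source Hugging Face transformers library, NOT IBM's watsonx SDK.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for an IBM-supplied, open-source, or client model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize the benefits of enterprise AI governance:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```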

Generative AI Breaks The Data Center: Data Center Infrastructure And Operating Costs Projected To Increase To Over $76 Billion By 2028


With the launch of Large Language Models (LLMs) for Generative Artificial Intelligence (GenAI), the world has become both enamored of and concerned about the potential of AI. Holding a conversation, passing a test, developing a research paper, or writing software code are tremendous feats of AI, but they are only the beginning of what GenAI will be able to accomplish over the next few years. All this innovative capability comes at a high cost in terms of processing performance and power consumption. So, while the potential for AI may be limitless, physics and costs may ultimately be the boundaries.

Tirias Research forecasts that on the current course, generative AI data center server infrastructure plus operating costs will exceed $76 billion by 2028, with growth challenging the business models and profitability of emergent services such as search, content creation, and business automation incorporating GenAI. For perspective, this cost is more than twice the estimated annual operating cost of Amazon’s cloud service AWS, which today holds one third of the cloud infrastructure services market according to Tirias Research estimates. This forecast incorporates an aggressive 4X improvement in hardware compute performance, but this gain is overrun by a 50X increase in processing workloads, even with a rapid rate of innovation around inference algorithms and their efficiency. Neural Networks (NNs) designed to run at scale will be even more highly optimized and will continue to improve over time, which will increase each server’s capacity. However, this improvement is countered by increasing usage, more demanding use cases, and more sophisticated models with orders of magnitude more parameters. The cost and scale of GenAI will demand innovation in optimizing NNs and is likely to push the computational load out from data centers to client devices like PCs and smartphones.
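A rough back-of-the-envelope sketch, not Tirias Research’s actual model, can show how a 4X hardware gain is swamped by a 50X workload increase. The baseline cost and the algorithmic-efficiency factor below are illustrative assumptions chosen only to land near the cited figure.

```python
# Illustrative back-of-the-envelope projection (NOT Tirias Research's model):
# a 4X gain in per-server compute is overwhelmed by a 50X growth in GenAI
# processing workload, driving infrastructure plus operating costs upward.

baseline_cost_2023 = 10e9   # assumed 2023 GenAI data center cost in USD (illustrative)
workload_growth = 50.0      # projected growth in processing workload by 2028
hardware_speedup = 4.0      # assumed improvement in hardware compute performance
efficiency_gain = 1.6       # assumed gain from better inference algorithms (illustrative)

# Cost scales with workload, divided by how much more each server can absorb.
projected_cost_2028 = baseline_cost_2023 * workload_growth / (hardware_speedup * efficiency_gain)

print(f"Projected 2028 cost: ${projected_cost_2028 / 1e9:.0f}B")
# With these illustrative inputs the result lands near $78B, in the same
# ballpark as the >$76 billion figure cited above.
```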

What Are The Risks Of Google And Microsoft Advancing Their Generative AI Innovations?

At the I/O conference yesterday, Google CEO Sundar Pichai made major announcements that generative AI will underpin the company’s search, Gmail, and other products. Coming on the heels of the major announcements around Microsoft’s partnership with OpenAI since January 2023, Google has been scrambling to bring its market and generative AI product positioning up to snuff. This announcement was applauded after the gaffe in early February, when Google announced its AI chatbot Bard, a rival to OpenAI’s ChatGPT.


This blog weighs Google’s generative AI announcement against Microsoft’s work with OpenAI. It also flags key issues around data bias, impacts on society, and citizen privacy, and cautions that AI legislation must speed up in 2023 to balance out the power of the technology giants.

Ex-Google CEO Says We Should Trust AI Industry to Self-Regulate

He might not helm Google anymore, but Eric Schmidt is absolutely still thinking like a tech CEO.

In an interview with NBC’s “Meet the Press,” Schmidt laid bare his techno-libertarian outlook when asked whether artificial intelligence needs “guardrails” given its propensity to lie, confabulate, and, well, go kind of mad.

“When this technology becomes more broadly available, which it will, and very quickly, the problem is going to be much worse,” the former Google executive told MTP’s Jacob Ward. “I would much rather have the current companies define reasonable boundaries.”

The future of generative AI is niche, not generalized

The relentless hype surrounding generative AI in the past few months has been accompanied by equally loud anguish over the supposed perils — just look at the open letter calling for a pause in AI experiments. This tumult risks blinding us to more immediate risks — think sustainability and bias — and clouds our ability to appreciate the real value of these systems: not as generalist chatbots, but instead as a class of tools that can be applied to niche domains and offer novel ways of finding and exploring highly specific information.

This shouldn’t come as a surprise. The news that a dozen companies have developed ChatGPT plugins is a clear demonstration of the likely direction of travel. A “generalized” chatbot won’t do everything for you, but if you’re, say, Expedia, being able to offer customers a simple way to organize their travel plans is undeniably going to give you an edge in a marketplace where information discovery is so important.
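As a hypothetical illustration of the plugin pattern described above, the sketch below pairs a stubbed domain tool (a flight search) with a machine-readable schema a chatbot could use to decide when to call it. This is not Expedia’s actual ChatGPT plugin; all names and fields are assumptions.

```python
# Hypothetical sketch of the plugin / tool-calling pattern: the chatbot stays
# general, and a domain-specific tool supplies the niche capability.
# Not Expedia's actual plugin; names and fields are illustrative.

def search_flights(origin: str, destination: str, date: str) -> list[dict]:
    """Stubbed domain tool; a real plugin would call the provider's API."""
    return [{"flight": "XY123", "origin": origin, "destination": destination,
             "date": date, "price_usd": 249}]

# Machine-readable description the chatbot uses to decide when to call the tool.
tool_schema = {
    "name": "search_flights",
    "description": "Find flights between two cities on a given date.",
    "parameters": {
        "origin": {"type": "string"},
        "destination": {"type": "string"},
        "date": {"type": "string", "format": "YYYY-MM-DD"},
    },
}

print(search_flights("BOS", "SFO", "2023-06-01"))
```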

Drones navigate unseen environments with liquid neural networks

In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets and executed multi-step loops between objects in never-before-seen environments, surpassing the performance of other cutting-edge counterparts.

The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants.

“The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” says MIT CSAIL Research Affiliate Ramin Hasani. “There is still so much room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which has to be tested before we can safely deploy them in our society.”
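For readers curious about what makes these networks “liquid,” here is a toy sketch of the liquid time-constant dynamics attributed to Hasani and colleagues, integrated with a simple Euler step. The network size, weights, and input stream are illustrative assumptions, and this is not MIT CSAIL’s drone controller.

```python
import numpy as np

# Toy sketch of liquid time-constant (LTC) neuron dynamics (after Hasani et al.),
# integrated with a simple Euler step. Sizes, weights, and inputs are assumptions;
# this is NOT the MIT CSAIL drone controller.

def ltc_step(x, I, W_in, W_rec, b, tau, A, dt=0.01):
    """One Euler step of dx/dt = -(1/tau + f) * x + f * A, with f an input-dependent gate."""
    f = 1.0 / (1.0 + np.exp(-(W_in @ I + W_rec @ x + b)))  # sigmoid gate
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

rng = np.random.default_rng(0)
n_neurons, n_inputs = 8, 3
x = np.zeros(n_neurons)
params = dict(
    W_in=rng.normal(size=(n_neurons, n_inputs)),
    W_rec=rng.normal(size=(n_neurons, n_neurons)) * 0.1,
    b=np.zeros(n_neurons),
    tau=np.ones(n_neurons),   # base time constants
    A=np.ones(n_neurons),     # target (reversal-like) state
)

for t in range(100):          # feed a toy sensory stream
    I = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
    x = ltc_step(x, I, **params)
print(x.round(3))
```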

Creating the Lab of the Future: What Does It Entail?

As part of our SLAS US 2023 coverage, we speak to Luigi Da Via, Team Leader in Analytical Development at GSK, about the lab of the future and what it may look like.

Please, can you introduce yourself and tell us what inspired your career within the life sciences?

Hello, my name is Luigi Da Via, and I am currently leading the High-Throughput Automation team at GSK. I have been with the company for the past six years, and I’m thrilled to be contributing to the development of life-saving medicines through the application of cutting-edge technology and automation.

Google Quantum AI Breaks Ground: Unraveling the Mystery of Non-Abelian Anyons

Summary: For the first time, Google Quantum AI has observed the peculiar behavior of non-Abelian anyons, particles with the potential to revolutionize quantum computing by making operations more resistant to noise.

Non-Abelian anyons have the unique feature of retaining a sort of memory, allowing us to determine when they have been exchanged, even though they are identical.

The team successfully used these anyons to perform quantum computations, opening a new path towards topological quantum computation. This significant discovery could be instrumental in the future of fault-tolerant topological quantum computing.
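A schematic way to state the “memory” described above: exchanging ordinary particles only multiplies the wavefunction by a phase, while exchanging non-Abelian anyons applies a matrix acting on a degenerate state space, and those matrices need not commute. The notation below is a generic textbook sketch, not Google’s specific construction.

```latex
% Abelian statistics: an exchange contributes only a phase
%   (bosons: \psi \to +\psi, fermions: \psi \to -\psi).
% Non-Abelian anyons: exchanges act as unitary matrices on a
% degenerate state space, and the order of exchanges matters:
\[
  \psi \;\longrightarrow\; B_i\,\psi, \qquad B_i \in U(d),\ d > 1,
  \qquad B_1 B_2 \neq B_2 B_1 .
\]
```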