A digestible introduction to how quantum computers work and why they are essential in evolving AI and ML systems. Gain a simple understanding of the quantum principles that power these machines.
Category: robotics/AI
Quantum computing's arrival in industrial applications is the result of slow but steady progress in computing systems and equipment. Quantum computers, used primarily to aid complex computations, are expected to significantly advance several industries and open up new prospects.
IBM's promotion of its quantum computers is not far behind that of other tech behemoths like Google, which claims to have achieved quantum supremacy. More crucial, though, is that enterprises and entire sectors will benefit from massive automation and digital transformation thanks to industrial applications of quantum computing. Quantum computing in 2023 offers countless opportunities, and the world will eventually learn its actual potential. With each passing day, the demand for efficient processing grows, and developing quantum applications appears to be the only way to meet it. In this article, we list the top 10 industrial applications of quantum computing.
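The "simple understanding of the quantum principles" promised above can be sketched in a few lines: a qubit is a two-component complex state vector, a gate is a unitary matrix, and measurement probabilities come from squared amplitudes (the Born rule). This is a minimal illustration, not how a real quantum device is programmed.

```python
import numpy as np

# A single qubit is a 2-component complex state vector; |0> is (1, 0).
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probs = np.abs(state) ** 2  # Born rule: squared amplitudes are probabilities
print(probs)  # each outcome has probability 0.5
```

Superposition is what lets a register of n qubits carry amplitudes over 2^n basis states at once, which is the source of the speedups the article alludes to.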
Artificial intelligence (AI) solutions: https://ibm.biz/Solutions_with_IBM_AI
IBM Watson is AI for business: https://ibm.biz/IBM_and_Watson_AI_Are_we_there_yet.
Ask AI experts about the progress of artificial intelligence and they may say, "We're only five or ten years away." Five or ten years later, experts are often still saying the same thing. In this video, Martin Keen and Jeff Crume review the progress in AI and try to answer the question: are we there yet?
Get started for free on IBM Cloud → https://ibm.biz/buildonibmcloud.
A good open-world game is filled with little details that add to a player's sense of immersion. One of the key elements is the presence of background chatter. Each piece of dialog you hear is known as a "bark" and must be individually written by the game's creators, a time-consuming, detailed task. Ubisoft, maker of popular open-world gaming series like Assassin's Creed and Watch Dogs, hopes to shorten this process with Ghostwriter, a machine learning tool that generates first drafts of barks.
To use Ghostwriter, narrative writers input the character and type of interaction they are looking to create. The tool then produces variations, each with two slightly different options, for writers to review. As the writers make edits to the drafts, Ghostwriter updates, ideally producing more tailored options moving forward.
The idea here is to save game writers time to focus on the big stuff. “Ghostwriter was created hand-in-hand with narrative teams to help them complete a repetitive task more quickly and effectively, giving them more time and freedom to work on games’ narrative, characters, and cutscenes,” Ubisoft states in a video release.
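The review loop described above (writers specify a character and interaction, the tool proposes paired draft options, edits feed back into later suggestions) can be sketched as pseudocode. Ubisoft has not published Ghostwriter's API; every name and the template data below are invented for illustration.

```python
import random

# Hypothetical sketch of a Ghostwriter-style drafting loop (all names invented).
TEMPLATES = {
    ("guard", "alert"): ["Hey! Who goes there?", "Stop right there!"],
}

def generate_bark_pairs(character, interaction, n_variations=3):
    """Return n variations, each a pair of slightly different draft lines
    for a writer to pick from and edit."""
    base = TEMPLATES.get((character, interaction), ["..."])
    pairs = []
    for _ in range(n_variations):
        a = random.choice(base)
        b = a + " Show yourself!"  # stand-in for a second model sample
        pairs.append((a, b))
    return pairs

pairs = generate_bark_pairs("guard", "alert")
print(len(pairs), "variations, each with two options")
```

In the real tool, the stand-in template lookup and string tweak would be replaced by a trained language model, and writer edits would be collected as feedback to tailor future drafts.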
The former Microsoft boss says AI is the second revolutionary technology he’s seen in his lifetime.
Register Free for NVIDIA’s Spring GTC 2023, the #1 AI Developer Conference: https://nvda.ws/3kEyefH
RTX 4080 Giveaway Form: https://forms.gle/ef2Kp9Ce7WK39xkz9
Talk of AI taking our programming jobs is everywhere: articles being written, social media going crazy, and comments on seemingly every one of my YouTube videos. When I made my video about ChatGPT, two particular comments stuck out to me. One wished I had included my opinion about AI in that video, and the other asked whether AI will make programmers obsolete in 5 years. This video does just that. After learning about, researching, and using many different AI tools over the last several months (a video about those tools is coming soon), let's just say I have many thoughts on this topic: what AI can do for programmers right now, how it looks to progress in the near future, and whether it will make programmers obsolete in the next 5 years. Enjoy!!
The Sessions I Mentioned:
Fireside Chat with Ilya Sutskever and Jensen Huang: AI Today and Vision of the Future [S52092]: https://www.nvidia.com/gtc/session-catalog/?ncid=ref-inor-73…314001t6Nv.
Using AI to Accelerate Scientific Discovery [S51831]: https://www.nvidia.com/gtc/session-catalog/?ncid=ref-inor-73…197001tw0E
Generative AI Demystified [S52089]: https://www.nvidia.com/gtc/session-catalog/?ncid=ref-inor-73…393001DjiP
3D by AI: Using Generative AI and NeRFs for Building Virtual Worlds [S52163]: https://www.nvidia.com/gtc/session-catalog/?ncid=ref-inor-73…782001l1Ul.
Achieving Enterprise Transformation with AI and Automation Technologies [S52056]: https://www.nvidia.com/gtc/session-catalog/?ncid=ref-inor-73…353001hjSr.
A portion of this video is sponsored by NVIDIA.
Amazon Robotics has manufactured and deployed the world’s largest fleet of mobile industrial robots. The newest member of this robotic fleet is Proteus—Amazon’s first fully autonomous mobile robot. Amazon uses NVIDIA Isaac Sim, built on Omniverse, to create high-fidelity simulations to accelerate Proteus deployments across its fulfillment centers.
Explore NVIDIA Isaac Sim: https://developer.nvidia.com/isaac-sim.
#Simulation
#Robotics
#DigitalTwin
Unmanned aerial vehicles (UAVs), also known as drones, can help humans to tackle a variety of real-world problems; for instance, assisting them during military operations and search and rescue missions, delivering packages or exploring environments that are difficult to access. Conventional UAV designs, however, can have some shortcomings that limit their use in particular settings.
For instance, some UAVs might be unable to land on uneven terrains or pass through particularly narrow gaps, while others might consume too much power or only operate for short amounts of time. This makes them difficult to apply to more complex missions that require reliably moving in changing or unfavorable landscapes.
Researchers at Zhejiang University have recently developed a new unmanned, wheeled and hybrid vehicle that can both roll on the ground and fly. This unique system, introduced in a paper pre-published on arXiv, is based on a unicycle design (i.e., a cycling vehicle with a single wheel) and a rotor-assisted turning mechanism.
ChatGPT is currently deployed on A100 GPUs that have 80 GB of HBM memory each. Nvidia decided this was a bit wimpy, so it developed the much faster H100 (roughly twice as fast as the A100) with 94 GB of HBM3 memory each, and then found a way to put two of them on one card with high-speed connections between them, for a total of 188 GB of memory per card.
So hardware is getting more and more impressive!
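A quick back-of-the-envelope calculation shows why that per-card memory figure matters for LLM deployment. The parameter counts and 2-bytes-per-parameter (fp16/bf16) precision below are illustrative assumptions, not NVIDIA figures, and real deployments also need memory for activations and KV caches beyond the weights.

```python
# Rough check: can a model's weights alone fit in one GPU / one dual-GPU card?
def weights_gb(n_params_billion, bytes_per_param=2):  # fp16/bf16 assumption
    return n_params_billion * 1e9 * bytes_per_param / 1e9

for model_b in (7, 70, 175):
    need = weights_gb(model_b)
    a100 = "fits" if need <= 80 else "needs sharding"
    nvl = "fits" if need <= 188 else "needs sharding"
    print(f"{model_b}B params: ~{need:.0f} GB of weights "
          f"(A100 80 GB: {a100}; H100 NVL pair 188 GB: {nvl})")
```

Even at 188 GB, a 175B-parameter model's fp16 weights (about 350 GB) still have to be sharded across multiple cards, which is why memory per GPU keeps climbing.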
While this year's Spring GTC event doesn't feature any new GPUs or GPU architectures from NVIDIA, the company is still rolling out new products based on the Hopper and Ada Lovelace GPUs it introduced in the past year. At the high end of the market, the company today is announcing a new H100 accelerator variant aimed specifically at large language model users: the H100 NVL.
The H100 NVL is an interesting variant of NVIDIA's H100 PCIe card that, in a sign of the times and of NVIDIA's extensive success in the AI field, is aimed at a singular market: large language model (LLM) deployment. A few things make this card atypical of NVIDIA's usual server fare, not least that it is two H100 PCIe boards that come already bridged together, but the big takeaway is the memory capacity. The combined dual-GPU card offers 188 GB of HBM3 memory, 94 GB per GPU, more memory per GPU than any other NVIDIA part to date, even within the H100 family.
While virtually all of the industry is buzzing about AI, accelerated computing and AI powerhouse NVIDIA has just announced a new software library, called cuLitho, that promises an exponential acceleration in chip design development times, as well as reduced chip fab data center carbon footprint and the ability to push the boundaries of bleeding-edge semiconductor design. In fact, NVIDIA cuLitho has already been adopted by the world’s top chip foundry, TSMC, leading EDA chip design tools company Synopsys and chip manufacturing equipment maker ASML.
Industry partners like EDA design tools bellwether Synopsys are chiming in as well, with respect to the adoption of cuLitho and what it can do for their customers that may want to take advantage of the technology. “Computational lithography, specifically optical proximity correction, or OPC, is pushing the boundaries of compute workloads for the most advanced chips,” said Aart de Geus, chair and CEO of Synopsys. “By collaborating with our partner NVIDIA to run Synopsys OPC software on the cuLitho platform, we massively accelerated the performance from weeks to days! The team-up of our two leading companies continues to force amazing advances in the industry.”
As semiconductor fab process nodes get smaller, requiring finer geometry, more complex calculations, and more intricate photomask patterning, offloading and accelerating these workloads with GPUs makes a lot of sense. In addition, as Moore's Law continues to slow, cuLitho will also accelerate additional cutting-edge technologies like high-NA EUV lithography, which is expected to help print the extremely tiny, complex features of chips being fabricated at 2 nm and smaller.
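To see why these lithography workloads suit GPUs, consider the simplest coherent-imaging step: the aerial image projected onto the wafer is (in a toy model) the squared magnitude of the mask convolved with an optical kernel, computed via FFTs. Real OPC pipelines use far richer models (e.g. Hopkins/SOCS decompositions), and the Gaussian kernel below is a stand-in assumption, but the FFT-heavy structure is what makes the problem GPU-friendly.

```python
import numpy as np

N = 64
mask = np.zeros((N, N))
mask[24:40, 24:40] = 1.0  # a square feature on the photomask

# Gaussian stand-in for the lens point-spread kernel (an assumption).
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
kernel = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
kernel /= kernel.sum()

# Convolve mask with kernel via FFT, then square the field to get intensity.
field = np.fft.ifft2(np.fft.fft2(mask) * np.fft.fft2(np.fft.ifftshift(kernel)))
aerial = np.abs(field) ** 2
print(aerial.max())  # blurring pulls the peak intensity below the mask's 1.0
```

Optical proximity correction then iterates in the other direction, perturbing the mask until the simulated aerial image matches the intended pattern, which multiplies the FFT workload many times over across a full chip.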
I personally expect cuLitho to be another inflection point for NVIDIA. If the chip fab industry shifts to this technology, the company will have a huge new revenue pipeline for its DGX H100 servers and GPU platforms, just like it did when it seeded academia with GPUs and its CUDA programming language in Johnny Appleseed fashion to accelerate AI. NVIDIA is now far and away the AI processing leader, and it could be setting itself up for similar dominance in semiconductor manufacturing infrastructure as well.