When ChatGPT was made public in November 2022, it had an impact far beyond the tech industry. From helping with speech writing to computer coding, artificial intelligence (AI) suddenly loomed as a real and useful tool.
None of this, however, would be possible without very powerful computing hardware. And one company in particular has become the center of the AI bonanza: Nvidia, based in California.
Originally known for making the type of computer chips that process graphics, particularly for computer games, Nvidia hardware is the foundation of most AI applications today.
“It’s the leading technology player enabling this new thing called artificial intelligence,” says Alan Priestley, a semiconductor industry analyst at Gartner.

“What Nvidia is to AI is almost like what Intel was to PCs,” explains Dan Hutcheson, an analyst at TechInsights.
ChatGPT was trained using 10,000 of Nvidia’s graphics processing units (GPUs), clustered on a supercomputer belonging to Microsoft.
“It is one of many supercomputers that have been built with Nvidia GPUs for a wide variety of scientific and artificial intelligence uses,” says Ian Buck, general manager and vice president of accelerated computing at Nvidia.
Nvidia holds about 95% of the GPU market for machine learning, according to a recent report by CB Insights.
Figures show that its artificial intelligence business brought in about $15 billion in revenue last year, up 40% from the previous year and surpassing gaming as its biggest source of revenue.
Nvidia’s shares soared nearly 30% after it released first-quarter results on Wednesday. The company said it is ramping up production of its chips to meet “growing demand.”
Its AI chips cost roughly $10,000 each, though the latest and most powerful version sells for much more. How did Nvidia become a central player in the AI revolution? In short, through a combination of a daring bet and fortunate timing.
Jensen Huang, now the CEO of Nvidia, was one of its founders in 1993. At the time, the company focused on improving graphics for games and other applications.
In 1999, Nvidia developed GPUs to improve image display for computers.
GPUs excel at processing many small tasks simultaneously (for example, handling millions of pixels on a screen), a technique known as parallel processing.
In 2006, researchers at Stanford University discovered that GPUs had another use: They could speed up math operations in ways that regular processing chips couldn’t.
At that moment, Huang made a decision that would prove crucial for the development of AI as we know it. He poured Nvidia’s resources into creating a tool to make GPUs programmable, thus opening up their parallel processing capabilities for uses beyond graphics.
That tool was added to Nvidia’s computer chips. For computer game users, it was a capability they didn’t need and probably didn’t even know about, but for researchers it was a new way of doing high-performance computing. That ability helped spark the first breakthroughs in modern AI.
In 2012, AlexNet was introduced: an AI that could classify images, trained using just two of Nvidia’s programmable GPUs. The training process took only a few days, instead of the months it might have taken with a much larger number of regular processing chips.
The discovery that GPUs could massively speed up neural network processing began to spread among computer scientists, who started buying them to run this new kind of work. “Artificial intelligence found us,” says Ian Buck of Nvidia.
The company pressed its advantage by investing in new types of GPUs better suited to AI, as well as more software to make the technology easier to use.
A decade and billions of dollars later, ChatGPT emerged: an AI that can give eerily human-like answers to questions.

Metaphysic is a company that creates videos of celebrities and others using artificial intelligence techniques. Its Tom Cruise fakes created a stir in 2021.
To train and then run its models, Metaphysic uses hundreds of Nvidia GPUs, some purchased from Nvidia and others accessed through a cloud computing service.
“There is no alternative to Nvidia for doing what we do,” says Tom Graham, the company’s co-founder and CEO. “It’s way ahead of the curve.”
However, while Nvidia’s dominance seems secure for now, the long term is harder to predict. “Nvidia is the one with the target on its back that everyone is trying to hit,” says Kevin Krewell, an industry analyst at TIRIAS Research.
Other large semiconductor companies offer some competition. AMD and Intel are best known for making central processing units (CPUs), but both also make dedicated GPUs for AI applications, a market Intel entered only recently.
Google has its Tensor Processing Units (TPUs), which are used not only for search results but also for certain machine learning tasks, while Amazon has a custom chip for training AI models. For their part, Microsoft and Meta are developing their own AI chips.
*By Zoe Corbyn