A Deep Dive Into How Many GPUs It Takes to Run ChatGPT

Written by techtweeter | Published 2023/01/18
Tech Story Tags: artificial-intelligence | ai | ai-applications | chatgpt | tech-twitter-thread | gpu | openai | open-ai

TL;DR: Tom Goldstein goes over how many GPUs it takes to run ChatGPT.

This Twitter thread is by Tom Goldstein @tomgoldsteincs (source: 12-06-22). Goldstein is an Associate Professor at the University of Maryland.

How many GPUs does it take to run ChatGPT? And how expensive is it for OpenAI? Let’s find out! 🧵🤑

We don’t know the exact architecture of ChatGPT, but OpenAI has said that it is fine-tuned from a variant of GPT-3.5, so it probably has 175B parameters. That's pretty big.

How fast could it run? A 3-billion-parameter model can generate a token in about 6 ms on an A100 GPU (using half precision + TensorRT + activation caching). If we scale that up to the size of ChatGPT, it should take about 350 ms for an A100 GPU to print out a single word.
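Here is a minimal sketch of that scaling, assuming (as the thread does) that per-token latency grows roughly linearly with parameter count; the specific numbers are the thread's estimates, not measurements:

```python
# Back-of-the-envelope latency scaling, assuming per-token latency
# grows roughly linearly with parameter count (the thread's assumption).
base_params = 3e9        # 3B-parameter reference model
base_latency_ms = 6.0    # ~6 ms/token on an A100 (fp16 + TensorRT + activation caching)
target_params = 175e9    # assumed ChatGPT / GPT-3.5 parameter count

latency_ms = base_latency_ms * (target_params / base_params)
print(f"Estimated per-token latency: {latency_ms:.0f} ms")  # ~350 ms
```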

Of course, you could never fit ChatGPT on a single GPU. You would need five 80 GB A100 GPUs just to load the model and text. ChatGPT cranks out about 15-20 words per second. If it uses A100s, that could be done on an 8-GPU server (a likely choice on Azure cloud).
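To see where the five-GPU figure comes from: 175B parameters at half precision take 2 bytes each, which is roughly 350 GB of weights alone. A quick sketch, assuming the 175B parameter count above:

```python
import math

# Memory needed just to hold the weights at half precision.
params = 175e9          # assumed parameter count
bytes_per_param = 2     # fp16 / half precision
gpu_memory_gb = 80      # one A100 80 GB card

weights_gb = params * bytes_per_param / 1e9          # ~350 GB of weights
gpus_needed = math.ceil(weights_gb / gpu_memory_gb)  # 350 / 80 -> 5 cards
print(f"Weights: {weights_gb:.0f} GB -> at least {gpus_needed} x 80 GB A100s")
```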

So what would this cost to host? On Azure cloud, each A100 card costs about $3 an hour, so an 8-GPU server runs about $24 an hour. At 15-20 words per second, that works out to roughly $0.0003 per word generated.
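One way to reproduce that $0.0003 figure, assuming an 8-card A100 server at ~$3 per card-hour and output near the upper end of 20 words per second:

```python
# Serving cost per word on an 8x A100 server, assuming
# ~$3 per card-hour on Azure and ~20 words/second output.
cards = 8
cost_per_card_hour = 3.0     # USD, approximate Azure A100 price
words_per_second = 20        # observed ChatGPT output speed (upper end)

server_cost_per_hour = cards * cost_per_card_hour   # $24/hour
words_per_hour = words_per_second * 3600            # 72,000 words/hour
cost_per_word = server_cost_per_hour / words_per_hour
print(f"~${cost_per_word:.4f} per word")            # ~$0.0003
```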

But it generates a lot of words! The model usually responds to my queries with ~30 words, which adds up to about 1 cent per query.

ChatGPT acquired 1M users within its first 5 days of operation. If an average user makes ~10 queries per day, I think it’s reasonable to estimate that ChatGPT serves ~10M queries per day.

https://twitter.com/sama/status/1599668808285028353

I estimate the cost of running ChatGPT is $100K per day, or $3M per month. This is a back-of-the-envelope calculation. I assume nodes are always in use with a batch size of 1. In reality they probably batch during high volume, but have GPUs sitting fallow during low volume.
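Rolling those numbers together as a sketch (assuming, per the estimates above, ~$0.0003 per word, ~30 words per reply, and ~10M queries per day) lands close to that figure:

```python
# Rolling up the thread's estimates into a daily/monthly figure.
cost_per_word = 0.0003    # from the per-word estimate above
words_per_query = 30      # typical response length in the thread
queries_per_day = 10e6    # ~1M users x ~10 queries each

daily_cost = cost_per_word * words_per_query * queries_per_day
# ~$90K/day, which the thread rounds to ~$100K/day (~$3M/month).
print(f"~${daily_cost/1e3:.0f}K per day, ~${daily_cost*30/1e6:.1f}M per month")
```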

The real costs for a typical organization would almost certainly be higher than this because parallelization is not 100% efficient, GPUs are not 100% utilized, and my runtime estimate is optimistic.

The cost to OpenAI may be lower though, because of its partnership with Microsoft.

Either way, that ain't cheap. Some say it's wasteful to pour these kinds of resources (and carbon) into a demo. But hey, it's not the worst use of Elon's money that we've seen of late 💸💸

Thanks to NLP gurus @jwkirchenbauer and @jonasgeiping for their inputs on this thread.

Feature image generated via HackerNoon Stable Diffusion Prompt of ‘How Many GPUs Does It Take to Run ChatGPT?’

