Aloha.zone.io

Aloha's technical notes.


Graphics Processing Unit

The infrastructure behind modern technology topics (AI, machine learning, blockchain, etc.)


@Author: Garfield Zhu

Story about NVIDIA

The new star of the “Trillion Club”: Nvidia's market cap passed one trillion dollars in 2023.


And... The Man


History

0. Stock - Nasdaq: NVDA

See the history of Nvidia's stock first.
🌊 The tide of cutting-edge technologies has carried NVIDIA to where it is today.

1. Beginning

2. Developing

3. New Vision

4. AI, AR, VR, Auto-pilot, Blockchain, Cloud

5. Bitcoin, Mining age

6. AI, Large Language Models, Metaverse

7. Today


Product Series

Nvidia has had many product series over the years; today it focuses on 3 main series:


Anecdotes

1. NVIDIA naming

Nvidia's first chip was named `dot-NV`. "NV" is an abbreviation of NEXT, since they enjoyed developing the next generation of products.

For the company name, they looked for a Latin word containing "NV". They found "invidia", Latin for envy, and wanted to be the envy of the industry. So they dropped the leading "i" and named the company "Nvidia". 🤣

And indeed, they became exactly that.

2. NVIDIA vs. AMD ?
  • Jensen Huang (黄仁勋) and Lisa Su (苏姿丰) are both Taiwan-born Americans.
  • They are relatives: Su's maternal grandfather was the eldest brother of Huang's mother, which makes Huang an "uncle" of Su.
  • Jensen Huang
  • Lisa Su
3. The nicknames
  • In China, Jensen Huang is called "老黄" ("Old Huang") and Lisa Su is called "苏妈" ("Mama Su").
  • Huang is even better known as "皮衣刀客" (the "leather-jacket blademaster"):
  • The "皮衣" (leather jacket):

  • The "刀法" (knife skills, i.e. the art of product segmentation):
    | Card | 3060 | 3060 Ti | 3070 | 3070 Ti | 3080 | 3080 Ti | 3090 | 3090 Ti |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | SM count | 28 | 38 | 46 | 48 | 68 | 80 | 82 | 84 |
    | VRAM | 12 GB | 8 GB | 8 GB | 8 GB | 10 GB | 12 GB | 24 GB | 24 GB |
    Semiconductor etching is error-prone, especially at the 7 nm, 5 nm, and even 3 nm process nodes, so yields are not high. The chips are designed to be error-tolerant: each area on the chip is standalone and can be disabled via firmware if it is defective. If a chip with 84 SMs has 2 defective SMs disabled, it is relabeled as an 82-SM chip and packaged as a 3090. 😆
4. The most popular CEO
  • The ranking as voted by Silicon Valley employees in Aug. 2023
  • Huang has a tattoo of the Nvidia logo on his shoulder (as he promised to get if the stock price reached $100).
  • Huang made the right decisions that pushed Nvidia to where it is now:
    1. Foresaw gaming as a billion-dollar market and built chips for it.
    2. Persisted in technological innovation, iterating the GPU generation after generation.
    3. Made the GPU general-purpose, turning it into the hardware arsenal of a new era.



Introduce GPU

Architecture

Let’s tear down an RTX 4080 GPU to see what’s inside.

RTX 4080 - Full card
Other OEMs:
RTX 4080 - Board part
Open the panel
PCB panel
ROG Strix pcb:
Focus the core


Other views:

Other card
Move closer
Low-end cards


Now, let’s focus on the core chip (Nvidia):

Exploded view (Tesla V100)
Abstract view
Fermi - with SM (Streaming Multiprocessor)
Streaming Multiprocessor built from CUDA Cores
Kepler - open SM
Pascal - more SM
Volta - Tensor core
Ampere - RT core


⚖️ Compare with CPU

CPU vs. GPU

The CPU is suited to a wide variety of workloads, especially those for which latency or per-core performance are important. A powerful execution engine, the CPU focuses its smaller number of cores on individual tasks and on getting things done quickly. This makes it uniquely well equipped for jobs ranging from serial computing to running databases.

GPUs began as specialized ASICs developed to accelerate specific 3D rendering tasks. Over time, these fixed-function engines became more programmable and more flexible. While graphics and the increasingly lifelike visuals of today’s top games remain their principal function, GPUs have evolved to become more general-purpose parallel processors as well, handling a growing range of applications.

| CPU | GPU |
| --- | --- |
| General purpose | Specialized purpose |
| Task parallelism | Data parallelism |
| A few heavyweight cores | Many lightweight cores |
| High memory size | High memory throughput |
| Many diverse instruction sets | A few highly optimized instruction sets |
| Explicit thread management | Threads are managed by hardware |
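The task- vs. data-parallelism contrast can be sketched in plain Python (an illustrative toy, not how a GPU is actually programmed): the GPU model is one tiny "kernel" applied independently to every element of the data.

```python
# Data parallelism in miniature: the same tiny "kernel" is applied
# independently to every element, so all elements could be processed
# at once by many lightweight cores.

def kernel(x):
    # One GPU "thread": a simple, uniform operation.
    return x * 2 + 1

data = list(range(8))

# A CPU walks this loop serially (or splits it across a few cores);
# a GPU maps one thread to each element and runs them all in parallel.
result = [kernel(x) for x in data]
print(result)  # [1, 3, 5, 7, 9, 11, 13, 15]
```

Because each call of `kernel` is independent of the others, there is no thread coordination to manage, which is exactly why the hardware can schedule the threads itself.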



An analogy

The CPU is like a workshop with one or several craftsmen. They are highly skilled and can build anything if you give them a blueprint and enough time.

The GPU is like an assembly line with many workers. Each is less versatile, but given a simple, specific instruction they all do the same thing in parallel, and finish the job very fast.


🔖 Read GPU Spec

To better understand how a GPU performs, we should learn to read the core metrics in its spec sheet.

We can find aggregated GPU spec listings at a third-party site: https://www.techpowerup.com/gpu-specs/

Or the details on the official website of each GPU manufacturer (Nvidia, AMD, and Intel), e.g.


Cores

Similar to the CPU, the GPU has cores, and the cores are used for parallel computing. But while a CPU has up to around 48 cores, a GPU can have more than 10,000.

| Chip | Cores | Clock |
| --- | --- | --- |
| Intel Core i9-13900K | 24 | 5.8 GHz (Turbo) |
| AMD Ryzen 9 7950X3D | 16 | 5.7 GHz (Boost) |
| AMD 7900XTX | 6144 | 2.5 GHz (Boost) |
| RTX 4070 Ti | 7680 | 2.61 GHz (Boost) |
| RTX 4090 | 16384 | 2.52 GHz (Boost) |
| Tesla H100 | 14592 | 1.845 GHz (Boost) |


CUDA Cores (Nvidia)

Generally, GPU cores are the shading units of the rendering pipeline. Nvidia brands its cores CUDA Cores, emphasizing their strength in parallel computing.

CUDA

CUDA (Compute Unified Device Architecture) is Nvidia's GPGPU platform. The name now refers both to Nvidia's cores and to the most popular GPGPU API.


Tensor Cores (Nvidia)

Essentially, Tensor cores are processing units that accelerate the process of matrix multiplication.

The computational complexity increases multifold as the size and dimensions of the matrix (tensor) grow. Machine learning, deep learning, and ray tracing are tasks that involve an enormous amount of matrix multiplication.
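Concretely, a tensor core executes a small fused matrix multiply-accumulate, D = A × B + C, over a tile of values (e.g. 4×4 FP16 tiles on Volta) in a single operation. A pure-Python sketch of that one tile operation (the tile size here is illustrative):

```python
# One tensor-core operation, D = A @ B + C, written out in plain Python.
# A real tensor core does this for a whole tile in one hardware step.

def mma(A, B, C):
    """Matrix multiply-accumulate for square tiles given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [0, 1]]
print(mma(A, B, C))  # [[20, 22], [43, 51]]
```

A large matrix multiplication is decomposed into many such tile operations, which is why tensor cores give such a large speedup on deep-learning workloads.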


Ampere - Tensor core


RT Cores

Known as “Ray Tracing Cores”, these are a hardware implementation of the ray-tracing technique.

Ray tracing is a specific rendering pattern built on ray-related vector calculations; refer to the Ray Tracing notes.

In short, RT cores add extra circuits alongside the more general-purpose CUDA cores, and they are engaged in the rendering pipeline whenever a ray-tracing calculation comes along.

Ray tracing Demo: [NVIDIA Marbles at Night RTX Demo](https://youtu.be/NgcYLIvlp_k?si=GW1jlgYrbVaG0b3I)
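The operation RT cores hard-wire is the ray-primitive intersection test, executed billions of times per frame. A ray-sphere test in plain Python shows the kind of arithmetic involved (a sketch only; real RT cores test rays against triangles and bounding boxes):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t >= 0.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return False                        # the ray misses entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest intersection distance
    return t >= 0                           # hit only if it is in front of us

print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # True
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))  # False
```

Done on CUDA cores this costs dozens of instructions per ray; an RT core resolves the whole test in dedicated circuitry.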


Bus, Clock & Memory

The specs of bus, clock & memory
Bus

The bus is the connection between the GPU and the motherboard. It is the data highway between the GPU and the CPU.

Clock Speed

The clock speed is how fast the GPU runs, measured in MHz.

Just like the CPU, it has both a core clock speed and a memory clock speed.

Memory

The GPU's on-board memory is called VRAM (Video RAM).


Shader & TMU & ROP

These units serve the rendering pipeline: shaders run the per-vertex and per-pixel programs, TMUs (Texture Mapping Units) sample and filter textures, and ROPs (Render Output Units) write the final pixels to the framebuffer:


TFLOPS

TFLOPS (teraFLOPS) means tera (10^12) FLoating-point Operations Per Second.

Unless stated otherwise, the figures refer to 32-bit (FP32) floating point.

GFLOPS, as the name suggests, is giga (10^9) FLOPS. It was the common unit years ago; now we are in the TFLOPS era.

Cross-platform battle of TFLOPS in their graphics cores:

| Platform | TFLOPS |
| --- | --- |
| PS5 | 10.28 |
| XBOX Series X | 12.00 |
| Nintendo Switch | 0.4 / 0.5 (Docked) |
| Apple A17 Pro | 2.15 |
| Apple M2 Ultra (76-core) | 27.2 |
| Intel UHD Graphics 770 | 0.794 |
| Intel Iris Xe Graphics G7 96EUs | 1.690 |
| Intel Arc A770 | 19.66 |
| AMD Radeon RX 7800 XT | 37.32 |
| AMD Radeon RX 7900 XTX | 61.42 |
| GeForce RTX 2080 Ti | 13.45 |
| GeForce RTX 3090 | 35.58 |
| GeForce RTX 4070 Ti | 40.09 |
| GeForce RTX 4090 | 82.58 |
| Tesla H100 | 67 |
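Peak FP32 TFLOPS figures like these are usually derived as cores × boost clock × 2, since one fused multiply-add counts as two floating-point operations per cycle. A quick sanity check (the RTX 3090's 10496 cores and 1.695 GHz boost are taken from its public spec sheet, not from this page):

```python
def peak_tflops(cores, boost_clock_ghz, flops_per_cycle=2):
    # cores * GHz gives giga-operations/s; *2 for the FMA; /1000 for tera.
    return cores * boost_clock_ghz * flops_per_cycle / 1000.0

print(round(peak_tflops(16384, 2.52), 2))   # RTX 4090 -> 82.58
print(round(peak_tflops(10496, 1.695), 2))  # RTX 3090 -> 35.58
```

Both results match the GeForce rows above, which is a good sign the formula is the one the spec sheets use.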


Example

Now, let’s read a spec of RTX 4090:

GeForce RTX 4090 GPU specs


How GPU runs

CPU runs code

Traditional GPU runs code

GPGPU

In the general-purpose GPU architecture, Nvidia groups CUDA cores (typically 128 of them) into an SM (Streaming Multiprocessor), the unit that actually runs the code.
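The launch model that goes with this hardware layout: threads are grouped into blocks, blocks are scheduled onto SMs, and each thread derives a global index from its block and thread IDs. A CPU-side simulation of that indexing (sizes are illustrative; on a GPU the loop bodies run concurrently):

```python
def simulate_launch(blocks, threads_per_block):
    # Mirror of the CUDA idiom: idx = blockIdx.x * blockDim.x + threadIdx.x
    indices = []
    for block_idx in range(blocks):                  # each block lands on an SM
        for thread_idx in range(threads_per_block):  # threads within the block
            indices.append(block_idx * threads_per_block + thread_idx)
    return indices

print(simulate_launch(4, 2))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The global index is what lets each thread pick its own slice of the data, as the CUDA samples later in this page do.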

GTX 980 (2014 - Maxwell)



Program with GPU

OpenGL & GLSL

OpenGL (Open Graphics Library) is a cross-language, cross-platform API for rendering 2D and 3D vector graphics.

GLSL (OpenGL Shading Language), is a high-level shading language based on the C programming language.

WebGL & WebGPU

WebGL (Web Graphics Library) is a JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins.

CUDA

CUDA is a parallel computing platform and programming model developed by Nvidia for general computing on its own GPUs (graphics processing units).

Different from OpenGL, CUDA is a general-purpose parallel computing platform and programming model. OpenGL focuses on graphics rendering, while CUDA targets parallel computing in general (e.g. deep learning, crypto mining).

C sample:

// CUDA C
#include <cstdio>

// __global__ marks a kernel: a function that runs on the GPU.
__global__ void myKernel() {
  printf("Hello world\n");
}

int main(int argc, char const *argv[]) {
    myKernel<<<4,2>>>();      // launch 4 blocks of 2 threads each (8 prints)
    cudaDeviceSynchronize();  // wait for the kernel to finish before exiting
    return 0;
}

Python sample:

# CUDA Python by numba

from numba import cuda

def cpu_print(N):
    # Serial: one loop iteration after another.
    for i in range(0, N):
        print(i)

@cuda.jit
def gpu_print(N):
    # Each GPU thread computes its own global index and prints it.
    idx = cuda.threadIdx.x + cuda.blockIdx.x * cuda.blockDim.x
    if idx < N:
        print(idx)

def main():
    print("gpu print:")
    gpu_print[2, 4](8)  # launch 2 blocks of 4 threads each
    cuda.synchronize()  # wait for the GPU before continuing
    print("cpu print:")
    cpu_print(8)

if __name__ == "__main__":
    main()

Recommendation