GPU , NVIDIA DGX SUPERPOD


(1)

, Developer Relations Manager, NVIDIA Korea, March 2021

GPU ,

NVIDIA DGX SUPERPOD

(2)

[Survey chart: attendee roles; counts 132, 54, 36, 26, 10, 5 across the categories individual contributor, middle manager/team lead, executive, CEO, other]

(3)

[Survey chart: attendee job functions; counts 63, 22, 13, 7, 2, 7, 40, 36, 31, 17, 12, 4, 3, 6 across the categories IT planning, sales, management/strategy, education/HR, marketing, accounting/finance, systems operation, engineer/programmer, research/development, network, security, customer support/service, data processing/analysis, other]

(4)

[Survey chart: number of GPUs in use / planned; answer buckets 1~2 GPUs, 4~8 GPUs, 20+ GPUs]

(5)

NVIDIA DGX FAMILY

- DGX Station A100: A100 4-GPU, 160GB/320GB
- DGX A100: A100 8-GPU, 320GB/640GB
- DGX POD/SuperPOD: 4x/8x/…/20x/…/140x DGX A100
- OEM servers w/ NVIDIA A100 GPU 40GB/80GB

(6)

GPT-3: why does it need so many GPUs?

OpenAI's GPT series:

- GPT-1 (2018; 117M parameters)
- GPT-2 (2019; 1.5B parameters)
- GPT-3 (2020; 175B parameters)

(7)

NVIDIA Maxine (AI-powered Video Conferencing Platform)

[Video demo]

(8)

NVIDIA Clara Guardian (Virtual Patient Assistant)

[Video demo]

(9)

DEEP LEARNING NETWORK MODEL: ResNet vs. GPT

[Chart: parameter count of ResNet50 vs. model GPU memory and parameter count of GPT-2]

Source: paper, An End-to-End Framework for Constrained Deep Learning Model Optimization, Jan 2021
Source: paper, Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism, Mar 2020

(10)

Natural Language Processing (NLP) model sizes

[Chart: # parameters (log scale) by year, 2017–2021: Transformer 65M, BERT 340M, GPT-2 8B 8.3Bn, Turing-NLG 17Bn, GPT-3 175Bn]

(11)

GPU and GPU MEMORY

Latest NVIDIA data center GPU cards, sorted by GPU memory. Workload classes: Training/HPC/Inference, Training/HPC, Graphics, Inference.

GPU Memory (GB) | Type   | Cards: <CUDA cores> / <Tensor cores> / <max power (W)>
80              | HBM2   | A100 SXM4: 6912 / 432 / 400; A100 PCIe: 6912 / 432 / 250
48              | GDDR6  | A40: 10752 / 336 / 300; RTX 8000: 4608 / 576 / 250
40              | HBM2   | A100 SXM4: 6912 / 432 / 400; A100 PCIe: 6912 / 432 / 250
32              | HBM2   | V100: 5120 / 640 / 250; V100S: 5120 / 640 / 300
32              | GDDR5  |
24              | GDDR6  | RTX 6000: 4608 / 576 / 250
24              | GDDR6X |
16              | HBM2   | V100: 5120 / 640 / 250; V100S: 5120 / 640 / 300
16              | GDDR6  | T4: 2560 / 320 / 70
10              | GDDR6X |
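To make these memory figures concrete, here is a back-of-envelope sketch (my own illustration, not from the slides; the helper name is made up): just holding a model's weights in FP16 costs 2 bytes per parameter.

```python
# Rough memory needed just to store the weights, ignoring activations,
# gradients, and optimizer state (which multiply the real footprint).
def param_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """FP16 weights take 2 bytes per parameter."""
    return n_params * bytes_per_param / 1024**3

gpt2_gb = param_memory_gb(8.3e9)   # Megatron GPT-2, 8.3B params: ~15.5 GB
gpt3_gb = param_memory_gb(175e9)   # GPT-3, 175B params: ~326 GB
```

Even the weights alone of GPT-3 (roughly 326 GB in FP16) dwarf the 80 GB of the largest card in the table, which motivates the parallelism techniques on the following slides.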

(12)

MODEL: when the model does not fit in a single GPU's memory

[Figure: Data Parallel Training]

Source: paper, Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform, Oct 2019

(13)

DEEP LEARNING APPLICATION DEVELOPMENT

TRAINING (learning a new capability from existing data):
Untrained Neural Network Model + Deep Learning Framework → Trained Model with the new capability

INFERENCE (applying this capability to new data):
Trained Model, optimized for performance → App or Service featuring the capability

(14)

(15)

(16)

[Convolution kernels applied to the original image]

Brighten:            Darken:
  0    0    0          0    0    0
  0   1.5   0          0   0.5   0
  0    0    0          0    0    0

Blur:                Sharpen:
 .06  .13  .06         0   -1    0
 .13  .25  .13        -1    5   -1
 .06  .13  .06         0   -1    0
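The effect of these kernels can be reproduced with a few lines of NumPy (a minimal sketch; `conv2d` is my own naive helper, not a library function):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as in deep learning)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

brighten = np.array([[0, 0, 0], [0, 1.5, 0], [0, 0, 0]])
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)

img = np.random.rand(28, 28)
out = conv2d(img, sharpen)          # shape (26, 26): 'valid' mode shrinks by 2

# Sanity check: the brighten kernel just scales each pixel by 1.5.
assert np.allclose(conv2d(img, brighten), 1.5 * img[1:-1, 1:-1])
```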

(17)

[CNN architecture]

(28, 28, 1) image input
→ (3, 3, 1, 2) kernels → (28, 28, 2) stacked images
→ (3, 3, 2, 2) kernels → (28, 28, 2) stacked images
→ flattened image vector (1568)
→ Dense (512)
→ Dense → output prediction (10)
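Walking through the shapes above (bookkeeping only; 'same' padding is assumed, since the 28x28 spatial size is preserved):

```python
# Shape bookkeeping for the small CNN on the slide.
h, w = 28, 28                 # input image is (28, 28, 1)
conv1_out_channels = 2        # (3, 3, 1, 2) kernels -> (28, 28, 2)
conv2_out_channels = 2        # (3, 3, 2, 2) kernels -> (28, 28, 2)

flattened = h * w * conv2_out_channels   # 28 * 28 * 2 = 1568, as on the slide

# Dense-layer parameter counts (weights + biases):
dense1_params = flattened * 512 + 512    # 1568 -> 512
dense2_params = 512 * 10 + 10            # 512 -> 10 class scores
```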

(18)

IMAGENET 1,000 classes

[Examples of the 1,000 ImageNet class labels, including the dog classes]

Source: imagenet1000_clsidx_to_labels.txt (https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a)

(19)

( ) vs. Image Classification

(20)

GPT-3

(21)

DEEP LEARNING APPLICATION DEVELOPMENT

TRAINING (learning a new capability from existing data):
Untrained Neural Network Model + Deep Learning Framework → Trained Model with the new capability

INFERENCE (applying this capability to new data):
Trained Model, optimized for performance → App or Service featuring the capability

(22)

MODEL: when the model does not fit in a single GPU's memory

[Figure: Data Parallel Training]

Source: paper, Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform, Oct 2019

(23)

MODEL: the model vs. GPU memory

[Figure: Data Parallel Training]

Source: paper, Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform, Oct 2019

(24)

DATA PARALLELISM

What is Data Parallelism?

Every GPU holds a copy of the model and the GPUs synchronize their weights; this is supported out of the box by deep learning frameworks (e.g. TensorFlow, PyTorch, Keras).

Source: paper, Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform, Oct 2019
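The weight-synchronization idea can be checked numerically (a toy NumPy sketch of my own, not framework code): each "GPU" computes the gradient on its slice of the batch, and averaging those local gradients reproduces the full-batch gradient exactly when the shards are equal-sized.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 4)), rng.normal(size=64)
w = rng.normal(size=4)

def grad(Xs, ys, w):
    """Mean-squared-error gradient over one shard of the batch."""
    return 2 * Xs.T @ (Xs @ w - ys) / len(ys)

# Data parallelism: each of 4 'GPUs' gets an equal slice of the batch,
# computes a local gradient, then an all-reduce averages them.
shards = np.split(np.arange(64), 4)
local_grads = [grad(X[s], y[s], w) for s in shards]
avg_grad = np.mean(local_grads, axis=0)     # the all-reduce step

# Equal shards make the averaged gradient identical to the full-batch one.
assert np.allclose(avg_grad, grad(X, y, w))
```

This is why data parallelism changes nothing mathematically about training; it only divides the work.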

(25)

MODEL: from Data Parallelism to *Model Parallelism*

GPT-3 parameter count = 175B

[Chart, as on slide 10: # parameters (log scale) by year: Transformer 65M, BERT 340M, GPT-2 8B 8.3Bn, Turing-NLG 17Bn, GPT-3 175Bn]

[Table, as on slide 11: data center GPU memory by card; Training/HPC/Inference and Training/HPC cards (A100 SXM4, A100 PCIe, V100, V100S) top out at 80 GB]

(26)

MODEL PARALLELISM

What is Model Parallelism?

The model itself is split across GPUs (e.g. layer by layer); deep learning frameworks do not provide this automatically, so it falls to the Data Scientist.

Source: paper, Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform, Oct 2019
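A minimal numerical sketch of the layer split (my own toy example, not framework code): layer 1 lives on one "GPU", layer 2 on another, and only the small activation vector has to cross the interconnect.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(1024, 512))   # layer 1 weights, resident on 'GPU 0'
W2 = rng.normal(size=(512, 10))     # layer 2 weights, resident on 'GPU 1'
x = rng.normal(size=1024)

relu = lambda v: np.maximum(v, 0)

# Single-device reference computation:
full = relu(x @ W1) @ W2

# Model parallelism: GPU 0 computes layer 1, ships the 512-float
# activation to GPU 1, which computes layer 2.
h = relu(x @ W1)        # on GPU 0
out = h @ W2            # h crosses the interconnect; layer 2 on GPU 1

assert np.allclose(full, out)
```

The result is identical to the single-device computation; what changes is where the weights live and what traffic flows between devices.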

(27)

MODEL PARALLELISM and the DATA SCIENTIST

Model Parallelism / Model Parallelism Pipeline
Model Parallelism + Data Parallelism / Model Parallelism Pipeline

Source: Model Parallelism in Deep Learning is NOT What You Think
Source: paper, Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform, Oct 2019
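Why the pipeline variant matters can be seen from simple tick arithmetic (a GPipe-style idealization of my own, ignoring scheduling details): with S pipeline stages and M microbatches, a forward sweep takes S + M - 1 ticks, instead of the S * M ticks you get when only one stage is ever busy.

```python
# Idealized schedule lengths for pipelined model parallelism.
def pipeline_ticks(stages: int, microbatches: int) -> int:
    """All stages busy once the pipeline fills: S + M - 1 ticks."""
    return stages + microbatches - 1

def sequential_ticks(stages: int, microbatches: int) -> int:
    """No overlap: each microbatch traverses all stages alone."""
    return stages * microbatches

# 4 stages, 8 microbatches: 11 ticks pipelined vs. 32 without overlap.
```

The larger M is relative to S, the smaller the pipeline "bubble", which is why microbatching is central to making model parallelism efficient.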

(28)

MODEL PARALLELISM for NLP

NVIDIA Applied Deep Learning Research

NVIDIA Developer Blog: https://developer.nvidia.com/blog/language-modeling-using-megatron-a100-gpu/
Paper: https://arxiv.org/abs/1909.08053

(29)

DATA SCIENTIST

(e.g. GPU 140, 130, 120)

Data center life cycle, the "bathtub" curve, and GPT-3; OpenAI white paper

Sources: The Bathtub Curve and Data Center Equipment Reliability; The GPT-3 Economy

(30)

Who builds and runs the H/W and S/W?

Let the Data Scientist focus on data science!

NVIDIA DGX SuperPOD: DGX A100 (140) + InfiniBand (166) + IB cabling + H/W + S/W

Data Scientist vs. ?

(31)

NVIDIA

GPT-3 – Data Scientist: NVIDIA Research, NVIDIA DevTech

(32)

NVIDIA

GPT-3 – Data Scientist
+ S/W – NVIDIA Research, NVIDIA DevTech, NVIDIA Professional Services

(33)

Compute Fabric Network Architecture (20+ GPU nodes)

- DGX A100 #1 … #20, each with 8x HDR 1-port InfiniBand adapters
- Leaf switches #1–#10: Mellanox QM8790
- Spine switches #1–#5: Mellanox QM8790
- 2x Mellanox UFM (MUA9502H-2SF); Mgmt, Login, Scheduler, Provisioning nodes

(34)

NVIDIA DGX SUPERPOD S/W

(35)

NVIDIA DGX SUPERPOD S/W STACK: DEEPOPS

NVIDIA DeepOps – Ansible playbooks encapsulating GPU best practices

(36)

NVIDIA PROFESSIONAL SERVICES (NVPS)

What is NVPS?

Delivered by an NVIDIA Project Manager, NVIDIA Senior Engineers, and NVIDIA Managers.

What does NVPS do?

– Pre-PO SoW: project initiation, site survey, ready-to-use SKUs per system size
– H/W lead-time tracking, end-to-end deployment for servers and the network (physical/logical)
– Educating the local customer and partner, knowledge transfer to NVIDIA Support, on-site/remote health checks

(37)

NVIDIA

GPT-3 – Data Scientist
+ S/W – NVIDIA Research, NVIDIA DevTech, NVIDIA Professional Services
+ H/W – interconnect: NVIDIA DGX A100 w/ 8 IB adapters

(38)

Compute Fabric Network Architecture (20+ GPU nodes)

- DGX A100 #1 … #20, each with 8x HDR 1-port InfiniBand adapters
- Leaf switches #1–#10: Mellanox QM8790
- Spine switches #1–#5: Mellanox QM8790
- 2x Mellanox UFM (MUA9502H-2SF); Mgmt, Login, Scheduler, Provisioning nodes

(39)

NVIDIA DGX A100: network adapter configuration for the Compute Fabric and the Storage Fabric

- Mellanox ConnectX-6 2-port VPI: InfiniBand HDR (Storage) or Ethernet 100Gbps (in-band)
- Mellanox ConnectX-6 1-port VPI: InfiniBand HDR 200Gbps (Compute Fabric)

(40)

INFINIBAND NETWORK BANDWIDTH

Source: NVIDIA Developer Blog, Scaling Deep Learning Training with NCCL, Sep 2018
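The collective that NCCL runs over this InfiniBand fabric is typically a ring all-reduce. A toy Python model of the algorithm (illustrative only; real NCCL overlaps these steps and runs them over the NICs):

```python
import numpy as np

def ring_allreduce(worker_data):
    """Toy ring all-reduce: reduce-scatter, then all-gather.

    Each of the n workers splits its vector into n chunks; after
    2*(n-1) neighbor-to-neighbor steps every worker holds the full
    element-wise sum, and each link only ever carries one chunk
    (~1/n of the data) per step.
    """
    n = len(worker_data)
    chunks = [list(np.array_split(np.asarray(d, dtype=float), n))
              for d in worker_data]

    # Reduce-scatter: circulate partial sums; after n-1 steps worker r
    # holds the fully reduced chunk (r + 1) % n.
    for s in range(n - 1):
        for r in range(n):
            idx = (r - s) % n
            chunks[(r + 1) % n][idx] = chunks[(r + 1) % n][idx] + chunks[r][idx]

    # All-gather: circulate the finished chunks around the ring.
    for s in range(n - 1):
        for r in range(n):
            idx = (r + 1 - s) % n
            chunks[(r + 1) % n][idx] = chunks[r][idx].copy()

    return [np.concatenate(c) for c in chunks]

grads = [np.arange(8.0) + i for i in range(4)]   # 4 workers' gradients
reduced = ring_allreduce(grads)                  # every worker gets the sum
```

Because each of the 2*(n-1) steps moves only one chunk per link, the per-link traffic stays near 2x the data size no matter how many workers join, which is why the per-link bandwidth of the fabric dominates all-reduce performance.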

(41)

GPU interconnect

Not just Data Parallelism; Model Parallelism too!

GPUs exchange weights and layer activations, so the bandwidth between GPUs matters.

(42)

[Survey chart, as on slide 4: number of GPUs in use / planned; answer buckets 1~2 GPUs, 4~8 GPUs, 20+ GPUs]

(43)

NVIDIA

GPT-3 – Data Scientist
+ S/W – NVIDIA Research, NVIDIA DevTech, NVIDIA Professional Services
+ H/W – interconnect: NVIDIA DGX A100 w/ 8 IB adapters

(44)

(45)
