
AI & Deep Learning Solution

NGC-Ready Systems

Supermicro-validated NVIDIA GPU Cloud (NGC) servers. See NVIDIA's list of NGC-Ready Systems for current validation status.


Training (Datacenter)
  • SYS-4029GP-TVRT: CPU 2 × Xeon Scalable; Accelerators 8 × V100 NVLink; Memory 6TB; Storage 16 × SATA3, 8 × NVMe; Network 100GbE, GPUDirect RDMA; Form Factor 4U Rackmount
  • SYS-2029GP-TR: CPU 2 × Xeon Scalable; Accelerators 2 × V100; Memory 4TB; Storage 8 × SATA3, 2 × NVMe; Network 100GbE, GPUDirect RDMA; Form Factor 2U Rackmount

Inferencing (Datacenter)
  • SYS-2029GP-TR: CPU 2 × Xeon Scalable; Accelerators 4 × T4; Memory 4TB; Storage 8 × SATA3, 2 × NVMe; Network 100GbE, GPUDirect RDMA; Form Factor 2U Rackmount

Inferencing (Edge)
  • SYS-1019D-FHN13TP: CPU 1 × Xeon D; Accelerators 2 × T4; Memory 512GB; Storage 4 × SATA3; Network 10GbE, 10GSFP+; Form Factor Short Depth
  • SYS-E403-9P-FN2T: CPU 1 × Xeon D; Accelerators 2 × T4; Memory 512GB; Storage 4 × SATA3; Network 10GbE, 10GSFP+; Form Factor Box PC, Short Depth
  • SYS-5019D-FN8TP: CPU 1 × Xeon D; Accelerators 1 × T4; Memory 512GB; Storage 4 × SATA3; Network 10GBase-T, 10GSFP+; Form Factor Wall Mount

AI & Deep Learning Solution Ready Servers

1029GQ-TVRT
  • HPC, Artificial Intelligence, Big Data Analytics, Research Lab, Astrophysics, Business Intelligence
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; dual UPI up to 10.4GT/s
  • 12 DIMMs; up to 3TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM
  • 2 Hot-swap 2.5" drive bays, 2 Internal 2.5" drive bays
  • 4 PCI-E 3.0 x16 slots
  • 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
  • 2000W Redundant Titanium Level (96%) Power Supplies
Ask for a quote

4029GP-TVRT
  • Artificial Intelligence, Big Data Analytics, High-performance Computing, Research Lab/National Lab, Astrophysics, Business Intelligence
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
  • 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • 16 Hot-swap 2.5" drive bays (support 8 NVMe drives)
  • 4 PCI-E 3.0 x16 (LP, GPU tray for GPUDirect RDMA), 2 PCI-E 3.0 x16 (LP, CPU tray)
  • 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
  • 2200W (2+2) Redundant Titanium Level (96%) Power Supplies
Ask for a quote

6049GP-TRT
  • AI/Deep Learning, Video Transcoding
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
  • 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM
  • 24 Hot-swap 3.5" drive bays, 2 optional 2.5" U.2 NVMe drives
  • 20 PCI-E 3.0 x16 slots, 1 PCI-E 3.0 x8 (FHFL, in x16 slot)
  • 2x 10GBase-T ports via Intel C622, 1 Dedicated IPMI port
  • 2200W (2+2) Redundant Titanium Level (96%) Power Supplies
Ask for a quote

9029GP-TNVRT
  • AI/Deep Learning, High-Performance Computing
  • Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
  • 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
  • Supports Intel® Optane™ DCPMM
  • 16 Hot-swap 2.5" NVMe drive bays, 6 Hot-swap 2.5" SATA3 drive bays
  • 16 PCI-E 3.0 x16 slots for RDMA via IB EDR, 2 PCI-E 3.0 x16 on board
  • 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
  • 6x 3000W Redundant Titanium Level (96%) Power Supplies
Ask for a quote

AI & Deep Learning Platform

AI & Deep Learning Software Stack
Deep Learning Environment:
  • Frameworks: Caffe, Caffe2, Caffe-MPI, Chainer, Microsoft CNTK, Keras, MXNet, TensorFlow, Theano, PyTorch
  • Libraries: cuDNN, NCCL, cuBLAS
  • User Access: NVIDIA DIGITS
  • Operating Systems: Ubuntu, Docker, NVIDIA Docker
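The stack above is delivered as containers; a minimal sketch of pulling and launching one NGC framework image with Docker and the NVIDIA container runtime (the TensorFlow tag shown is illustrative; pick a current one from the NGC catalog):

```shell
# Pull a TensorFlow framework image from the NGC registry (nvcr.io)
docker pull nvcr.io/nvidia/tensorflow:19.10-py3

# Launch it interactively with GPU access via the NVIDIA container runtime
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:19.10-py3
```

Other frameworks in the list (PyTorch, MXNet, and so on) are pulled the same way from their own NGC repositories.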

AI & Deep Learning Reference Architecture Configuration

SRS-14UGPU-AIV1-01
  • Compute Capability: 2 PFLOPS (GPU FP16)
  • Compute Nodes: 2 × SYS-4029GP-TVRT
  • Total GPUs: 16 × NVIDIA® Tesla® V100 SXM2 32GB HBM2
  • Total GPU Memory: 512GB HBM2
  • Total CPUs: 4 × Intel® Xeon® Gold 6154, 3.00GHz, 18 cores
  • Total System Memory: 768GB DDR4-2666MHz ECC
  • Networking: InfiniBand EDR 100Gbps; 10GBase-T Ethernet
  • Total Storage*: 15.2TB (8 × SATA3 SSDs)
  • Operating System: Ubuntu Linux or CentOS Linux
  • Software: Caffe, Caffe2, DIGITS, Inference Server, PyTorch, NVIDIA® CUDA®, NVIDIA® TensorRT™, Microsoft Cognitive Toolkit (CNTK), MXNet, TensorFlow, Theano, and Torch
  • Max Power Usage: 7.2kW (7,200W)
  • Dimensions: 14 Rack Units, 600 x 800 x 1000 mm (W x H x D)

SRS-24UGPU-AIV1-01
  • Compute Capability: 4 PFLOPS (GPU FP16)
  • Compute Nodes: 4 × SYS-4029GP-TVRT
  • Total GPUs: 32 × NVIDIA® Tesla® V100 SXM2 32GB HBM2
  • Total GPU Memory: 1TB HBM2
  • Total CPUs: 8 × Intel® Xeon® Gold 6154, 3.00GHz, 18 cores
  • Total System Memory: 3TB DDR4-2666MHz ECC
  • Networking: InfiniBand EDR 100Gbps; 10GBase-T Ethernet
  • Total Storage*: 30.4TB (16 × SATA3 SSDs)
  • Operating System: Ubuntu Linux or CentOS Linux
  • Software: Caffe, Caffe2, DIGITS, Inference Server, PyTorch, NVIDIA® CUDA®, NVIDIA® TensorRT™, Microsoft Cognitive Toolkit (CNTK), MXNet, TensorFlow, Theano, and Torch
  • Max Power Usage: 14.0kW (14,000W)
  • Dimensions: 24 Rack Units, 598 x 1163 x 1000 mm (W x H x D)
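The quoted compute and memory totals can be sanity-checked with simple arithmetic, assuming NVIDIA's published figures of roughly 125 TFLOPS FP16 Tensor Core throughput and 32GB HBM2 per V100 SXM2:

```shell
# FP16 Tensor Core throughput, assuming ~125 TFLOPS per V100 SXM2
TFLOPS_PER_V100=125
echo "SRS-14U: $((16 * TFLOPS_PER_V100)) TFLOPS"  # 2000 TFLOPS ~= 2 PFLOPS
echo "SRS-24U: $((32 * TFLOPS_PER_V100)) TFLOPS"  # 4000 TFLOPS ~= 4 PFLOPS

# Aggregate GPU memory at 32GB HBM2 per card
echo "SRS-14U: $((16 * 32)) GB HBM2"              # 512 GB
echo "SRS-24U: $((32 * 32)) GB HBM2"              # 1024 GB = 1 TB
```

Both results match the Compute Capability and Total GPU Memory rows above.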

Discuss How to Cost-Optimize AI & Deep Learning

Ask an Expert

© SERVERSDIRECT. ALL RIGHTS RESERVED.
