NVIDIA GPU Servers for AI, Deep Learning | ASA Computers

Acceleration For Every Workload

GPU Computing Servers

Incredible performance for AI training & inference

ASA's GPU server platforms, powered by NVIDIA A100 / H100 Tensor Core GPUs, deliver unprecedented acceleration and flexibility for AI, data analytics, and HPC applications. The combination of massive GPU-accelerated compute, state-of-the-art server hardware, and software optimizations enables organizations to scale to hundreds or thousands of nodes to meet the biggest challenges of the next generation of AI applications.
Unified Architecture
A single architecture that accelerates modern applications across diverse workloads.
Enterprise-Grade Infrastructure
Confidently deploy scalable hardware and software solutions that securely and optimally run accelerated workloads with NVIDIA-Certified systems.
Cost Savings
Full-stack innovation across hardware and software for faster ROI.
EDU Discounts
Additional discounts available for universities.

Product Offerings

Enterprise adoption of AI is now mainstream, and organizations need end-to-end, AI-ready infrastructure that will accelerate them into this new era. Our GPU servers are designed to provide optimal performance for the widest range of applications. Have a specific application requirement? Our Solution Experts can help you find the right hardware solution, or you can browse our complete offerings below.

DELL POWEREDGE XE9680 GPU SERVER

  • 6U 8-way GPU server
  • Two 4th Gen Intel® Xeon® Scalable processors with up to 56 cores per processor
  • 8x NVIDIA H100 700W SXM5 for extreme performance or 8x NVIDIA A100 500W SXM4 GPUs, fully interconnected with NVIDIA NVLink technology
  • Up to 10 x16 Gen5 PCIe full-height, half-length slots
GET QUOTE

2U High Density 4-Bay 4X GPU Server

SKU: ASA2112-X2-R

  • Supports 3rd Gen Intel® Xeon® Scalable Processors
  • Supports NVIDIA HGX™ A100 with 4x 40GB/80GB GPU
  • Up to 4x PCI-E Gen 4.0 X16 LP Slots
  • Supports NVIDIA® NVLink® and NVSwitch™ technology
GET QUOTE

2U High Density 4-Bay 4X GPU EPYC Server

SKU: ASA2113-EP2-R

  • Supports AMD EPYC™ 7003/7002 Series Processors
  • Supports NVIDIA HGX™ A100 with 4x 40GB/80GB GPU
  • Up to 4x PCI-E Gen 4.0 X16 LP Slots
  • Supports NVIDIA® NVLink® and NVSwitch™ technology
GET QUOTE

Supermicro SYS-421GU-TNXR 4U 4x GPU Server

SKU: ASA4105-X2-R

  • Supports dual Intel® 4th Gen Xeon® Scalable processors
  • 4 x NVIDIA HGX H100 SXM5
  • 6x 2.5" hot-swap NVMe/SATA drive bays
  • 2 x M.2 NVMe or 2 x M.2 SATA3
GET QUOTE

4U AI 8X GPU EPYC Server

SKU: ASA4104-EP2-R

  • Supports AMD EPYC™ 7003 Series Processors
  • Supports NVIDIA HGX™ A100 with 8x SXM4 GPU
  • Supports NVIDIA® NVLink® and NVSwitch™ technology
  • 8-Channel RDIMM/LRDIMM DDR4 per processor, 32 x DIMMs
GET QUOTE

Supermicro SYS-521GU-TNXR 5U 4x GPU Server

SKU: ASA5101-X2-R

  • Supports dual Intel® 4th Gen Xeon® Scalable processors
  • 4 x NVIDIA HGX H100 SXM5
  • 10 x 2.5" hot-swap NVMe/SATA drive bays
  • 2 x M.2 NVMe or 2 x M.2 SATA3
GET QUOTE

Supermicro A+ Server 8125GS-TNHR 8U 8x GPU Server

SKU: ASA8102-EP2-R

  • Supports dual AMD EPYC™ 9004 series Processors
  • 8x NVIDIA® HGX™ H100 SXM5
  • 12 x 2.5" hot-swap NVMe drive bays
  • 2 x 2.5" hot-swap SATA
  • 1 x M.2 NVMe for boot drive
GET QUOTE

Supermicro SYS-741GE-TNRT 4x GPU Workstation

SKU: ASA9104-X2-R

  • Supports dual Intel® 4th Gen Xeon® Scalable processors
  • 4 x NVIDIA GPUs
  • 8 x 3.5" hot-swap NVMe/SATA/SAS drive bays
  • 2 x M.2 NVMe
GET QUOTE

Featuring NVIDIA H100 Tensor Core GPU

The GPU For Accelerated Computing

The NVIDIA® H100 Tensor Core GPU, based on the NVIDIA Hopper™ architecture, delivers unprecedented acceleration to power the world's highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. Accelerated servers with H100 deliver unparalleled compute power along with 3 terabytes per second (TB/s) of memory bandwidth per GPU and scalability with NVLink and NVSwitch™, so you can tackle data analytics at high performance and scale to support massive datasets.

Transformer Engine

The Transformer Engine uses software and Hopper Tensor Core technology designed to accelerate the training of transformer-based models. Hopper Tensor Cores can apply mixed FP8 and FP16 precisions to dramatically accelerate AI calculations for transformers.
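
As a rough illustration, the sketch below shows how FP8 mixed precision is typically enabled through NVIDIA's open-source Transformer Engine library for PyTorch; the layer sizes are arbitrary, and the snippet assumes the transformer-engine package and an H100-class GPU are available.

```python
# Minimal FP8 mixed-precision sketch using NVIDIA Transformer Engine
# (assumes transformer-engine is installed and an H100-class GPU is present;
# the 4096-wide layer and batch size are illustrative only).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# The FP8 recipe controls how scaling factors are managed (HYBRID uses
# E4M3 for forward activations/weights and E5M2 for backward gradients).
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()
x = torch.randn(8, 4096, device="cuda")

# Inside fp8_autocast, supported layers run their matrix math in FP8 on
# Hopper Tensor Cores while master weights stay in higher precision.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
y.sum().backward()
```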

NVLink Switch System

The NVLink Switch System enables the scaling of multi-GPU input/output (IO) across multiple servers at 900 gigabytes per second (GB/s) bidirectional per GPU, over 7X the bandwidth of PCIe Gen5.
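
A quick back-of-the-envelope check of that ratio, using an approximate figure of roughly 63 GB/s of usable bandwidth per direction for a PCIe Gen5 x16 link:

```python
# Rough arithmetic behind the "over 7X PCIe Gen5" comparison.
# The PCIe figure is an approximation of usable x16 throughput.
nvlink_bidir_gb_s = 900                      # per H100 GPU, bidirectional
pcie_gen5_x16_dir_gb_s = 63                  # ~63 GB/s per direction
pcie_gen5_x16_bidir_gb_s = 2 * pcie_gen5_x16_dir_gb_s   # ~126 GB/s

print(round(nvlink_bidir_gb_s / pcie_gen5_x16_bidir_gb_s, 1))  # ~7.1
```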

Multi-Instance GPU (MIG)

The Hopper architecture's second-generation MIG supports multi-tenant, multi-user configurations in virtualized environments, securely partitioning the GPU into isolated, right-sized instances.
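
One way to see this partitioning from software is through NVIDIA's NVML bindings; the sketch below, assuming the nvidia-ml-py (pynvml) package and a MIG-capable driver, checks whether MIG mode is enabled on GPU 0 and lists the instances it exposes.

```python
# Inspect MIG partitioning via the nvidia-ml-py (pynvml) bindings.
# Assumes a MIG-capable GPU/driver and `pip install nvidia-ml-py`.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

if current == pynvml.NVML_DEVICE_MIG_ENABLE:
    # Each MIG device is an isolated slice with its own memory and compute.
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
            print(" ", pynvml.nvmlDeviceGetName(mig))
        except pynvml.NVMLError:
            break

pynvml.nvmlShutdown()
```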

DPX Instructions

Hopper's DPX instructions accelerate dynamic programming algorithms by 40X compared to CPUs and 7X compared to NVIDIA Ampere architecture GPUs.
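
For context, DPX targets the add-then-min/max inner loops of classic dynamic-programming algorithms such as sequence alignment and edit distance; the plain-Python sketch below only illustrates that recurrence pattern (DPX itself is exposed as CUDA intrinsics, not Python).

```python
# Levenshtein edit distance: a dynamic-programming recurrence whose
# inner loop is the fused add-then-min pattern that DPX accelerates.
# Pure Python, purely illustrative; it does not use DPX or the GPU.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```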

Optimized AI Software Stack

We provide customized deep learning framework installation on our GPU platforms for an end-to-end integrated solution.
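
As an example of what an installed stack looks like from the user's side, a short sanity check like the one below (shown for PyTorch; other frameworks have equivalent calls) confirms the framework sees the GPUs and the CUDA/cuDNN libraries it was built against.

```python
# Quick sanity check of a GPU-enabled PyTorch installation.
import torch

print("PyTorch:", torch.__version__)
print("CUDA runtime:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPUs visible:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  cuda:{i} ->", torch.cuda.get_device_name(i))
```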

DEEP LEARNING FRAMEWORK

Anaconda
Caffe
cuDNN
Docker
Keras
MathWorks
Microsoft Cognitive Toolkit
MXNet
NVIDIA DIGITS
NVIDIA CUDA
PyTorch
Apache Spark
Theano
TensorFlow
Ubuntu

SUPPORTS 2000+ GPU-ACCELERATED APPLICATIONS

AMBER
GAUSSIAN
LS-DYNA
OpenFOAM
VASP
ANSYS Fluent
GROMACS
NAMD
Simulia Abaqus
WRF

Need to get in touch with our solution experts?
