GPU Accelerator Configuration Guide¶
GPU accelerator configuration enables you to specify dedicated GPU resources for your SaaS Products. This feature allows your services to leverage hardware acceleration for AI/ML workloads, high-performance computing, graphics processing, and other GPU-intensive tasks.
This guide covers two ways to use GPUs with your SaaS Products on Google Cloud Platform:
- External GPU attachment using `acceleratorConfiguration` (N1 instances + Tesla GPUs only)
- Built-in GPU instances (G2, A2, A3, A4 families with modern GPUs)
External GPU Attachment with acceleratorConfiguration¶
The `acceleratorConfiguration` feature is part of the `configurationOverrides` section in your service compute configuration. It provides a declarative way to specify the type and count of GPU accelerators that should be attached to your compute instances.
Critical Limitations
External GPU attachment via `acceleratorConfiguration` is only supported with:
- **Instance Family**: N1 instances only
- **GPU Types**: Tesla series GPUs only (T4, V100, P100, P4)
- **Cloud Provider**: Google Cloud Platform only
All other instance families (E2, C2, M1, N2, etc.) **cannot** attach external GPUs.
Configuration Properties¶
Each accelerator configuration requires:
- `type`: The Tesla GPU accelerator type (required)
- `count`: Number of GPU accelerators to attach (required, minimum: 1)
Supported Accelerator Types¶
These are the only GPU types that can be attached to N1 instances:
- `nvidia-tesla-t4` - NVIDIA Tesla T4 (16GB GDDR6)
- `nvidia-tesla-v100` - NVIDIA Tesla V100 (16GB/32GB HBM2)
- `nvidia-tesla-p100` - NVIDIA Tesla P100 (16GB HBM2)
- `nvidia-tesla-p4` - NVIDIA Tesla P4 (8GB GDDR5)
- `nvidia-tesla-p4-vws` - NVIDIA Tesla P4 Virtual Workstation (8GB GDDR5)
- `nvidia-tesla-t4-vws` - NVIDIA Tesla T4 Virtual Workstation (16GB GDDR6)
- `nvidia-tesla-p100-vws` - NVIDIA Tesla P100 Virtual Workstation (16GB HBM2)
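The instance-family and GPU-type constraints above can be checked before deploying. The following is a minimal illustrative sketch, not part of any Omnistrate tooling; the function name and error messages are invented for this example:

```python
# Illustrative pre-flight check for the constraints described above
# (N1-only instance family, Tesla-only GPU types, count >= 1).
SUPPORTED_TESLA_TYPES = {
    "nvidia-tesla-t4", "nvidia-tesla-v100", "nvidia-tesla-p100",
    "nvidia-tesla-p4", "nvidia-tesla-p4-vws", "nvidia-tesla-t4-vws",
    "nvidia-tesla-p100-vws",
}

def validate_accelerator_config(instance_name: str, accel: dict) -> list[str]:
    """Return a list of problems with an acceleratorConfiguration entry."""
    problems = []
    # External GPU attachment only works on the N1 instance family.
    if not instance_name.startswith("n1-"):
        problems.append(f"{instance_name}: external GPUs require the N1 family")
    if accel.get("type") not in SUPPORTED_TESLA_TYPES:
        problems.append(f"unsupported accelerator type: {accel.get('type')!r}")
    if not isinstance(accel.get("count"), int) or accel["count"] < 1:
        problems.append("count must be an integer >= 1")
    return problems

# Valid: N1 instance with a Tesla T4
print(validate_accelerator_config("n1-standard-8",
                                  {"type": "nvidia-tesla-t4", "count": 1}))  # []
# Invalid: E2 instances cannot attach external GPUs
print(validate_accelerator_config("e2-standard-4",
                                  {"type": "nvidia-tesla-t4", "count": 1}))
```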
Examples¶
Basic Tesla T4 configuration (Docker Compose Style)¶
```yaml
x-omnistrate-compose-spec:
  services:
    gpu-service:
      x-omnistrate-compute:
        instanceTypes:
          - name: n1-standard-8
            cloudProvider: gcp
            configurationOverrides:
              acceleratorConfiguration:
                type: "nvidia-tesla-t4"
                count: 1
```
High-performance Tesla V100 configuration (Omnistrate Spec)¶
```yaml
services:
  - name: gpu-v100-service
    compute:
      instanceTypes:
        - name: n1-standard-4
          cloudProvider: gcp
          configurationOverrides:
            acceleratorConfiguration:
              type: "nvidia-tesla-v100"
              count: 1
```
Multi-GPU setup with Tesla T4¶
```yaml
services:
  - name: gpu-cluster
    compute:
      instanceTypes:
        - name: n1-standard-16
          cloudProvider: gcp
          configurationOverrides:
            acceleratorConfiguration:
              type: "nvidia-tesla-t4"
              count: 4
```
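When sizing a multi-GPU node like the one above, total attached GPU memory is simply count × per-GPU memory. A small back-of-envelope helper (illustrative only, not an Omnistrate API; memory figures come from the supported-types list earlier in this guide):

```python
# Per-GPU memory in GB, from the supported accelerator types list.
# Note: the V100 also ships in a 16GB variant; 32GB is used here.
GPU_MEMORY_GB = {
    "nvidia-tesla-t4": 16,
    "nvidia-tesla-v100": 32,
    "nvidia-tesla-p100": 16,
    "nvidia-tesla-p4": 8,
}

def total_gpu_memory_gb(accelerator_type: str, count: int) -> int:
    """Total attached GPU memory = per-GPU memory x accelerator count."""
    return GPU_MEMORY_GB[accelerator_type] * count

# The multi-GPU example above: 4 x Tesla T4 (16GB each)
print(total_gpu_memory_gb("nvidia-tesla-t4", 4))  # 64
```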
Built-in GPU Instances (Alternative Approach)¶
If you need modern GPUs (L4, A100, H100), use built-in GPU instance families instead of `acceleratorConfiguration`. These instances come with GPUs pre-installed and offer better performance.
No Configuration Required
Built-in GPU instances do not use `acceleratorConfiguration`. Simply specify the instance type - the GPUs are already included.
Some Available Built-in GPU Instance Families¶
G2 Family - NVIDIA L4 GPUs¶
- GPU: NVIDIA L4 (24GB GDDR6)
- Use Cases: AI inference, machine learning, graphics workloads
- Instance Types: `g2-standard-4`, `g2-standard-8`, `g2-standard-16`, etc.
A2 Family - NVIDIA A100 GPUs¶
- GPU: NVIDIA A100 (40GB HBM2e)
- Use Cases: High-performance training, large-scale inference
- Instance Types: `a2-highgpu-1g`, `a2-highgpu-2g`, `a2-highgpu-4g`, etc.
A3 Family - NVIDIA H100 GPUs¶
- GPU: NVIDIA H100 (80GB HBM3)
- Use Cases: Advanced AI training, large language models
- Instance Types: `a3-highgpu-8g`, `a3-megagpu-8g`, etc.
A4 Family - NVIDIA B200 GPUs¶
- GPU: NVIDIA B200 (180GB HBM3e)
- Use Cases: Large-scale AI training and inference
- Instance Types: `a4-highgpu-8g`
Built-in GPU Examples¶
Using G2 with built-in L4 GPUs¶
```yaml
x-omnistrate-compose-spec:
  services:
    modern-gpu-service:
      x-omnistrate-compute:
        instanceTypes:
          - name: g2-standard-8
            cloudProvider: gcp
            # No acceleratorConfiguration needed - L4 GPU is built-in
```
Using A2 with built-in A100 GPUs¶
```yaml
services:
  - name: training-service
    compute:
      instanceTypes:
        - name: a2-highgpu-1g
          cloudProvider: gcp
          # No acceleratorConfiguration needed - A100 GPU is built-in
```
Using A3 with built-in H100 GPUs¶
```yaml
services:
  - name: llm-service
    compute:
      instanceTypes:
        - name: a3-highgpu-8g
          cloudProvider: gcp
          # No acceleratorConfiguration needed - H100 GPUs are built-in
```