Deployment Cell Node Pool Management¶
The deployment-cell CTL command now includes comprehensive node pool management capabilities for AWS, GCP, and Azure deployment cells. For more details, see the CTL documentation: Deployment Cells.
Why AWS cells can show many nodegroups¶
On AWS, Omnistrate may create distinct managed nodegroups for each effective combination of instance type, availability zone, and placement mode. Over time, especially after compute iteration, a cell can accumulate zero-sized nodegroups that no longer host workloads but still count toward EKS nodegroup quotas.
Use this page to identify, scale down, or delete stale nodegroups. For quota planning and quota increase recommendations, see Service Quotas.
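To identify candidates for cleanup, the node pool commands described below can be composed in a small script. This is a sketch, not an official tool: the `Name` and `CurrentNodes` field names are taken from the output fields documented on this page, `python3` is assumed to be available for JSON parsing, and the JSON shape should be verified against your CLI version first.

```shell
# Sketch: print node pools in a cell that currently have zero nodes
# (candidates for scale-down or deletion). Field names (Name,
# CurrentNodes) are assumed from the documented output fields.
list_stale_nodepools() {
  CELL_ID="$1"
  omnistrate-ctl deployment-cell list-nodepools --id "$CELL_ID" -o json \
    | python3 -c 'import json,sys; [print(p["Name"]) for p in json.load(sys.stdin)]' \
    | while read -r POOL; do
        COUNT=$(omnistrate-ctl deployment-cell describe-nodepool \
          --id "$CELL_ID" --nodepool "$POOL" -o json \
          | python3 -c 'import json,sys; print(json.load(sys.stdin)["CurrentNodes"])')
        [ "$COUNT" -eq 0 ] && echo "$POOL"
      done
}

# Example invocation (hypothetical cell ID):
# list_stale_nodepools hc-9sp0n4418
```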
When to Use Node Pool Management¶
CUSTOM_TENANCY Plans (Manual Management Required)¶
Node pool management commands are primarily designed for CUSTOM_TENANCY plans where workloads are deployed through Helm, Operators, or Kustomize. In these scenarios, you may need to manually manage node pools to:
- Scale down node pools to reduce infrastructure costs during off-hours or low-usage periods
- Clean up stale or inactive node pools that are no longer needed but consuming resources
- Optimize costs by removing node pools associated with terminated or inactive customer instances
These commands provide a single-command way to manage node pool capacity and cleanup, helping you control infrastructure costs effectively.
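For example, off-hours scale-downs can be scheduled externally. A hypothetical crontab sketch (the cell and pool IDs are placeholders, and `omnistrate-ctl` must be authenticated and on the cron user's PATH):

```
# Scale down at 20:00 on weekdays, restore at 07:00 (server local time)
0 20 * * 1-5 omnistrate-ctl deployment-cell scale-down-nodepool --id hc-xyz --nodepool my-nodepool
0 7  * * 1-5 omnistrate-ctl deployment-cell scale-up-nodepool --id hc-xyz --nodepool my-nodepool
```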
OMNISTRATE_DEDICATED_TENANCY and OMNISTRATE_MULTI_TENANCY Plans (Automatic Management)¶
For plans using OMNISTRATE_DEDICATED_TENANCY or OMNISTRATE_MULTI_TENANCY with resources imported through Docker Compose / Compose Spec, the platform automatically manages node pool lifecycle:
- Automatic scale-up and scale-down based on workload demand
- Automatic cleanup of node pools when instances are terminated
- No manual intervention required for cost optimization
You typically do not need to use these commands for automatically managed plans unless you have specific infrastructure management requirements.
Commands Overview¶
list-nodepools¶
List all node pools in a deployment cell with their configuration details.
Usage:
omnistrate-ctl deployment-cell list-nodepools --id <deployment-cell-id>
Options:
- --id, -i: Deployment cell ID (required)
- --output, -o: Output format - table (default), text, or json
Output Fields:
- Name: Node pool identifier
- Type: Entity type (NODEPOOL, NODE_GROUP, or AZURE_NODEPOOL)
- MachineType: Instance/VM type (e.g., n2-highmem-2, t3.medium, Standard_E2as_v6)
- ImageType: OS image type (e.g., COS_CONTAINERD, AL2_x86_64)
- MinNodes: Minimum autoscaling node count
- MaxNodes: Maximum autoscaling node count
- Location: Zone, availability zone, or subnet location
- AutoRepair: Automatic node repair enabled (GCP)
- AutoUpgrade: Automatic node upgrade enabled (GCP)
- AutoScaling: Autoscaling enabled (Azure)
- CapacityType: Node capacity type - ON_DEMAND or SPOT (AWS)
- PrivateSubnet: Whether nodes use private subnets
Examples:
# List all nodepools in table format
omnistrate-ctl deployment-cell list-nodepools --id hc-9c5ok6tmv
# List nodepools as JSON
omnistrate-ctl deployment-cell list-nodepools --id hc-9c5ok6tmv -o json
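The JSON output lends itself to local filtering. A sketch using captured output and `python3` (the field names follow the output fields listed above; the sample pool names and values are illustrative only):

```shell
# The sample below stands in for real `list-nodepools -o json` output
# (shape assumed from the documented output fields):
cat > pools.json <<'EOF'
[
  {"Name": "pt-abc-n2-highmem-2-a", "MachineType": "n2-highmem-2", "CapacityType": "ON_DEMAND"},
  {"Name": "pt-def-t3-medium-c", "MachineType": "t3.medium", "CapacityType": "SPOT"}
]
EOF

# Print the names of all SPOT-capacity pools:
python3 - <<'EOF'
import json
with open("pools.json") as f:
    pools = json.load(f)
for p in pools:
    if p.get("CapacityType") == "SPOT":
        print(p["Name"])
EOF
# prints: pt-def-t3-medium-c
```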
describe-nodepool¶
Get detailed information about a specific node pool, including current node count.
Usage:
omnistrate-ctl deployment-cell describe-nodepool --id <deployment-cell-id> --nodepool <nodepool-name>
Options:
- --id, -i: Deployment cell ID (required)
- --nodepool, -n: Node pool name (required)
- --output, -o: Output format - table (default), text, or json
Output Fields:
All fields from list-nodepools, plus:
- CurrentNodes: Current number of running nodes in the pool
Examples:
# Describe a GCP nodepool
omnistrate-ctl deployment-cell describe-nodepool \
--id hc-9c5ok6tmv \
--nodepool pt-uzdahfq76b-n2-highmem-2-a
# Describe an AWS nodegroup with JSON output
omnistrate-ctl deployment-cell describe-nodepool \
--id hc-9sp0n4418 \
--nodepool hc-9sp0n4418-pt-uzdahfq76b-r7i-large-us-east-1c \
-o json
# Describe an Azure nodepool
omnistrate-ctl deployment-cell describe-nodepool \
--id hc-nnqkjzz9j \
--nodepool izahdemsjqy9
scale-down-nodepool¶
Scale down a node pool to zero nodes for cost savings.
Usage:
omnistrate-ctl deployment-cell scale-down-nodepool --id <deployment-cell-id> --nodepool <nodepool-name>
Options:
- --id, -i: Deployment cell ID (required)
- --nodepool, -n: Node pool name (required)
Behavior:
- Sets the node pool's maximum size to 0
- Evicts all running nodes in the pool
- Node pool configuration remains intact
- Can be reversed with scale-up-nodepool
- Useful for reducing costs during off-hours or low-usage periods
Examples:
# Scale down a GCP nodepool
omnistrate-ctl deployment-cell scale-down-nodepool \
--id hc-9c5ok6tmv \
--nodepool pt-uzdahfq76b-n2-highmem-2-a
# Scale down an AWS nodegroup
omnistrate-ctl deployment-cell scale-down-nodepool \
--id hc-9sp0n4418 \
--nodepool hc-9sp0n4418-r-4qouebzi1o-t3-medium-us-east-1c
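To scale down every pool in a cell at once, the list and scale-down commands can be composed. This is a sketch under the same assumptions as above: `python3` is available for JSON parsing, and the `Name` field appears in list-nodepools' JSON output.

```shell
# Sketch: scale down all node pools in a deployment cell.
scale_down_all_nodepools() {
  CELL_ID="$1"
  omnistrate-ctl deployment-cell list-nodepools --id "$CELL_ID" -o json \
    | python3 -c 'import json,sys; [print(p["Name"]) for p in json.load(sys.stdin)]' \
    | while read -r POOL; do
        omnistrate-ctl deployment-cell scale-down-nodepool \
          --id "$CELL_ID" --nodepool "$POOL"
      done
}

# Example invocation (hypothetical cell ID):
# scale_down_all_nodepools hc-xyz
```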
scale-up-nodepool¶
Restore a node pool to its default maximum capacity of 450 nodes.
Usage:
omnistrate-ctl deployment-cell scale-up-nodepool --id <deployment-cell-id> --nodepool <nodepool-name>
Options:
- --id, -i: Deployment cell ID (required)
- --nodepool, -n: Node pool name (required)
Behavior:
- Sets the node pool's maximum size to 450 (default for all clouds)
- Restores autoscaling capacity after scale-down
- Nodes are provisioned on-demand by the autoscaler as workloads require
- Does not immediately create nodes - autoscaler manages actual node count
Examples:
# Scale up a previously scaled-down nodepool
omnistrate-ctl deployment-cell scale-up-nodepool \
--id hc-9c5ok6tmv \
--nodepool pt-uzdahfq76b-n2-highmem-2-a
# Restore an AWS nodegroup to default capacity
omnistrate-ctl deployment-cell scale-up-nodepool \
--id hc-9sp0n4418 \
--nodepool hc-9sp0n4418-pt-uzdahfq76b-r7i-large-us-east-1c
delete-nodepool¶
Permanently delete a node pool from a deployment cell.
Usage:
omnistrate-ctl deployment-cell delete-nodepool --id <deployment-cell-id> --nodepool <nodepool-name>
Options:
- --id, -i: Deployment cell ID (required)
- --nodepool, -n: Node pool name (required)
Behavior:
- Permanently deletes the node pool configuration
- Evicts all nodes and removes the pool from the cluster
- Operation can take up to 10 minutes
- Shows a spinner during deletion
- Cannot be reversed - use scale-down-nodepool if you want to preserve the configuration
Examples:
# Delete a GCP nodepool
omnistrate-ctl deployment-cell delete-nodepool \
--id hc-9c5ok6tmv \
--nodepool pt-uzdahfq76b-n2-highmem-2-a
# Delete an AWS nodegroup
omnistrate-ctl deployment-cell delete-nodepool \
--id hc-9sp0n4418 \
--nodepool hc-9sp0n4418-r-4qouebzi1o-t3-medium-us-east-1c
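Because deletion is irreversible, a cleanup script should check the current node count first. A cautious sketch (the `CurrentNodes` field name is assumed from describe-nodepool's documented output, and `python3` is assumed available):

```shell
# Sketch: delete a pool only if it currently has zero nodes.
delete_if_empty() {
  CELL_ID="$1"
  POOL="$2"
  COUNT=$(omnistrate-ctl deployment-cell describe-nodepool \
    --id "$CELL_ID" --nodepool "$POOL" -o json \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["CurrentNodes"])')
  if [ "$COUNT" -eq 0 ]; then
    omnistrate-ctl deployment-cell delete-nodepool --id "$CELL_ID" --nodepool "$POOL"
  else
    echo "refusing to delete $POOL: $COUNT node(s) still running" >&2
  fi
}

# Example invocation (hypothetical IDs):
# delete_if_empty hc-9sp0n4418 hc-9sp0n4418-r-4qouebzi1o-t3-medium-us-east-1c
```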
Common Workflows¶
Cost Optimization¶
# Scale down nodepools during off-hours
omnistrate-ctl deployment-cell scale-down-nodepool --id hc-xyz --nodepool my-nodepool
# Scale up when traffic returns
omnistrate-ctl deployment-cell scale-up-nodepool --id hc-xyz --nodepool my-nodepool
Node Pool Lifecycle¶
# 1. View existing nodepools
omnistrate-ctl deployment-cell list-nodepools --id hc-xyz
# 2. Get details on a specific node pool to get the current node count
omnistrate-ctl deployment-cell describe-nodepool --id hc-xyz --nodepool my-nodepool
# 3. Scale down for cost savings
omnistrate-ctl deployment-cell scale-down-nodepool --id hc-xyz --nodepool my-nodepool
# 4. Later, restore capacity
omnistrate-ctl deployment-cell scale-up-nodepool --id hc-xyz --nodepool my-nodepool
# 5. Or permanently remove if no longer needed
omnistrate-ctl deployment-cell delete-nodepool --id hc-xyz --nodepool my-nodepool