Deployment Cell Amenities¶
Overview¶
Deployment cell amenities are additional infrastructure components (Helm charts) that can be installed and managed on your deployment cell to enhance functionality and provide essential services. Omnistrate provides both managed amenities and support for custom amenities to meet your specific infrastructure requirements.
This guide demonstrates how to manage deployment cell amenities using the Omnistrate command-line tool.
Prerequisites¶
Before managing deployment cell amenities, ensure you have:
- Omnistrate CTL installed (installation guide)
- Valid credentials for the Omnistrate platform
- Appropriate permissions to manage deployment cells
- Access to your target environment (e.g., PROD, STAGING)
Deployment cells support two types of amenities:
Amenities are the right place for components that are cluster-wide rather than instance-specific, such as monitoring stacks, CSI drivers, ingress controllers, shared Operators, and policy controllers. Managing them at the cell layer lets you upgrade them once per cluster instead of tying their lifecycle to every deployment instance.
Managed Amenities¶
Pre-configured Helm charts maintained by Omnistrate that are automatically available for installation. These amenities are:
- Fully managed: Omnistrate handles chart configuration, updates, and maintenance
- Cloud-optimized: Configured with best practices for each cloud provider
- Production-ready: Tested and validated for enterprise use
Common managed amenities include:
- Observability Prometheus: Monitoring and metrics collection
- Kubernetes Dashboard: Web-based Kubernetes cluster management interface
- External DNS: Automatic DNS record management for Kubernetes services
- Cert Manager: Automatic SSL certificate provisioning and renewal
- Nginx Ingress Controller: HTTP/HTTPS traffic routing and load balancing
Each cloud provider has additional managed amenities.
Custom Amenities¶
- Helm charts that you configure and manage yourself.
- Plain Kubernetes Manifests that you can deploy and manage on your deployment cell.
Execution Model
Omnistrate currently applies custom amenities in parallel. There is no ordering guarantee between two custom amenities in the same sync.
If one amenity depends on another resource being present first, such as a namespace, Secret, CRD, or image pull secret, use one of these patterns:
- Package the prerequisite resources into the same Helm chart so the chart owns the ordering.
- Run the rollout in two passes: first sync the prerequisite manifests, apply pending changes, then sync the dependent Helm amenities.
- Pre-create shared prerequisites such as image pull secrets with `KubernetesManifest` amenities, and keep dependent charts in a later sync.
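As a sketch, the two-pass rollout maps onto the CLI commands documented later in this guide (`hc-xxxxxxx` is a placeholder cell ID; this assumes the template contains only the prerequisite manifests before pass one and the dependent charts before pass two):

```shell
# Pass 1: template contains only the prerequisite KubernetesManifest amenities.
omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx --sync-with-template
omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx

# Pass 2: after adding the dependent Helm amenities to the template, sync again.
omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx --sync-with-template
omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx
```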
Deletion and rollback use the same model
The same rendering and parallelism rules apply when amenities are removed or when you revert to an older template.
- `KubernetesManifest` amenities are rendered again during removal so Omnistrate can determine what to delete.
- If a manifest contains unresolved expressions, unsupported function chains, or values that no longer exist, the delete or revert workflow can block the entire sync.
- Keep manifest amenities deletion-safe. Prefer direct values or single secret substitutions rather than multi-step transforms.
Managing Deployment Cell Configuration Templates¶
Configuration templates define which amenities are available for a deployment cell on each cloud provider.
Note
By default, Omnistrate uses a Configuration Template per cloud provider. If you need to define different templates per Environment, please reach out to [email protected]
Template Updates and Existing Cells
Updating an organization-level amenities template changes the desired template for that cloud provider, but it does not immediately reconfigure existing deployment cells.
To roll those changes out to an existing cell:
- Sync the cell with the latest template using `omnistrate-ctl deployment-cell update-config-template --id <cell-id> --sync-with-template`
- Apply the pending changes using `omnistrate-ctl deployment-cell apply-pending-changes --id <cell-id>`
Generate Configuration Template¶
Create a template with all available amenities for a specific cloud provider:
When prompted, select your login method and provide credentials.
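A hypothetical sketch of this step; the exact subcommand and flag names are assumptions, so verify them with `omnistrate-ctl deployment-cell --help`:

```shell
# Log in first; you will be prompted for a login method and credentials.
omnistrate-ctl login

# Generate a template for one cloud provider (subcommand/flag names are assumptions).
omnistrate-ctl deployment-cell generate-config-template --cloud-provider aws > amenities-aws.yaml
```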
Using Schema Validation¶
To simplify authoring this specification file, Omnistrate provides a JSON schema that can be used for validation. You can use the JSON schema in IDEs that use the YAML Language Server (e.g., VSCode or NeoVim).
```yaml
# yaml-language-server: $schema=https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json
```
For IntelliJ, replace the top line with the following to set up the YAML schema:
```yaml
# $schema: https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json
```
Review Template Structure¶
The generated template contains two main sections:
```yaml
# yaml-language-server: $schema=https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json
managedAmenities:
  - name: EFS CSI Driver
    description: EFS CSI Driver
    type: Helm
  - name: AWS Load Balancer Controller
    description: AWS Load Balancer Controller
    type: Helm
  - name: Cluster Autoscaler
    description: Cluster Autoscaler
    type: Helm
  # ... additional managed amenities
customAmenities:
  - name: EBS CSI Driver
    description: EBS CSI Driver
    type: Helm
    properties:
      ChartName: "aws-ebs-csi-driver"
      ChartVersion: "2.28.1"
      ChartRepoName: "aws-ebs-csi-driver"
      ChartRepoURL: "https://kubernetes-sigs.github.io/aws-ebs-csi-driver"
      CredentialsProvider:
        Type: "none"
      DefaultNamespace: "kube-system"
      ChartValues:
        image:
          repository: "public.ecr.aws/ebs-csi-driver/aws-ebs-csi-driver"
          tag: "v1.27.0"
        controller:
          replicaCount: 2
          resources:
            requests:
              cpu: "10m"
              memory: "40Mi"
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: omnistrate.com/control-plane
                        operator: Exists
          tolerations:
            - key: CriticalAddonsOnly
              value: "true"
              effect: NoSchedule
```
Warning
Omnistrate creates a system node pool that is independent of the service plan node pool. To allow Operators and Controllers to run on the system node pool, affinity rules and tolerations must be defined as shown in the example and explained in the Affinity rules and tolerations section.
Helm Chart Amenities¶
When defining custom Helm amenities, include these properties:
| Property | Description | Required |
|---|---|---|
| `ChartName` | Name of the Helm chart | Yes |
| `ChartVersion` | Specific version of the chart | Yes |
| `ChartRepoName` | Repository name identifier | Yes |
| `ChartRepoURL` | Repository URL | Yes |
| `CredentialsProvider` | Authentication configuration | No |
| `DefaultNamespace` | Kubernetes namespace for deployment | No |
| `ChartValues` | Custom Helm values to override defaults | No |
| `ReleaseName` | Optional custom release name for the Helm chart; useful when you need to install the same chart multiple times | No |
Example: install the same chart twice
```yaml
customAmenities:
  - name: opentelemetry-collector-agents
    description: Node-level collectors
    type: Helm
    properties:
      ChartName: opentelemetry-collector
      ChartVersion: 0.127.2
      ChartRepoName: opentelemetry-community
      ChartRepoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
      DefaultNamespace: observability
      ReleaseName: opentelemetry-collector-agents
  - name: opentelemetry-collector-gateway
    description: Central gateway
    type: Helm
    properties:
      ChartName: opentelemetry-collector
      ChartVersion: 0.127.2
      ChartRepoName: opentelemetry-community
      ChartRepoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
      DefaultNamespace: observability
      ReleaseName: opentelemetry-collector-gateway
```
Kubernetes Manifest Amenities¶
The `KubernetesManifest` amenity type allows you to deploy custom Kubernetes manifests to deployment cells. This is useful for deploying Secrets, ConfigMaps, or any other Kubernetes resources that are not covered by Helm chart amenities.
When defining Kubernetes manifest amenities, include these properties:
| Property | Type | Required | Description |
|---|---|---|---|
| `manifests` | array | Yes | List of manifest entries to deploy |
Each entry in the `manifests` array must have exactly one of the following:
| Property | Type | Description |
|---|---|---|
| `def` | object | Inline Kubernetes manifest definition |
| `file` | string | Path to a YAML file containing the manifest |
Example: Inline ConfigMap
```yaml
customAmenities:
  - name: app-config
    description: Application configuration
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: app-config
              namespace: default
            data:
              DATABASE_HOST: postgres.default.svc.cluster.local
              LOG_LEVEL: info
```
Example: Inline Secret
```yaml
customAmenities:
  - name: app-secrets
    description: Application secrets
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: Secret
            metadata:
              name: app-secrets
              namespace: default
            type: Opaque
            stringData:
              API_KEY: my-api-key
```
Example: Docker Registry Pull Secret
When a chart depends on a private registry, it is often simpler to pre-create the pull secret as a `KubernetesManifest` amenity and then reference it from the Helm chart values. Store the base64-encoded `username:password` value in a single Omnistrate secret such as `DOCKERHUB_AUTH_B64` and inject it directly into the manifest:
```yaml
customAmenities:
  - name: ingress-nginx-namespace
    description: Namespace for ingress components
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: ingress-nginx
  - name: ingress-nginx-pull-secret
    description: Pull secret for Docker Hub
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: Secret
            metadata:
              name: dockerhub-pull-secret
              namespace: ingress-nginx
            type: kubernetes.io/dockerconfigjson
            stringData:
              .dockerconfigjson: |
                {"auths":{"docker.io":{"auth":"$secret.DOCKERHUB_AUTH_B64"}}}
```
Tip
Prefer `stringData` for Kubernetes Secrets when possible. It avoids manual base64 handling for the final manifest content and keeps the amenity definition easier to read.
Composite secret transforms
Amenities do not support chaining a secret lookup and another transform in the same token during manifest rendering.
For example, dynamically building base64(username:password) from two separate secrets inside the amenity can fail on both install and uninstall. Precompute the final value into one Omnistrate secret such as DOCKERHUB_AUTH_B64 and inject that directly.
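For example, the final auth value can be computed once locally before it is stored as the Omnistrate secret; the username and token below are placeholders:

```shell
# Precompute base64(username:password) outside the amenity, then store the
# result as a single Omnistrate secret (e.g. DOCKERHUB_AUTH_B64).
AUTH_B64=$(printf '%s' 'my-user:my-token' | base64)
echo "$AUTH_B64"
```

This keeps the manifest substitution to a single direct secret lookup, which stays deletion-safe.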
Example: File References
```yaml
customAmenities:
  - name: external-manifests
    description: Manifests from files
    type: KubernetesManifest
    properties:
      manifests:
        - file: ./manifests/configmap.yaml
        - file: ./manifests/secret.yaml
```
File paths are resolved relative to the configuration file location.
Example: Mixed Inline and File References
```yaml
customAmenities:
  - name: mixed-manifests
    description: Mixed inline and file manifests
    type: KubernetesManifest
    properties:
      manifests:
        - file: ./secret.yaml
        - def:
            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: runtime-config
              namespace: default
            data:
              FEATURE_FLAG: "true"
```
Validation
The CLI validates KubernetesManifest amenities before sending them to the server:
- The `manifests` array cannot be empty
- Each entry must have either `file` or `def`, but not both
- File references must point to valid, readable YAML files
- YAML content must be valid and parseable
Note
File references (`file`) are converted to inline definitions (`def`) by the CLI before being sent to the server API. The server only receives inline `def` entries, regardless of how they were specified in the configuration.
Affinity rules and tolerations¶
Omnistrate creates a system node pool that is independent of the service plan node pool. To allow Operators and Controllers to run on the system node pool, affinity rules and tolerations must be defined.
Affinity rules to prevent system components from running on service nodes:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: omnistrate.com/control-plane
              operator: Exists
```
Toleration to allow system components to run on system nodes:
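This mirrors the toleration used in the EBS CSI Driver example earlier on this page:

```yaml
tolerations:
  - key: CriticalAddonsOnly
    value: "true"
    effect: NoSchedule
```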
Update Configuration Template¶
After customizing your template, apply it to your organization:
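A hypothetical sketch of this step; the exact subcommand and the flag for supplying the template file are assumptions, so verify them with `omnistrate-ctl --help`:

```shell
# Apply the customized amenities template at the organization level
# (subcommand/flag names are assumptions).
omnistrate-ctl deployment-cell update-config-template --config-template-file template.yaml
```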
View Current Configuration Template¶
Check the current configuration template for your organization:
For JSON output:
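A hypothetical sketch, assuming a view-style subcommand and an output-format flag (verify the exact names with the CLI help):

```shell
# Show the organization's current amenities template (names are assumptions).
omnistrate-ctl deployment-cell view-config-template
omnistrate-ctl deployment-cell view-config-template --output json
```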
Managing Individual Deployment Cell Amenities¶
Check Deployment Cell Status¶
View the current amenities and status of a specific deployment cell:
For JSON output:
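A hypothetical sketch, assuming a `status` subcommand and an output-format flag (verify with `omnistrate-ctl deployment-cell --help`):

```shell
# Inspect a cell's amenities and their deployment status (names are assumptions).
omnistrate-ctl deployment-cell status hc-xxxxxxx
omnistrate-ctl deployment-cell status hc-xxxxxxx --output json
```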
Replace `hc-xxxxxxx` with your actual deployment cell ID.
Update Deployment Cell Amenities¶
You have two options for updating amenities on a deployment cell:
Option 1: Update with Configuration File¶
Create a custom configuration file:
```yaml
# yaml-language-server: $schema=https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json
managedAmenities:
  - name: Observability Prometheus
    description: Observability Prometheus
    type: Helm
  - name: Kubernetes Dashboard
    description: Kubernetes Dashboard
    type: Helm
  - name: External DNS
    description: External DNS
    type: Helm
customAmenities:
  - name: Custom Chart
    description: Custom application chart
    type: Helm
    properties:
      ChartName: "my-custom-chart"
      ChartVersion: "1.0.0"
      ChartRepoName: "my-repo"
      ChartRepoURL: "https://my-repo.example.com"
```
Apply the configuration:
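A hypothetical sketch, assuming the documented `update-config-template` subcommand accepts a file flag for per-cell updates (the flag name is an assumption):

```shell
# Apply the custom configuration file to a specific cell (flag name is an assumption).
omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx --config-template-file custom-amenities.yaml
```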
Option 2: Sync with Template¶
Automatically sync the deployment cell with your organization's template:
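Using the sync command documented earlier in this guide, with `hc-xxxxxxx` as a placeholder cell ID:

```shell
omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx --sync-with-template
```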
This command syncs the deployment cell's amenities with the configuration template defined for its environment.
Apply Pending Changes¶
After updating the configuration, apply the changes to trigger actual deployment:
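Using the documented apply command, with `hc-xxxxxxx` as a placeholder cell ID:

```shell
omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx
```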
This command:
- Applies any pending amenity additions or removals
- Triggers the deployment/undeployment of Helm charts
- Updates the deployment cell to match the desired configuration
Change Application
Configuration updates create pending changes that must be explicitly applied using the apply-pending-changes command. This two-step process provides control and allows you to review changes before deployment.
Recommended Patterns for Dependent Amenities¶
Use these patterns when a Helm amenity depends on resources such as namespaces, Secrets, or CRDs:
- If strict ordering is required, prefer a single Helm chart that owns both the prerequisite objects and the application resources.
- If you must separate them, create the prerequisite objects first with `KubernetesManifest` amenities, apply pending changes, and then run a second amenities sync for the dependent Helm charts.
- If several charts share the same registry credentials, keep the namespace and image pull secret as dedicated manifest amenities and reuse them across charts.
- If the chart can create its own namespace, use `DefaultNamespace` and chart-native secret creation rather than relying on parallel amenities to line up.
Debugging Amenities Sync¶
When an amenities rollout is stuck or a new template cannot be applied because a sync is still in progress, inspect the active deployment-cell workflows first:
```shell
omnistrate-ctl deployment-cell workflow list hc-xxxxxxx
omnistrate-ctl deployment-cell workflow events hc-xxxxxxx <workflow-id>
```
These commands show whether the cell is still in `DEPLOYMENT_CELL_SYNC_AMENITIES`, which step is currently active, and the last debug event emitted by the workflow.
If a previous amenities sync is blocking a new update:
- If the sync appears as its own deployment-cell workflow in the UI, cancel that workflow there and rerun the update.
- If the cell sync was triggered indirectly by another workflow, inspect the deployment-cell workflow list first. The parent workflow can finish or fail while the cell sync is still active.
- If the workflow remains stuck and no cancel control is available, wait for the workflow timeout or contact support rather than layering more template changes on top of the same stuck sync.
If the workflow says a Helm amenity failed, connect to the deployment cell and inspect the releases directly:
```shell
omnistrate-ctl deployment-cell update-kubeconfig hc-xxxxxxx --role cluster-admin --kubeconfig /tmp/kubeconfig
export KUBECONFIG=/tmp/kubeconfig
helm ls -A
kubectl get pods -A
```
Use this flow when:
- A sync is still running even though the releases appear installed
- A reverted template still fails because a manifest or release is not reconciling cleanly
- A new `update-config-template` attempt is blocked by an older sync that is still in progress