
Deployment Cell Amenities

Overview

Deployment cell amenities are additional infrastructure components (Helm charts) that can be installed and managed on your deployment cell to enhance functionality and provide essential services. Omnistrate provides both managed amenities and support for custom amenities to meet your specific infrastructure requirements.

This guide demonstrates how to manage deployment cell amenities using the Omnistrate command-line tool.

Prerequisites

Before managing deployment cell amenities, ensure you have:

  • Omnistrate CTL installed (installation guide)
  • Valid credentials for the Omnistrate platform
  • Appropriate permissions to manage deployment cells
  • Access to your target environment (e.g., PROD, STAGING)

Deployment cells support two types of amenities: managed amenities and custom amenities.

Amenities are the right place for components that are cluster-wide rather than instance-specific, such as monitoring stacks, CSI drivers, ingress controllers, shared Operators, and policy controllers. Managing them at the cell layer lets you upgrade them once per cluster instead of tying their lifecycle to every deployment instance.

Managed Amenities

Pre-configured Helm charts maintained by Omnistrate that are automatically available for installation. These amenities are:

  • Fully managed: Omnistrate handles chart configuration, updates, and maintenance
  • Cloud-optimized: Configured with best practices for each cloud provider
  • Production-ready: Tested and validated for enterprise use

Common managed amenities include:

  • Observability Prometheus: Monitoring and metrics collection
  • Kubernetes Dashboard: Web-based Kubernetes cluster management interface
  • External DNS: Automatic DNS record management for Kubernetes services
  • Cert Manager: Automatic SSL certificate provisioning and renewal
  • Nginx Ingress Controller: HTTP/HTTPS traffic routing and load balancing

Each cloud provider has additional managed amenities.

Custom Amenities

Custom amenities come in two forms:

  1. Helm charts that you configure and manage yourself.
  2. Plain Kubernetes manifests that you deploy and manage on your deployment cell.

Each custom amenity is defined with these top-level fields:

  • name (string, required): Unique name for the amenity within the template
  • description (string, optional): Human-readable description
  • type (string, required): Helm or KubernetesManifest
  • properties (object, required): Type-specific configuration. See Helm Chart Amenities and Kubernetes Manifest Amenities
  • dependsOn (array, optional): List of amenity names that must finish installing before this amenity starts. This is a top-level field, not inside properties. See Amenity Dependency Ordering
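
Putting these fields together, a minimal custom amenity entry looks like the sketch below (all names, versions, and URLs are placeholders):

```yaml
customAmenities:
  - name: my-amenity
    description: Example custom amenity    # optional
    type: Helm                             # or KubernetesManifest
    dependsOn:                             # optional; top-level, not inside properties
      - some-other-amenity
    properties:
      ChartName: my-chart
      ChartVersion: 1.0.0
      ChartRepoName: my-repo
      ChartRepoURL: https://charts.example.com
```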

Execution Model

Omnistrate applies custom amenities in parallel by default. If one amenity depends on another resource being present first — such as a CRD, namespace, Secret, or operator — use the dependsOn field to declare the ordering. See Amenity Dependency Ordering for details and examples.

Alternatively, you can package prerequisite resources into the same Helm chart so the chart owns the ordering internally.

Deletion and rollback use the same model

The same rendering and parallelism rules apply when amenities are removed or when you revert to an older template.

  • KubernetesManifest amenities are rendered again during removal so Omnistrate can determine what to delete.
  • If a manifest contains unresolved expressions, unsupported function chains, or values that no longer exist, the delete or revert workflow can block the entire sync.
  • Keep manifest amenities deletion-safe. Prefer direct values or single secret substitutions rather than multi-step transforms.
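
As a sketch, the pull-secret pattern from this guide stays deletion-safe because the rendered value is a single direct secret substitution that can still be resolved when the amenity is removed:

```yaml
# Deletion-safe: one direct secret substitution, resolvable at delete time
stringData:
  .dockerconfigjson: |
    {"auths":{"docker.io":{"auth":"$secret.DOCKERHUB_AUTH_B64"}}}

# Risky: building this value from several secrets plus a base64 transform
# inside the manifest may fail to render during delete or revert.
```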

Managing Deployment Cell Configuration Templates

Configuration templates define which amenities are available for a deployment cell on each cloud provider.

Note

By default, Omnistrate uses one configuration template per cloud provider. If you need different templates per environment, please reach out to [email protected].

Template Updates and Existing Cells

Updating an organization-level amenities template changes the desired template for that cloud provider, but it does not immediately reconfigure existing deployment cells.

To roll those changes out to an existing cell:

  1. Sync the cell with the latest template using omnistrate-ctl deployment-cell update-config-template --id <cell-id> --sync-with-template
  2. Apply the pending changes using omnistrate-ctl deployment-cell apply-pending-changes --id <cell-id>

Generate Configuration Template

Create a template with all available amenities for a specific cloud provider:

omnistrate-ctl deployment-cell generate-config-template --cloud aws --output template-aws.yaml

When prompted, select your login method and provide credentials.

Using Schema Validation

To simplify writing this specification file, Omnistrate provides a JSON schema for validation. You can use it in IDEs that rely on the YAML Language Server (e.g., VSCode or NeoVim).

# yaml-language-server: $schema=https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json

For IntelliJ, replace the top line with the following to set up the YAML schema:

# $schema: https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json

Review Template Structure

The generated template contains two main sections:

# yaml-language-server: $schema=https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json

managedAmenities:
  - name: EFS CSI Driver
    description: EFS CSI Driver
    type: Helm
  - name: AWS Load Balancer Controller
    description: AWS Load Balancer Controller
    type: Helm
  - name: Cluster Autoscaler
    description: Cluster Autoscaler
    type: Helm
  # ... additional managed amenities

customAmenities:
  - name: EBS CSI Driver
    description: EBS CSI Driver
    type: Helm
    properties:
      ChartName: "aws-ebs-csi-driver"
      ChartVersion: "2.28.1"
      ChartRepoName: "aws-ebs-csi-driver"
      ChartRepoURL: "https://kubernetes-sigs.github.io/aws-ebs-csi-driver"
      CredentialsProvider:
        Type: "none"
      DefaultNamespace: "kube-system"
      ChartValues:
        image:
          repository: "public.ecr.aws/ebs-csi-driver/aws-ebs-csi-driver"
          tag: "v1.27.0"
        controller:
          replicaCount: 2
          resources:
            requests:
              cpu: "10m"
              memory: "40Mi"
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: omnistrate.com/control-plane
                    operator: Exists
          tolerations:
            - key: CriticalAddonsOnly
              value: "true"
              effect: NoSchedule

Warning

Omnistrate creates a system node pool that is independent of the service plan node pool. To allow Operators and Controllers to run on the system node pool, affinity rules and tolerations must be defined as shown in the example and explained in the Affinity rules and tolerations section.

Helm Chart Amenities

When defining custom Helm amenities, the properties object supports:

  • ChartName (required): Name of the Helm chart
  • ChartVersion (required): Specific version of the chart
  • ChartRepoName (required): Repository name identifier
  • ChartRepoURL (required): Repository URL
  • CredentialsProvider (optional): Authentication configuration
  • DefaultNamespace (optional): Kubernetes namespace for deployment
  • ChartValues (optional): Custom Helm values to override chart defaults
  • ReleaseName (optional): Custom release name for the Helm release; useful when you need to install multiple releases of the same chart

Example: install the same chart twice

customAmenities:
  - name: opentelemetry-collector-agents
    description: Node-level collectors
    type: Helm
    properties:
      ChartName: opentelemetry-collector
      ChartVersion: 0.127.2
      ChartRepoName: opentelemetry-community
      ChartRepoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
      DefaultNamespace: observability
      ReleaseName: opentelemetry-collector-agents

  - name: opentelemetry-collector-gateway
    description: Central gateway
    type: Helm
    properties:
      ChartName: opentelemetry-collector
      ChartVersion: 0.127.2
      ChartRepoName: opentelemetry-community
      ChartRepoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
      DefaultNamespace: observability
      ReleaseName: opentelemetry-collector-gateway

Kubernetes Manifest Amenities

The KubernetesManifest amenity type allows you to deploy custom Kubernetes manifests to deployment cells. This is useful for deploying Secrets, ConfigMaps, or any other Kubernetes resources that are not covered by Helm chart amenities.

When defining Kubernetes manifest amenities, the properties object supports:

  • manifests (array, required): List of manifest entries to deploy

Each entry in the manifests array must have exactly one of the following:

  • def (object): Inline Kubernetes manifest definition
  • file (string): Path to a YAML file containing the manifest

Example: Inline ConfigMap

customAmenities:
  - name: app-config
    description: Application configuration
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: app-config
              namespace: default
            data:
              DATABASE_HOST: postgres.default.svc.cluster.local
              LOG_LEVEL: info

Example: Inline Secret

customAmenities:
  - name: app-secrets
    description: Application secrets
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: Secret
            metadata:
              name: app-secrets
              namespace: default
            type: Opaque
            stringData:
              API_KEY: my-api-key

Example: Docker Registry Pull Secret

When a chart depends on a private registry, it is often simpler to pre-create the pull secret as a KubernetesManifest amenity and then reference it from the Helm chart values. Store the base64-encoded username:password value in a single Omnistrate secret such as DOCKERHUB_AUTH_B64 and inject it directly into the manifest:

customAmenities:
  - name: ingress-nginx-namespace
    description: Namespace for ingress components
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: ingress-nginx

  - name: ingress-nginx-pull-secret
    description: Pull secret for Docker Hub
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: Secret
            metadata:
              name: dockerhub-pull-secret
              namespace: ingress-nginx
            type: kubernetes.io/dockerconfigjson
            stringData:
              .dockerconfigjson: |
                {"auths":{"docker.io":{"auth":"$secret.DOCKERHUB_AUTH_B64"}}}

Tip

Prefer stringData for Kubernetes Secrets when possible. It avoids manual base64 handling for the final manifest content and keeps the amenity definition easier to read.

Composite secret transforms

Amenities do not support chaining a secret lookup and another transform in the same token during manifest rendering.

For example, dynamically building base64(username:password) from two separate secrets inside the amenity can fail on both install and uninstall. Precompute the final value into one Omnistrate secret such as DOCKERHUB_AUTH_B64 and inject that directly.

Example: File References

customAmenities:
  - name: external-manifests
    description: Manifests from files
    type: KubernetesManifest
    properties:
      manifests:
        - file: ./manifests/configmap.yaml
        - file: ./manifests/secret.yaml

File paths are resolved relative to the configuration file location.

Example: Mixed Inline and File References

customAmenities:
  - name: mixed-manifests
    description: Mixed inline and file manifests
    type: KubernetesManifest
    properties:
      manifests:
        - file: ./secret.yaml
        - def:
            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: runtime-config
              namespace: default
            data:
              FEATURE_FLAG: "true"

Validation

The CLI validates KubernetesManifest amenities before sending them to the server:

  • The manifests array cannot be empty
  • Each entry must have either file or def, but not both
  • File references must point to valid, readable YAML files
  • YAML content must be valid and parseable

Note

File references (file) are converted to inline definitions (def) by the CLI before being sent to the server API. The server only receives inline def entries, regardless of how they were specified in the configuration.
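
Conceptually, the CLI inlines the file contents before calling the API. Assuming ./manifests/configmap.yaml contains the ConfigMap from the earlier example, the two forms below are equivalent from the server's point of view:

```yaml
# What you write:
manifests:
  - file: ./manifests/configmap.yaml

# What the server receives (file contents inlined under def):
manifests:
  - def:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config
        namespace: default
      data:
        LOG_LEVEL: info
```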

Affinity rules and tolerations

Omnistrate creates a system node pool that is independent of the service plan node pool. To allow Operators and Controllers to run on the system node pool, affinity rules and tolerations must be defined.

Affinity rules that prevent amenity workloads from running on service nodes:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: omnistrate.com/control-plane
            operator: Exists

Toleration that allows amenity workloads to run on system nodes:

  tolerations:
    - key: CriticalAddonsOnly
      value: "true"
      effect: NoSchedule

Update Configuration Template

After customizing your template, apply it to your organization:

omnistrate-ctl deployment-cell update-config-template --cloud aws -f template-aws.yaml

View Current Configuration Template

Check the current configuration template for your organization:

omnistrate-ctl deployment-cell describe-config-template --cloud aws

For JSON output:

omnistrate-ctl deployment-cell describe-config-template --cloud aws --output json

Managing Individual Deployment Cell Amenities

Check Deployment Cell Status

View the current amenities and status of a specific deployment cell:

omnistrate-ctl deployment-cell describe-config-template --id hc-xxxxxxx

For JSON output:

omnistrate-ctl deployment-cell describe-config-template --id hc-xxxxxxx --output json

Replace hc-xxxxxxx with your actual deployment cell ID.

Update Deployment Cell Amenities

You have two options for updating amenities on a deployment cell:

Option 1: Update with Configuration File

Create a custom configuration file:

# yaml-language-server: $schema=https://api.omnistrate.cloud/2022-09-01-00/schema/deployment-cell-amenities-spec-schema.json

managedAmenities:
  - name: Observability Prometheus
    description: Observability Prometheus
    type: Helm
  - name: Kubernetes Dashboard
    description: Kubernetes Dashboard
    type: Helm
  - name: External DNS
    description: External DNS
    type: Helm

customAmenities:
  - name: Custom Chart
    description: Custom application chart
    type: Helm
    properties:
      ChartName: "my-custom-chart"
      ChartVersion: "1.0.0"
      ChartRepoURL: "https://my-repo.example.com"

Apply the configuration:

omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx -f custom-config.yaml

Option 2: Sync with Template

Automatically sync the deployment cell with your organization's template:

omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx --sync-with-template

This command syncs the deployment cell's amenities with the configuration template defined for its environment.

Apply Pending Changes

After updating the configuration, apply the changes to trigger actual deployment:

omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx

This command:

  • Applies any pending amenity additions or removals
  • Triggers the deployment/undeployment of Helm charts
  • Updates the deployment cell to match the desired configuration

Change Application

Configuration updates create pending changes that must be explicitly applied using the apply-pending-changes command. This two-step process provides control and allows you to review changes before deployment.
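
The full review-then-apply flow looks like this (replace hc-xxxxxxx with your cell ID):

```shell
# 1. Stage the change (creates pending changes, deploys nothing yet)
omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx -f custom-config.yaml

# 2. Review the cell's current configuration and pending amenities
omnistrate-ctl deployment-cell describe-config-template --id hc-xxxxxxx

# 3. Apply the pending changes to trigger the actual deployment
omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx
```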

Amenity Dependency Ordering

When one amenity depends on another resource being present first — a CRD, a namespace, a pull secret, or an operator — you can declare the relationship with dependsOn. Omnistrate guarantees that every listed dependency has finished installing before the dependent amenity starts.

Why use dependsOn

Without explicit ordering, all custom amenities are installed in parallel. This is fast but breaks when, for example, a Helm chart expects a CRD or a namespace that another amenity creates. dependsOn solves this without forcing you to bundle unrelated resources into a single chart or run multiple manual syncs.

How it works

Add a dependsOn list to any custom amenity entry. Each value must be the exact name of another amenity (custom or managed) in the same template.

customAmenities:
  - name: cert-manager
    description: Certificate management
    type: Helm
    properties:
      ChartName: cert-manager
      ChartVersion: v1.17.2
      ChartRepoName: jetstack
      ChartRepoURL: https://charts.jetstack.io
      DefaultNamespace: cert-manager

  - name: cluster-issuer
    description: ClusterIssuer for Let's Encrypt
    type: KubernetesManifest
    dependsOn:
      - cert-manager
    properties:
      manifests:
        - def:
            apiVersion: cert-manager.io/v1
            kind: ClusterIssuer
            metadata:
              name: letsencrypt-prod
            spec:
              acme:
                server: https://acme-v02.api.letsencrypt.org/directory
                privateKeySecretRef:
                  name: letsencrypt-prod-key
                solvers:
                  - http01:
                      ingress:
                        class: nginx

In this example, cluster-issuer waits for cert-manager to finish installing before it is applied. Without dependsOn, both would start simultaneously, and the ClusterIssuer would fail because the cert-manager.io/v1 CRD does not yet exist.

Dependency rules

  • Reference by name: Each entry in dependsOn must match the name of another amenity in the same template.
  • Cross-type: A Helm amenity can depend on a KubernetesManifest amenity, and vice versa.
  • Multiple dependencies: An amenity can list more than one dependency; it waits for all of them.
  • Chains: Dependencies can be chained: A → B → C. C installs first, then B, then A.
  • No cycles: Circular dependencies (A → B → A) are rejected at save time with a validation error.
  • Managed amenities as targets: A custom amenity can depend on a managed amenity by name, but managed amenities cannot have dependsOn. This ensures a managed amenity is present and fully installed before a custom amenity starts, and it prevents accidental removal of the managed amenity while the custom amenity still needs it.
  • Independent amenities stay parallel: Amenities without dependsOn still install in parallel.
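
As a sketch of the "managed amenities as targets" rule, a custom amenity can reference a managed amenity by its exact display name. Here a ClusterIssuer manifest waits for the managed Cert Manager amenity instead of a custom cert-manager chart (the manifest file path is illustrative):

```yaml
customAmenities:
  - name: cluster-issuer
    type: KubernetesManifest
    dependsOn:
      - Cert Manager          # exact name of the managed amenity in the template
    properties:
      manifests:
        - file: ./manifests/cluster-issuer.yaml
```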

Validation

Omnistrate validates the dependency graph when you save the configuration template. The following errors are caught before any deployment:

  • Unknown dependency: dependsOn references an amenity name that does not exist in the template.
  • Cyclic dependency: The dependency graph contains a cycle (e.g., A depends on B and B depends on A).
  • Managed amenity with dependsOn: Managed amenities do not support the dependsOn field.

Example: operator → CRD → application

A common three-layer pattern where an operator must be running before its CRDs can be applied, and an application chart depends on those CRDs:

customAmenities:
  - name: prometheus-operator
    description: Prometheus Operator
    type: Helm
    properties:
      ChartName: kube-prometheus-stack
      ChartVersion: 72.6.2
      ChartRepoName: prometheus-community
      ChartRepoURL: https://prometheus-community.github.io/helm-charts
      DefaultNamespace: monitoring

  - name: monitoring-rules
    description: Custom PrometheusRule and ServiceMonitor resources
    type: KubernetesManifest
    dependsOn:
      - prometheus-operator
    properties:
      manifests:
        - file: ./manifests/prometheus-rules.yaml
        - file: ./manifests/service-monitors.yaml

  - name: grafana-dashboards
    description: Grafana dashboard ConfigMaps
    type: KubernetesManifest
    dependsOn:
      - prometheus-operator
    properties:
      manifests:
        - file: ./manifests/grafana-dashboards.yaml

Here monitoring-rules and grafana-dashboards both depend on prometheus-operator but are independent of each other, so they install in parallel once the operator is ready.

Example: diamond dependency

customAmenities:
  - name: shared-namespace
    type: KubernetesManifest
    properties:
      manifests:
        - def:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: data-platform

  - name: kafka-operator
    type: Helm
    dependsOn:
      - shared-namespace
    properties:
      ChartName: strimzi-kafka-operator
      ChartVersion: 0.45.0
      ChartRepoName: strimzi
      ChartRepoURL: https://strimzi.io/charts/
      DefaultNamespace: data-platform

  - name: redis-operator
    type: Helm
    dependsOn:
      - shared-namespace
    properties:
      ChartName: redis-operator
      ChartVersion: 3.5.2
      ChartRepoName: ot-helm
      ChartRepoURL: https://ot-container-kit.github.io/helm-charts/
      DefaultNamespace: data-platform

  - name: data-pipeline
    type: Helm
    dependsOn:
      - kafka-operator
      - redis-operator
    properties:
      ChartName: my-data-pipeline
      ChartVersion: 1.0.0
      ChartRepoName: internal
      ChartRepoURL: https://charts.example.com
      DefaultNamespace: data-platform

Installation order: shared-namespace → kafka-operator + redis-operator (parallel) → data-pipeline.

Failure and retry behavior

If a dependency fails to install, all amenities that depend on it also fail. You do not need to intervene manually in partial states — re-apply pending changes and the entire amenity tree is retried from scratch:

omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx

Use these patterns when working with amenity dependencies:

  • If strict ordering is required and the resources are tightly coupled, prefer a single Helm chart that owns both the prerequisite objects and the application resources.
  • If the resources have separate lifecycles (different upgrade cadences, different owners), use dependsOn to declare the ordering between separate amenities.
  • If several charts share the same registry credentials, keep the namespace and image pull secret as dedicated manifest amenities with dependsOn pointing from the charts to the secrets.
  • If the chart can create its own namespace, use DefaultNamespace and chart-native secret creation rather than a separate namespace amenity.
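
For instance, the registry-credentials pattern from the bullets above can be expressed by pointing the chart at the dedicated namespace and pull-secret amenities. The sketch below reuses the amenity names from the earlier Docker Hub example; the chart version and the imagePullSecrets value key are illustrative and depend on the chart you use:

```yaml
customAmenities:
  # ingress-nginx-namespace and ingress-nginx-pull-secret are defined as
  # KubernetesManifest amenities, as in the pull-secret example above
  - name: ingress-nginx
    type: Helm
    dependsOn:
      - ingress-nginx-namespace
      - ingress-nginx-pull-secret
    properties:
      ChartName: ingress-nginx
      ChartVersion: 4.11.3
      ChartRepoName: ingress-nginx
      ChartRepoURL: https://kubernetes.github.io/ingress-nginx
      DefaultNamespace: ingress-nginx
      ChartValues:
        imagePullSecrets:
          - name: dockerhub-pull-secret
```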

Debugging Amenities Sync

When an amenities rollout is stuck or a new template cannot be applied because a sync is still in progress, inspect the active deployment-cell workflows first:

omnistrate-ctl deployment-cell workflow list hc-xxxxxxx
omnistrate-ctl deployment-cell workflow events hc-xxxxxxx <workflow-id>

These commands show whether the cell is still in DEPLOYMENT_CELL_SYNC_AMENITIES, which step is currently active, and the last debug event emitted by the workflow.

If a previous amenities sync is blocking a new update:

  • If the sync appears as its own deployment-cell workflow in the UI, cancel that workflow there and rerun the update.
  • If the cell sync was triggered indirectly by another workflow, inspect the deployment-cell workflow list first. The parent workflow can finish or fail while the cell sync is still active.
  • If the workflow remains stuck and no cancel control is available, wait for the workflow timeout or contact support rather than layering more template changes on top of the same stuck sync.

If the workflow says a Helm amenity failed, connect to the deployment cell and inspect the releases directly:

omnistrate-ctl deployment-cell update-kubeconfig hc-xxxxxxx --role cluster-admin --kubeconfig /tmp/kubeconfig
export KUBECONFIG=/tmp/kubeconfig
helm ls -A
kubectl get pods -A

Use this flow when:

  • A sync is still running even though the releases appear installed
  • A reverted template still fails because a manifest or release is not reconciling cleanly
  • A new update-config-template attempt is blocked by an older sync that is still in progress

Cancel and Restart Amenity Operations

If an amenity deployment or update operation is taking too long or has encountered issues, you can cancel the operation and restart it.

Restart a failed amenity operation

If an amenity operation has failed, you can restart it by re-applying the pending changes:

omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx

This retries the deployment of any amenities that failed during the previous attempt.

Per-Environment Deployment Cells

Omnistrate supports configuring deployment cells on a per-environment basis. This allows you to have different deployment cell configurations and amenities for each environment (e.g., Dev, Staging, Production).

How it works

When you create deployment cells for different environments, each environment maintains its own set of deployment cells with independent configurations:

  • Dev environment: Minimal amenities, smaller node pools
  • Staging environment: Production-like amenities for testing
  • Production environment: Full amenity suite with high-availability configurations

Configuring amenities per environment

Each environment's deployment cells can have their own configuration templates. To set up environment-specific amenities:

  1. Generate a configuration template for the target environment's deployment cell.
  2. Customize the amenities for that environment.
  3. Apply the configuration to the deployment cell.

# List deployment cells
omnistrate-ctl deployment-cell list

# Generate a configuration template for a cloud provider
omnistrate-ctl deployment-cell generate-config-template --cloud aws

# Update amenities for a specific deployment cell
omnistrate-ctl deployment-cell update-config-template --id hc-xxxxxxx -f config.yaml

# Apply changes
omnistrate-ctl deployment-cell apply-pending-changes --id hc-xxxxxxx

Tip

Use configuration templates to maintain consistent amenity configurations across deployment cells within the same environment. Generate a template first to see the available options, then customize and apply it.