
Shared File System

Why use a shared file system

In modern SaaS application deployments on Kubernetes, you often need to save data in a persistent volume and share it across different pods. For example, when setting up an AI model training system, multiple workers might need access to the same dataset stored in a persistent volume. This is where a shared file system is a good fit. Additionally, if your data volume is growing and you need to scale the volume size dynamically without downtime, a shared file system is an excellent solution.

Omnistrate allows your Resources to share data within the same deployment cluster by utilizing a shared file system. It provides the capability to easily share data across Resources and to scale your underlying storage without any downtime.

Configuring a shared file system

To configure a shared file system using the compose specification, define the shared volume at the root level and then define the volume and mount for each resource.

Here is an example of how to define and mount a shared volume:

services: 
  serviceA:
    volumes:
      - source: file_system_data
        target: /var/data
        type: volume
        x-omnistrate-storage:
          aws:
            clusterStorageType: AWS::EFS
          gcp: 
            clusterStorageType: GCP::FILESTORE
          azure: 
            clusterStorageType: AZURE::FILE_SHARE

volumes:
  file_system_data:
    driver: sharedFileSystem
    driver_opts:
      # efs configuration
      efsThroughputMode: provisioned
      efsPerformanceMode: generalPurpose
      efsProvisionedThroughputInMibps: 100
      # filestore configuration
      filestoreCapacityGi: 1024
      filestoreTier: "BASIC_HDD"
      # azure fileshare configuration
      fileshareQuotaGi: 1024
      fileshareTier: Premium
      fileshareRedundancy: LRS

Service volume properties:

  • source is the name of the volume you defined in the previous step.
  • target is the path where you want to mount the volume.
  • type specifies the type of the volume; in this case, it is volume.
  • x-omnistrate-storage defines the type of storage for each cloud provider.

Volume properties:

  • file_system_data is the name of the volume; you can choose any name you like.
  • driver: sharedFileSystem is the driver type required to enable shared file system functionality.
  • driver_opts defines various options to customize the shared file system configuration for different cloud providers:
      • efsThroughputMode: Specifies the throughput mode for AWS EFS. Can be set to bursting or provisioned. Use provisioned when you need consistent throughput performance. See AWS EFS Throughput Modes for details.
      • efsPerformanceMode: Defines the performance mode for AWS EFS. Options are generalPurpose (for latency-sensitive workloads) or maxIO (for higher aggregate throughput and operations per second). See AWS EFS Performance Modes for more information.
      • efsProvisionedThroughputInMibps: Sets the provisioned throughput in MiB/s for AWS EFS when using provisioned throughput mode. This value determines the baseline throughput performance. See AWS EFS Provisioned Throughput for details.
      • filestoreCapacityGi: Specifies the storage capacity for GCP Filestore in GiB. Minimum capacity varies by tier: BASIC_HDD, ENTERPRISE, REGIONAL, and ZONAL require at least 1024 GiB, while BASIC_SSD requires at least 2560 GiB. For the most up-to-date details, see the GCP Filestore documentation.
      • filestoreTier: Defines the service tier for GCP Filestore. Options include BASIC_HDD, BASIC_SSD, ENTERPRISE, REGIONAL, or ZONAL depending on your performance and capacity requirements.
      • fileshareQuotaGi: Specifies the quota for Azure File Share in GiB. The quota must be between 1 GiB and 102400 GiB.
      • fileshareTier: Defines the performance tier for Azure File Share. Options include Standard or Premium.
      • fileshareRedundancy: Specifies the redundancy option for Azure File Share. The Standard tier supports LRS (Locally Redundant Storage), ZRS (Zone-Redundant Storage), GRS (Geo-Redundant Storage), and GZRS (Geo-Zone-Redundant Storage); the Premium tier supports only LRS and ZRS.

This configuration will provision a new Shared File System for each instance deployed.
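Only the driver_opts for the cloud providers you actually deploy to are relevant. As a sketch, an AWS-only service could keep just the EFS options (this assumes the Filestore and File Share keys can simply be omitted when those clouds are not targeted):

```yaml
volumes:
  file_system_data:
    driver: sharedFileSystem
    driver_opts:
      # EFS only; bursting mode scales throughput with stored data,
      # so efsProvisionedThroughputInMibps is not needed here
      efsThroughputMode: bursting
      efsPerformanceMode: generalPurpose
```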

Mounting a shared file volume in multiple resources

The same volume can be mounted in different pods:

services: 
  serviceA:
    volumes:
      - source: file_system_data
        target: /var/data
        type: volume
        x-omnistrate-storage:
          aws:
            clusterStorageType: AWS::EFS
          gcp: 
            clusterStorageType: GCP::FILESTORE
          azure: 
            clusterStorageType: AZURE::FILE_SHARE
  serviceB:
    volumes:
      - source: file_system_data
        target: /var/data
        type: volume
        x-omnistrate-storage:
          aws:
            clusterStorageType: AWS::EFS
          gcp: 
            clusterStorageType: GCP::FILESTORE
          azure: 
            clusterStorageType: AZURE::FILE_SHARE

volumes:
  file_system_data:
    driver: sharedFileSystem
    driver_opts:
      # efs configuration
      efsThroughputMode: provisioned
      efsPerformanceMode: generalPurpose
      efsProvisionedThroughputInMibps: 100
      # filestore configuration
      filestoreCapacityGi: 1024
      filestoreTier: "BASIC_HDD"
      # azure fileshare configuration
      fileshareQuotaGi: 1024
      fileshareTier: Premium
      fileshareRedundancy: LRS