Omnistrate Terraform Integration¶
The Omnistrate Terraform Integration streamlines the deployment and management of cloud infrastructure by combining the power of Terraform with the flexibility of Omnistrate. This integration allows you to inject dynamic metadata directly into your Terraform templates, automate complex multi-cloud deployments, and modularize infrastructure components for efficient management. By integrating with other deployment types such as Helm charts, Operators or Kustomize, Omnistrate enhances orchestration capabilities, enabling users to create scalable and adaptable service plans. With Omnistrate, you can simplify your infrastructure-as-code workflows and ensure smooth, automated provisioning across various cloud environments and tenants.
Warning
Support for Terraform Integration is currently in progress for GCP. Reach out to us at support@omnistrate.com to learn more.
Integrating Terraform into your Service Plan¶
Terraform stacks are managed through a specification file that defines your overall service topology on Omnistrate. A complete description of the service plan specification can be found under Getting started / Service Plan Spec.
Here is an example of using Terraform to configure a SaaS service:
provider "aws" {
region = "{{ $sys.deploymentCell.region }}"
}
# Create a Security Group for RDS and ElastiCache
resource "aws_security_group" "rds_elasticache_sg" {
name = "e2e-rds-elasticache-security-group-{{ $sys.id }}"
description = "Security group for RDS and ElastiCache instances"
vpc_id = "{{ $sys.deploymentCell.cloudProviderNetworkID }}"
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Adjust for appropriate security
}
ingress {
from_port = 11211 # Default Memcached port
to_port = 11211
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Adjust accordingly
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# Create a DB Subnet Group for RDS
resource "aws_db_subnet_group" "rds_subnet_group" {
name = "e2e-rds-subnet-group-{{ $sys.id }}"
description = "My RDS subnet group"
subnet_ids = [
"{{ $sys.deploymentCell.publicSubnetIDs[0].id }}",
"{{ $sys.deploymentCell.publicSubnetIDs[1].id }}",
"{{ $sys.deploymentCell.publicSubnetIDs[2].id }}"
]
}
# Create a Subnet Group for ElastiCache
resource "aws_elasticache_subnet_group" "elasticache_subnet_group" {
name = "e2e-elasticache-subnet-group-{{ $sys.id }}"
description = "My ElastiCache subnet group"
subnet_ids = [
"{{ $sys.deploymentCell.publicSubnetIDs[0].id }}",
"{{ $sys.deploymentCell.publicSubnetIDs[1].id }}",
"{{ $sys.deploymentCell.publicSubnetIDs[2].id }}"
]
}
# Create RDS instances
resource "aws_db_instance" "example1" {
identifier = "e2e-instance-1-{{ $sys.id }}"
engine = "mysql"
instance_class = "db.t3.micro"
allocated_storage = 20
db_subnet_group_name = aws_db_subnet_group.rds_subnet_group.name
vpc_security_group_ids = [aws_security_group.rds_elasticache_sg.id]
username = "admin"
password = "yourpassword"
parameter_group_name = "default.mysql8.0"
engine_version = "8.0.37"
skip_final_snapshot = true
depends_on = [
aws_security_group.rds_elasticache_sg,
aws_db_subnet_group.rds_subnet_group
]
}
resource "aws_db_instance" "example2" {
identifier = "e2e-instance-2-{{ $sys.id }}"
engine = "mysql"
instance_class = "db.t3.micro"
allocated_storage = 20
db_subnet_group_name = aws_db_subnet_group.rds_subnet_group.name
vpc_security_group_ids = [aws_security_group.rds_elasticache_sg.id]
username = "admin"
password = "yourpassword"
parameter_group_name = "default.mysql8.0"
engine_version = "8.0.37"
skip_final_snapshot = true
depends_on = [aws_db_instance.example1]
}
# Create ElastiCache Cluster for Memcached
resource "aws_elasticache_cluster" "example_memcached" {
cluster_id = "e2e-memcached-{{ $sys.id }}"
engine = "memcached"
node_type = "cache.t3.micro"
num_cache_nodes = 2 # Adjust as needed
subnet_group_name = aws_elasticache_subnet_group.elasticache_subnet_group.name
security_group_ids = [aws_security_group.rds_elasticache_sg.id]
depends_on = [
aws_db_instance.example2 # Ensure ElastiCache is created after all RDS instances
]
}
Warning
Please ensure that the main file of your Terraform stack contains the provider definition.
Terraform templates need to be available in a GitHub repository, either private or public. One repository can contain multiple Terraform stacks, and each can be referenced using a specific Git reference (tag or branch) and a folder path within that repository. Omnistrate allows you to configure a reference to a Git repository and the path where the Terraform stack is stored. When selecting a path for your repository, Omnistrate expects the entire Terraform definition to be under that folder structure.
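For example, a stack stored under the /terraform folder of a repository, pinned to the 12.0 tag, is referenced like this (a fragment of the service plan spec shown in full in the next section):
terraformConfigurations:
  configurationPerCloudProvider:
    aws:
      terraformPath: /terraform
      gitConfiguration:
        reference: refs/tags/12.0
        repositoryUrl: https://github.com/omnistrate-community/sample/TestKustomizeTemplate.git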
Within the Terraform templates, the Omnistrate platform provides system parameters that can be used to reference information about the current cluster, for instance:
$sys.deploymentCell.publicSubnetIDs[i].id
$sys.deploymentCell.privateSubnetIDs[i].id
$sys.deploymentCell.region
$sys.deploymentCell.cloudProviderNetworkID
$sys.deploymentCell.cidrRange
You can inject these values into your Terraform templates, and these values will be dynamically rendered during deployment.
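For example, the deployment cell's CIDR range could be used to restrict database access to traffic originating inside the cell. This is a minimal sketch; the rule name is illustrative and it assumes the security group from the example above:
# Allow MySQL traffic only from within the deployment cell's network.
resource "aws_security_group_rule" "intra_cell_mysql" {
  type              = "ingress"
  from_port         = 3306
  to_port           = 3306
  protocol          = "tcp"
  cidr_blocks       = ["{{ $sys.deploymentCell.cidrRange }}"]
  security_group_id = aws_security_group.rds_elasticache_sg.id
}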
In addition, the Omnistrate platform will automatically output all values defined in the Terraform templates' output blocks. You can inject these outputs into other deployment components, as shown in the Using Terraform outputs section below.
Info
You can use system parameters to customize Terraform templates. A detailed list of system parameters can be found under Build Guides / System Parameters.
Registering a Service using a Terraform stack¶
Terraform stacks are managed through a specification file that defines your overall service topology on Omnistrate. A complete description of the service plan specification can be found under Getting started / Service Plan Spec. Consider using Git tags to version the Terraform stack and ensure consistency across Service Plan versions.
Here is an example of a SaaS service that combines a Terraform stack with a Kustomize deployment:
name: Multiple Resources
deployment:
  byoaDeployment:
    AwsAccountId: "<AWS_ID>"
    AwsBootstrapRoleAccountArn: arn:aws:iam::<AWS_ID>:role/omnistrate-bootstrap-role
    GcpProjectId: "<GCP_INFO>"
    GcpProjectNumber: "<GCP_INFO>"
    GcpServiceAccountEmail: "<GCP_INFO>"

services:
  - name: terraformChild
    internal: true
    terraformConfigurations:
      configurationPerCloudProvider:
        aws:
          terraformPath: /terraform
          gitConfiguration:
            reference: refs/tags/12.0
            repositoryUrl: https://github.com/omnistrate-community/sample/TestKustomizeTemplate.git
        gcp:
          terraformPath: /terraform
          gitConfiguration:
            reference: refs/tags/12.0
            repositoryUrl: https://github.com/omnistrate-community/sample/TestKustomizeTemplate.git
  - name: kustomizeRoot
    type: kustomize
    dependsOn:
      - terraformChild
    compute:
      instanceTypes:
        - name: t4g.small
          cloudProvider: aws
        - name: e2-medium
          cloudProvider: gcp
    network:
      ports:
        - 5432
    kustomizeConfiguration:
      kustomizePath: /correct
      gitConfiguration:
        reference: refs/tags/12.0
        repositoryUrl: https://github.com/omnistrate-community/sample/TestKustomizeTemplate.git
    apiParameters:
      - key: username
        description: Username
        name: Username
        type: String
        modifiable: true
        required: false
        export: true
        defaultValue: username
      - key: password
        description: Default DB Password
        name: Password
        type: String
        modifiable: false
        required: false
        export: false
        defaultValue: postgres
You can register this spec using our CLI:
omnistrate-ctl build -f spec.yaml --name 'Multiple Resources' --release-as-preferred --spec-type ServicePlanSpec
# Example output shown below
✓ Successfully built service
Check the service plan result at: https://omnistrate.cloud/product-tier?serviceId=s-xxx&environmentId=se-xxx
Access your SaaS offer at: https://saasportal.instance-xxx.hc-xxx.us-east-2.aws.f2e0a955bb84.cloud/service-plans?serviceId=s-xxxx&environmentId=se-xxx
Using Terraform outputs¶
The Omnistrate platform will automatically output all values defined in the Terraform templates' output blocks. You can inject these outputs into other deployment components.
For example, a resource can define output parameters in its stack:
output "rds_endpoints_1" {
value = aws_db_instance.example1.endpoint
}
output "rds_endpoints_2" {
value = {
endpoint: aws_db_instance.example2.endpoint
}
sensitive = true
}
output "elasticache_endpoint" {
value = aws_elasticache_cluster.example_memcached.cache_nodes[0].address
}
and the dependent resource can reference the Terraform output properties:
{{ $terraformChild.out.rds_endpoints_1 }}
{{ $terraformChild.out.rds_endpoints_2.endpoint }}
{{ $terraformChild.out.elasticache_endpoint }}
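For instance, a manifest deployed by the dependent kustomizeRoot resource could consume these outputs. This is a sketch, assuming Omnistrate renders the template variables inside the Kustomize manifests; the ConfigMap name and keys are illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-connection
data:
  RDS_ENDPOINT: "{{ $terraformChild.out.rds_endpoints_1 }}"
  CACHE_ENDPOINT: "{{ $terraformChild.out.elasticache_endpoint }}"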
Using Terraform private modules¶
Omnistrate supports the use of private Terraform modules hosted in private repositories. You can configure these modules in your Terraform scripts with the following structure:
module "ec2_instance" {
source = "git::https://{{ $sys.deployment.terraformPrivateModuleGitAccessTokens.token }}@github.com/terraform-aws-modules/terraform-aws-ec2-instance"
instance_type = "t2.micro"
}
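The Git access token referenced by $sys.deployment.terraformPrivateModuleGitAccessTokens.token is supplied in the service spec under privateModuleGitAccessTokens, as shown below: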
deployment:
  hostedDeployment:
    AwsAccountId: "<AWS_ID>"
    AwsBootstrapRoleAccountArn: arn:aws:iam::<AWS_ID>:role/omnistrate-bootstrap-role
    GcpProjectId: "<GCP_INFO>"
    GcpProjectNumber: "<GCP_INFO>"
    GcpServiceAccountEmail: "<GCP_INFO>"

services:
  - name: terraform
    internal: true
    terraformConfigurations:
      configurationPerCloudProvider:
        aws:
          terraformPath: /terraform
          gitConfiguration:
            reference: refs/tags/12.0
            repositoryUrl: https://github.com/omnistrate-community/sample/TestKustomizeTemplate.git
          privateModuleGitAccessTokens:
            token: "<<PAT>>"
          terraformExecutionIdentity: "arn:aws:iam::<AWS_ID>:role/custom-terraform-execution-role"
Additional permissions for Terraform¶
Using a Terraform stack normally requires defining permissions to create, update, and delete the entities involved. You can configure a custom policy or role that will be used when applying each Terraform stack defined as a resource in Omnistrate.
Custom Terraform policy for BYOA hosting model¶
To configure the permissions required to provision Terraform resources within the BYOA hosting model, you can configure a Custom Terraform Policy. This feature is configured at the service plan level, and Omnistrate will adjust the onboarding scripts for your customers to include the appropriate resources (depending on the cloud provider).
- For AWS, the configuration is provided as a policy document. Omnistrate will create a dedicated AWS IAM role per model when onboarding your customers. With each update, a new version of the CloudFormation stack will be created.
- For GCP, the configuration is provided as a list of GCP IAM roles. Omnistrate will create a dedicated GCP IAM service account per role with the specified roles bound. The GCP CLI commands will always reflect the latest configuration.
During provisioning, this custom identity (AWS IAM role / GCP IAM service account) is used to modify Terraform resources within the service plan.
To configure a custom Terraform policy using the service spec file, add the CUSTOM_TERRAFORM_POLICY feature, for example:
features:
  CUSTOM_TERRAFORM_POLICY:
    policies:
      aws: |
        {
          "Statement": [
            {
              "Action": [
                "sqs:*"
              ],
              "Effect": "Allow",
              "Resource": "*"
            }
          ]
        }
    roles:
      gcp:
        - name: roles/pubsub.admin
        - name: roles/cloudsql.admin
In the example above, Terraform is allowed full access to the SQS service on AWS and is granted the Pub/Sub and Cloud SQL admin roles on GCP.
To achieve the same effect using Omnistrate CTL, you can use the following command:
omctl service-plan enable-feature 'Service name' 'Plan name' --feature CUSTOM_TERRAFORM_POLICY --feature-configuration-file ../path/to/config-file
With the config file having the following content (the AWS policy itself needs to be provided as a JSON string):
{
  "policies": {
    "aws": "{\"Statement\":[{\"Action\": [\"sqs:*\"],\"Effect\": \"Allow\",\"Resource\": \"*\"}]}"
  },
  "roles": {
    "gcp": [
      {"name": "roles/pubsub.admin"},
      {"name": "roles/cloudsql.admin"}
    ]
  }
}
To disable this feature, use the corresponding disable command (a sketch below, assuming the disable-feature subcommand mirrors enable-feature):
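omctl service-plan disable-feature 'Service name' 'Plan name' --feature CUSTOM_TERRAFORM_POLICY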
Warning
Each time the custom Terraform policy is updated, a new version of the onboarding script is generated. Your customers might need to re-run their account onboarding; otherwise they might not be able to provision your service. This requires updating their account onboarding CloudFormation stack (for AWS) or fetching and running the latest GCP CLI commands (for GCP).
Custom Terraform execution identity for Service Provider hosted model¶
You can configure the execution identity for Terraform resources by providing a reference in the terraformExecutionIdentity property. The following example shows the usage of the AWS IAM role omnistrate-custom-terraform-execution-role and the GCP service account omnistrate-terraform-service-account:
name: Multiple Resources
deployment:
  hostedDeployment:
    AwsAccountId: "<AWS_ID>"
    AwsBootstrapRoleAccountArn: arn:aws:iam::<AWS_ID>:role/omnistrate-bootstrap-role
    GcpProjectId: "<GCP_INFO>"
    GcpProjectNumber: "<GCP_INFO>"
    GcpServiceAccountEmail: "<GCP_INFO>"

services:
  - name: terraform
    internal: true
    terraformConfigurations:
      configurationPerCloudProvider:
        aws:
          terraformPath: /terraform
          gitConfiguration:
            reference: refs/tags/12.0
            repositoryUrl: https://github.com/omnistrate-community/sample/TestKustomizeTemplate.git
          terraformExecutionIdentity: "arn:aws:iam::<AWS_ID>:role/omnistrate-custom-terraform-execution-role"
        gcp:
          terraformPath: /terraform
          gitConfiguration:
            reference: refs/tags/12.0
            repositoryUrl: https://github.com/omnistrate-community/sample/TestKustomizeTemplate.git
          terraformExecutionIdentity: "omnistrate-terraform-service-account@<GCP_PROJECT_ID>.iam.gserviceaccount.com"
Custom principal¶
You can use any AWS IAM role or GCP service account that you create yourself, provided it meets the requirements below.
AWS requirements for Terraform role¶
The AWS role must have a trusted entity (trust policy) as defined below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS_ID>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": "arn:aws:iam::<AWS_ID>:role/dataplane-agent-iam-role-*"
        }
      }
    }
  ]
}
Info
The IAM role must use the prefix 'omnistrate-' so that it can be used for Terraform execution in Omnistrate.
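One way to create such a role with Terraform is sketched below; the role name and the path to the trust policy document (the JSON shown above) are illustrative:
# Execution role with the required 'omnistrate-' prefix and the trust policy above.
resource "aws_iam_role" "terraform_execution" {
  name               = "omnistrate-custom-terraform-execution-role"
  assume_role_policy = file("${path.module}/omnistrate-trust-policy.json")
}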
GCP requirements for Terraform service account¶
The GCP service account needs to bind the member serviceAccount:${data.google_project.current.project_id}.svc.id.goog[dataplane-agent/omnistrate-da-${lower(var.account_config_identity_id)}] to the role roles/iam.workloadIdentityUser in order for the Omnistrate dataplane to be able to use it.
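A minimal Terraform sketch of that binding, assuming the service account (google_service_account.terraform), the google_project data source, and the account_config_identity_id variable are defined elsewhere in your configuration:
# Allow the Omnistrate dataplane workload identity to impersonate the service account.
resource "google_service_account_iam_member" "omnistrate_dataplane" {
  service_account_id = google_service_account.terraform.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${data.google_project.current.project_id}.svc.id.goog[dataplane-agent/omnistrate-da-${lower(var.account_config_identity_id)}]"
}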
Pre-created principals¶
Omnistrate will by default pre-create empty IAM principals for Terraform resources during account setup.
- For AWS, the IAM role omnistrate-terraform-role will be created (this role starts with omnistrate- and has the trusted entity configuration).
- For GCP, the IAM service account omnistrate-tf-${lower(var.account_config_identity_id)} will be created (this service account will have the IAM binding to the Omnistrate dataplane).
These principals are pre-created with the necessary configuration (as described above). You can grant them the permissions your stacks require and pass a reference as terraformExecutionIdentity in the service spec file.
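For example, the pre-created AWS role could be granted the permissions a stack like the one earlier in this guide needs (a sketch; the managed policy shown is illustrative):
# Grant the pre-created execution role the permissions required by the stack.
resource "aws_iam_role_policy_attachment" "terraform_permissions" {
  role       = "omnistrate-terraform-role"
  policy_arn = "arn:aws:iam::aws:policy/AmazonRDSFullAccess"
}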