Creating and setting up our Environment repository – Continuous Deployment/ Delivery with Argo CD-2

It creates two resources – a service account and a three-node GKE cluster that uses the service account with the cloud-platform OAuth scope.

We name the service account with a combination of the cluster_name and branch variables. This is necessary as we need to distinguish clusters between environments. So, if the cluster name is mdo-cluster and the Git branch is dev, we will have a service account called gke-mdo-cluster-dev-sa. We will use the same naming convention on the GKE cluster. Therefore, the cluster’s name would be mdo-cluster-dev.
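To make the naming convention concrete, a main.tf implementing it might look roughly like the following sketch. The resource labels and exact arguments here are assumptions for illustration, not the book's actual file:

```hcl
# Hypothetical sketch; argument values may differ from the actual repository.
resource "google_service_account" "main" {
  # e.g., gke-mdo-cluster-dev-sa when cluster_name=mdo-cluster and branch=dev
  account_id   = "gke-${var.cluster_name}-${var.branch}-sa"
  display_name = "GKE cluster service account"
}

resource "google_container_cluster" "main" {
  # e.g., mdo-cluster-dev
  name               = "${var.cluster_name}-${var.branch}"
  location           = var.location
  initial_node_count = 3

  node_config {
    # Attach the service account with the cloud-platform OAuth scope
    service_account = google_service_account.main.email
    oauth_scopes    = ["https://www.googleapis.com/auth/cloud-platform"]
  }
}
```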

We have a provider.tf file that contains the provider and backend configuration. We’re using a remote backend here as we want to persist the Terraform state remotely. In this scenario, we will use a Google Cloud Storage (GCS) bucket. The provider.tf file looks like this:
provider "google" {
  project = var.project_id
  region  = "us-central1"
  zone    = "us-central1-c"
}

terraform {
  backend "gcs" {
    prefix = "mdo-terraform"
  }
}

Here, we’ve specified our default region and zone within the provider config. Additionally, we’ve declared the gcs backend, which only contains the prefix attribute with a value of mdo-terraform. We can separate configurations using prefixes to store multiple Terraform states in a single bucket. We have purposefully not supplied the bucket name, which we will do at runtime using -backend-config during terraform init. The bucket name will be tf-state-mdo-terraform-<project_id>.
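Supplying the bucket at init time can be sketched as follows; the project ID here is a hypothetical placeholder, since the real value comes from the PROJECT_ID secret at runtime:

```shell
# Hypothetical project ID; the workflow injects the real one via the PROJECT_ID secret
PROJECT_ID="my-gcp-project"
BUCKET="tf-state-mdo-terraform-${PROJECT_ID}"

# Print the init command the workflow would run with this bucket
echo "terraform init -backend-config=\"bucket=${BUCKET}\""
```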

Tip

As GCS bucket names must be globally unique, it is good practice to use something such as tf-state-mdo-terraform-<project_id>, since the project ID is itself globally unique.

We also have the variables.tf file, which declares the project_id, branch, cluster_name, and location variables, as follows:
variable "project_id" {}

variable "branch" {
  default = "dev"
}

variable "cluster_name" {
  default = "mdo-cluster"
}

variable "location" {
  default = "us-central1-a"
}

Now that we have the Terraform configuration ready, we need a workflow file that can apply it to our GCP project. For that, we’ve created the following GitHub Actions workflow file – that is, .github/workflows/create-cluster.yml:
name: Create Kubernetes Cluster
on: push
jobs:
  deploy-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
    - uses: actions/checkout@v2
    - name: Install Terraform
      id: install-terraform
      run: wget -O terraform.zip https://releases.hashicorp.com/terraform/1.5.5/terraform_1.5.5_linux_amd64.zip && unzip terraform.zip && chmod +x terraform && sudo mv terraform /usr/local/bin
    - name: Apply Terraform
      id: apply-terraform
      run: terraform init -backend-config="bucket=tf-state-mdo-terraform-${{ secrets.PROJECT_ID }}" && (terraform workspace select ${GITHUB_REF##*/} || terraform workspace new ${GITHUB_REF##*/}) && terraform apply -auto-approve -var="project_id=${{ secrets.PROJECT_ID }}" -var="branch=${GITHUB_REF##*/}"
      env:
        GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}

This is a two-step build file. The first step installs Terraform, while the second step applies the Terraform configuration. Apart from that, we’ve specified ./terraform as the working directory at the global level. Additionally, we’re using a few secrets in this file, namely GCP_CREDENTIALS, which is the key file of the service account that Terraform uses to authenticate and authorize against the GCP API, and the Google Cloud PROJECT_ID.
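The workflow derives the branch name from the GITHUB_REF variable using shell parameter expansion, which is what feeds both the Terraform workspace and the branch variable. A quick illustration (the branch name here is just an example):

```shell
# GITHUB_REF is set by GitHub Actions; on a branch push it has the form refs/heads/<branch>
GITHUB_REF="refs/heads/dev"

# ${GITHUB_REF##*/} strips the longest prefix ending in '/', leaving only the branch name
echo "${GITHUB_REF##*/}"   # → dev
```

This is why pushing to dev selects (or creates) the dev workspace and names the cluster mdo-cluster-dev.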

We’ve also supplied the bucket name as tf-state-mdo-terraform-${{ secrets.PROJECT_ID }} to ensure that we have a unique bucket name.
