Kind is a tool for running a local Kubernetes cluster in which Docker containers act as nodes. It is a great way to start learning and using Kubernetes before relying on a cloud provider. It integrates nicely with various tools, is thoroughly documented, and supports multi-node clusters. We will use Terraform to provision a kind cluster locally, and use Helm, also via Terraform, to deploy ArgoCD. Let’s get started!

We will be using a Python application that fetches animal facts from an external API. If you want to see how to connect multiple Docker services together, go check it out! Let’s start by getting Kind set up.

Terraform Setup

First, make sure that you have Terraform installed, which can be found here. You can then clone the repository tf-kind-argocd or create the Terraform files yourself. Once you are in the tf-kind-argocd directory in your terminal, run:

terraform init

This initializes the workspace so that the Terraform resources can be applied. Terraform first initializes the backend; since none was specified, it defaults to the local backend. It then downloads the necessary modules - we only have one, a remote module, so it is downloaded from the public Terraform Registry. The config for that can be found here.

Finally, Terraform downloads the providers required by your configuration. Since the configuration does not have a lock file, Terraform will download the providers specified in the required_providers block found in the providers.tf file.
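Based on the provider versions shown in the init output below, the required_providers block in providers.tf likely looks something like this (a sketch - your exact version constraints may differ):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.31.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.14.1"
    }
    kind = {
      source  = "tehcyx/kind"
      version = "0.5.1"
    }
  }
}
```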

Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching "2.31.0"...
- Finding hashicorp/helm versions matching "2.14.1"...
- Finding tehcyx/kind versions matching "0.5.1"...
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing tehcyx/kind v0.5.1...

Next, Terraform creates the lock file if it does not already exist, or updates it if necessary.

Terraform’s lock file, .terraform.lock.hcl, records the versions and hashes of the providers used in this run. This ensures consistent Terraform runs in different environments, since Terraform will download the versions recorded in the lock file for future runs by default.
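A provider entry in .terraform.lock.hcl looks roughly like this (hashes elided here; never edit this file by hand - terraform init manages it):

```hcl
provider "registry.terraform.io/tehcyx/kind" {
  version     = "0.5.1"
  constraints = "0.5.1"
  hashes = [
    # checksums recorded by terraform init
  ]
}
```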

We will then run:

terraform plan

This will show what is expected to be created when we apply the Terraform. As mentioned, Terraform uses the provider versions specified in the providers.tf file.

Finally, we can run the following command to create the resources:

terraform apply

This applies the changes needed to create, update, or destroy resources so your infrastructure matches the configuration. Terraform generates a plan (prompting you to confirm it) and then uses the providers and modules installed during initialization to execute it.

kind_cluster.default: Creating...
kind_cluster.default: Still creating... [10s elapsed]
kind_cluster.default: Still creating... [20s elapsed]
kind_cluster.default: Creation complete after 28s [id=kind-cluster-]
kubernetes_namespace.argocd: Creating...
kubernetes_namespace.argocd: Creation complete after 0s [id=argocd]
helm_release.argocd_helm: Creating...
helm_release.argocd_helm: Still creating... [10s elapsed]
helm_release.argocd_helm: Still creating... [20s elapsed]
helm_release.argocd_helm: Still creating... [30s elapsed]
helm_release.argocd_helm: Still creating... [40s elapsed]
helm_release.argocd_helm: Creation complete after 40s [id=argocd]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Continue onto the next section for an explanation of what we created.

Terraform Resources

With Terraform, we created:

  1. Kind cluster - local k8s cluster
    • Creates a k8s cluster, with the provider waiting for the control plane to be ready
  2. K8s namespace for argocd
    • Depends on the Kind cluster being created
  3. ArgoCD
    • Created with Helm using the ArgoCD Helm chart
    • Depends on the k8s namespace being created
    • Uses a values file from within the repo called values.yaml
      • Those values are inserted into the Helm chart and allow an insecure connection to the ArgoCD server
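The resource names below match the apply output above, but the arguments are illustrative assumptions rather than the repo's exact config - a rough sketch of the wiring:

```hcl
# Local kind cluster; the provider waits for the control plane to be ready
resource "kind_cluster" "default" {
  name           = "kind-cluster"
  wait_for_ready = true
}

# Namespace for ArgoCD, created only once the cluster exists
resource "kubernetes_namespace" "argocd" {
  metadata {
    name = "argocd"
  }
  depends_on = [kind_cluster.default]
}

# ArgoCD installed via its Helm chart, with overrides from values.yaml
resource "helm_release" "argocd_helm" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = kubernetes_namespace.argocd.metadata[0].name
  values     = [file("${path.module}/values.yaml")]
}
```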

Ideally, the creation of the cluster and the other resources would live in separate repos, but since this is just an example, we keep them in one.
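The insecure-server setting mentioned above is typically expressed in the argo-cd chart via its configs.params option - a sketch of what values.yaml might contain (check the repo's actual file):

```yaml
# values.yaml - allow plain-HTTP connections to the ArgoCD server
configs:
  params:
    server.insecure: true
```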

Set Up ArgoCD and Application

First, let’s take a look at our installed ArgoCD server. We will want to port-forward the service to be able to log in:

kubectl port-forward -n argocd svc/argocd-server 8080:80

Now go to localhost:8080 - you will see a warning but just continue on and the login screen will appear.

argocd login screen

The username is admin and you can get the password from this command:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Let’s log in to the CLI as well, since we will need it later on:

argocd login localhost:8080 --username admin --password <output-from-previous-command>

Now we can get the application ready for the cluster.

First, build a local Docker image for the application. From within the application repo in your terminal, run:

docker build -t animal-facts:1.0 .

Then load the image into the cluster with:

kind load docker-image animal-facts:1.0 --name kind-cluster

We will use this Docker image as the image for the deployment.
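Because the image was loaded directly into the cluster rather than pushed to a registry, the deployment's container spec should use a pull policy that relies on the local image. A hypothetical fragment (the container name and structure are assumptions):

```yaml
# Container spec fragment for the deployment
containers:
  - name: animal-facts
    image: animal-facts:1.0
    # Never pull from a registry; use the image loaded via `kind load`
    imagePullPolicy: Never
```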

In the application repo, you will see an argocd directory containing an argocd-application.yaml file and a kustomize directory. The file holds the Application config used to deploy the app to ArgoCD; you can read more about it here. The kustomize directory is where the ArgoCD Application sources the config files for the deployment, as we can see from this section of the file:

spec:
  source:
    path: argocd/kustomize
    repoURL: git@github.com:<your-github-repo>/animalFacts.git
    targetRevision: HEAD
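For context, a complete Application manifest along these lines also specifies a destination cluster/namespace and, optionally, a sync policy. The fields beyond the source block below are illustrative assumptions, not necessarily what the repo uses:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: animal-facts
  namespace: argocd
spec:
  project: default
  source:
    path: argocd/kustomize
    repoURL: git@github.com:<your-github-repo>/animalFacts.git
    targetRevision: HEAD
  destination:
    # The in-cluster API server address ArgoCD uses for its own cluster
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}
```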

To be able to pull from the GitHub repo, we will need to add it to ArgoCD. We can do that with:

argocd repo add git@github.com:<your-github-repo>/animalFacts.git --insecure-ignore-host-key --ssh-private-key-path ~/.ssh/id_ed25519

Ensure that you have an SSH key created and set up for your GitHub account; the steps for that can be found here.

Deploy Application

To deploy the app, go into the app directory and from there, run:

kubectl apply -f argocd/argocd-application.yaml

You should be able to see the application in ArgoCD now 🥳

argocd app