The nerdier crew members at Proper like to automate our code deployments as much as possible. Recently, we’ve switched to serving a number of our client sites on a DigitalOcean managed Kubernetes cluster. For us, this is great – DigitalOcean manage the nitty-gritty of keeping the servers and cluster running, and all we’ve got to do is deploy code in working Docker containers.

This works really well, but one step was missing – getting Kubernetes to re-deploy the service in response to a Docker image being built on Bitbucket Pipelines, our chosen CI tool.

There are a number of guides out there on how to get Kubernetes to re-deploy in response to a new image (this one on tueri.io was the clearest, in my opinion, and most of this blog post is based on their work). However, none of them talk explicitly about deploying to a DigitalOcean managed cluster. There’s an important difference between DigitalOcean and a DIY cluster: when generated for a DigitalOcean managed cluster, the $KUBECONFIG that Tueri store in their Bitbucket secrets expires every seven days. That’s no good for us, as it would mean going into the Bitbucket Pipelines settings once a week to update the $KUBECONFIG environment variable.
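You can see this expiry for yourself by decoding the client certificate embedded in a freshly-downloaded kubeconfig (a quick sanity check, assuming the config uses the default certificate-based auth and you have openssl to hand):

# Print the expiry date of the client certificate in kubeconfig.yml
# (on macOS you may need `base64 -D` instead of `base64 -d`)
grep 'client-certificate-data' kubeconfig.yml | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate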

Fortunately, DigitalOcean’s command line tool doctl can be used to generate Kubernetes config strings on demand, and crucially, the API token that doctl authenticates with doesn’t expire after a week.
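To see what that looks like in practice, here’s the same flow run locally (a sketch assuming doctl is installed on your machine and your cluster is called proper-prod, like ours):

# Authenticate doctl once with your DigitalOcean API token
doctl auth init
# Write a fresh kubeconfig for the cluster, then use it with kubectl
doctl kubernetes cluster kubeconfig show proper-prod > kubeconfig.yml
kubectl --kubeconfig=kubeconfig.yml get pods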

With all that said, setting this up is pretty simple. First, we need an API token from DigitalOcean. Make sure you generate a new one rather than reusing an existing one, so that if your Bitbucket credentials are ever compromised, you can revoke just this token without affecting any other services your team has set up.
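It’s worth sanity-checking the new token from your own machine before storing it anywhere ($YOUR_NEW_TOKEN here is just a placeholder for the token you copied):

# Should print your account details if the token is valid
doctl -t $YOUR_NEW_TOKEN account get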

Then, take that generated token and add it to Bitbucket Pipelines as an environment variable named DOCTL_TOKEN. You can add it on a repo-by-repo basis; however, because we work on dozens of client sites, I find it more useful to add it as a team-level environment variable that’s accessible from every repo in the team.
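If you’d rather script this than click through the settings UI, Bitbucket’s 2.0 API can create pipeline variables. A rough sketch for a single repo (the workspace and repo slug are placeholders, and the exact endpoint is worth double-checking against the current API docs):

# Create DOCTL_TOKEN as a secured pipeline variable on one repo
curl -X POST -u $BITBUCKET_USER:$APP_PASSWORD \
  -H "Content-Type: application/json" \
  -d '{"key": "DOCTL_TOKEN", "value": "<your-token>", "secured": true}' \
  https://api.bitbucket.org/2.0/repositories/<workspace>/<repo-slug>/pipelines_config/variables/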

Then, we add a bitbucket-pipelines.yml file like this:

image: atlassian/default-image:2
pipelines:
  branches:
    prod/*:
      - step:
          name: Docker build prod
          services:
            - docker
          caches:
            - docker
          script: # Modify the commands below to build your repository.
            # build the Docker image (this will use the Dockerfile in the root of the repo),
            # tagging it with both the commit hash and latest
            - docker build --memory=3072M -t properdesign/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT -t properdesign/$BITBUCKET_REPO_SLUG:latest .
            # authenticate with the Docker Hub registry
            - docker login -u $DOCKER_HUB_USER -p $DOCKER_HUB_PASSWORD
            # push both tags of the new Docker image to the Docker registry
            - docker push properdesign/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT
            - docker push properdesign/$BITBUCKET_REPO_SLUG:latest
      - step:
          name: Deploy to Kubernetes
          image: atlassian/pipelines-kubectl
          script:
            # Download and install `doctl` so that we can refresh configs for k8s
            - apk --no-cache add curl
            - curl -sL https://github.com/digitalocean/doctl/releases/download/v1.27.0/doctl-1.27.0-linux-amd64.tar.gz | tar -xzv
            - mv ./doctl /usr/local/bin
            # Generate a fresh kubeconfig for the cluster
            - doctl -t $DOCTL_TOKEN k8s cluster kubeconfig show proper-prod > kubeconfig.yml
            # Point the deployment at the newly-pushed image
            - kubectl --insecure-skip-tls-verify --kubeconfig=kubeconfig.yml set image deployment/${BITBUCKET_REPO_SLUG} ${BITBUCKET_REPO_SLUG}=properdesign/$BITBUCKET_REPO_SLUG:latest

The first step simply builds a Docker image and pushes it to our Docker Hub team account. The second step is what’s interesting here. The atlassian/pipelines-kubectl image has kubectl installed, but it’s based on Alpine and doesn’t ship with doctl. So we install doctl ourselves and then use it to generate a kubeconfig.yml file for kubectl to use.
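For that final kubectl set image call to work, the Deployment and its container both need to be named after the repo slug. A minimal sketch of a matching Deployment, assuming a repo slug of example-site (the names here are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-site # must match $BITBUCKET_REPO_SLUG
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-site
  template:
    metadata:
      labels:
        app: example-site
    spec:
      containers:
        - name: example-site # the container that `set image` targets
          image: properdesign/example-site:latest
          imagePullPolicy: Always

One caveat: kubectl set image only triggers a rollout when the image reference actually changes, so if the Deployment already points at :latest, deploying the commit-tagged image instead is the more reliable option (which is one reason the build step pushes both tags).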

It’s all a bit verbose, but it doesn’t need any project-by-project configuration and, more importantly, it doesn’t need tokens manually refreshing once per week.