3sky's notes

Minimal blog about IT

Go app on Kubernetes from scratch

2020-08-20 6 min read 3sky

Welcome

I like GitHub Actions, I like Kubernetes, and I want to learn more about Helm. So maybe I should combine these tools into a smooth pipeline? Why not? Also, I switched to Fedora, and that's a great moment to check out Podman in action. No time to wait, let's go.

Tools used in this episode

  • GitHub Actions
  • Podman
  • Kubernetes
  • Terraform
  • Helm
  • GCP
  • A bit of Golang :)

Build the app

The first step is building a small app. I decided to use Golang because it's an awesome language for microservices and its testing story is clean.

  1. Create directories for the app and infra parts

    mkdir -pv app infra
    
  2. Go to the app directory and create main.go

    package main
    
    import "fmt"
    
    func main() {
       fmt.Println("Hello World!")
    }
    
  3. Init go mod

    go mod init 3sky/k8s-app
    

Write code

I decided to use the Echo framework: I like it, it's fast, and its logger is easy to use.

The app has two endpoints:

  • /hello - which returns Hello World!

  • /status - which returns the app status (OK)

    package main
    
    import (
        "net/http"
        "time"
        "github.com/labstack/echo/v4"
        "github.com/labstack/echo/v4/middleware"
    )
    // Greetings ...
    type Greetings struct {
        Greet string    `json:"greet"`
        Date  time.Time `json:"date"`
    }
    // Status ...
    type Status struct {
        Status string `json:"status"`
    }
    func main() {
         // Echo instance
         e := echo.New()
        // Middleware
        e.Use(middleware.LoggerWithConfig(middleware.LoggerConfig{
           Format: "method=${method}, uri=${uri}, status=${status}\n",
        }))
        e.Use(middleware.Recover())
        // Routes
        e.GET("/hello", HelloHandler)
        e.GET("/status", StatusHandler)
        // Start server
        e.Logger.Fatal(e.Start(":1323"))
    }
    // HelloHandler ...
    func HelloHandler(c echo.Context) error {
        return c.JSON(http.StatusOK, &Greetings{Greet: "Hello, World!", Date: time.Now()})
    }
    // StatusHandler ...
    func StatusHandler(c echo.Context) error {
        return c.JSON(http.StatusOK, &Status{Status: "OK"})
    }
    
  1. Download dependencies

    go mod tidy
    
  2. Run the code

    go run main.go
    
  3. Add some basic tests

    package main
    
    import (
       "encoding/json"
       "net/http"
       "net/http/httptest"
       "testing"
       "github.com/labstack/echo/v4"
    )
    var (
        g = Greetings{}
        s = Status{}
    )
    func TestGreetings(t *testing.T) {
       e := echo.New()
       req := httptest.NewRequest(http.MethodGet, "/", nil)
       rec := httptest.NewRecorder()
       c := e.NewContext(req, rec)
       if err := HelloHandler(c); err != nil {
           t.Fatal(err)
       }
       if rec.Code != 200 {
           t.Errorf("Expected status code is %d, but it was %d instead.", http.StatusOK, rec.Code)
       }
       json.NewDecoder(rec.Body).Decode(&g)
       if g.Greet != "Hello, World!" {
           t.Errorf("Expected value is \"Hello, World!\", but it was %s instead.", g.Greet)
       }
    }
    func TestStatus(t *testing.T) {
       e := echo.New()
       req := httptest.NewRequest(http.MethodGet, "/status", nil)
       rec := httptest.NewRecorder()
       c := e.NewContext(req, rec)
       if err := StatusHandler(c); err != nil {
           t.Fatal(err)
       }
       if rec.Code != 200 {
           t.Errorf("Expected status code is %d, but it was %d instead.", http.StatusOK, rec.Code)
       }
       json.NewDecoder(rec.Body).Decode(&s)
       if s.Status != "OK" {
           t.Errorf("Expected value is \"OK\", but it was %s instead.", s.Status)
       }
    }
    
    
  4. Run tests

    go test ./...
    

Containerization with Podman

We need to package our awesome app. To do that I decided to use Podman. What is Podman? It is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. That said, I still prefer writing a Dockerfile the Docker way; Buildah is not for me, at least for now.

Create the container

  1. Create Dockerfile

    # Dockerfile
    FROM golang:alpine as builder
    RUN apk add --no-cache git gcc libc-dev
    WORKDIR /build/app
    # Get dependencies
    COPY go.mod go.sum ./
    RUN go mod download
    # Run tests
    COPY . ./
    RUN go test -v ./...
    # Build app
    RUN go build -o myapp
    FROM alpine
    COPY --from=builder /build/app/myapp ./myapp
    EXPOSE 1323
    CMD ["./myapp"]
    
  2. Build an image

    podman build -t k8s-app .
    
  3. Run image

    podman run -d -p 8080:1323 k8s-app:latest
    
  4. Run basic curl tests

    curl -s localhost:8080/status | jq .
    curl -s localhost:8080/hello | jq .
    

Configure GCP

OK, we have a working app; now we need to create a Kubernetes cluster for our deployment.

Working with GCP

  1. Auth into GCP

    gcloud auth login
    
  2. Create a new project

    gcloud projects create [PROJECT_ID] --enable-cloud-apis
    
    # --enable-cloud-apis
    # enable cloudapis.googleapis.com during creation
    # example
    # gcloud projects create calcium-hobgoblins --enable-cloud-apis
    
  3. Check existing projects

    gcloud projects list
    PROJECT_ID               NAME                     PROJECT_NUMBER
    calcium-hobgoblins       calcium-hobgoblins       xxxx
    
  4. Set gcloud project

    gcloud config set project calcium-hobgoblins
    
  5. Create a service account and add the necessary permissions

    gcloud iam service-accounts create calcium-hobgoblins-user \
    --description "Service user for GKE and GitHub Action" \
    --display-name "calcium-hobgoblins-user"
    
    gcloud projects add-iam-policy-binding calcium-hobgoblins --member \
    serviceAccount:calcium-hobgoblins-user@calcium-hobgoblins.iam.gserviceaccount.com \
    --role roles/compute.admin
    
    gcloud projects add-iam-policy-binding calcium-hobgoblins --member \
    serviceAccount:calcium-hobgoblins-user@calcium-hobgoblins.iam.gserviceaccount.com \
    --role roles/storage.admin
    
    gcloud projects add-iam-policy-binding calcium-hobgoblins --member \
    serviceAccount:calcium-hobgoblins-user@calcium-hobgoblins.iam.gserviceaccount.com \
    --role roles/container.admin
    
    gcloud projects add-iam-policy-binding calcium-hobgoblins --member \
    serviceAccount:calcium-hobgoblins-user@calcium-hobgoblins.iam.gserviceaccount.com \
    --role roles/iam.serviceAccountUser
    
  6. List permissions for calcium-hobgoblins

    gcloud projects get-iam-policy calcium-hobgoblins  \
    --flatten="bindings[].members" \
    --format='table(bindings.role)' \
    --filter="bindings.members:calcium-hobgoblins-user@calcium-hobgoblins.iam.gserviceaccount.com"
    

Push initial image to container registry

After setting up the cloud project, we finally have access to the container registry.

Auth and Push

  1. Authenticate to the container registry

    gcloud auth activate-service-account \
    calcium-hobgoblins-user@calcium-hobgoblins.iam.gserviceaccount.com \
    --key-file=/home/kuba/.gcp/calcium-hobgoblins.json
    
    gcloud auth print-access-token | podman login \
    -u oauth2accesstoken \
    --password-stdin https://gcr.io
    
  2. Push image into gcr.io

    podman push localhost/k8s-app:latest docker://gcr.io/calcium-hobgoblins/k8s-app:0.0.1
    

Provision the Kubernetes Cluster

After setting up our GCP project, we need to provision our K8s cluster.

  1. Create auth file

    mkdir -pv ~/.gcp
    gcloud iam service-accounts keys create ~/.gcp/calcium-hobgoblins.json \
    --iam-account calcium-hobgoblins-user@calcium-hobgoblins.iam.gserviceaccount.com
    
  2. Create a basic directory structure

    cd ../infra
    mkdir -pv DEV Module/GKE
    
  3. Terraform directory structure looks like that:

    .
    ├── DEV
    │   ├── main.tf
    │   └── variables.tf
    └── Module
        └── GKE
            ├── main.tf
            └── variables.tf
    
  4. Init Terraform

    cd DEV
    terraform init
    
  5. Permissions are important

    If we forget about devstorage, our cluster will have problems pulling images…

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only"
    ]
    
  6. Terraform apply

    terraform apply -var="path=~/.gcp/calcium-hobgoblins.json"
    
  7. Config kubectl

    export cls_name=my-gke-cluster
    export cls_zone=europe-west3-a
    gcloud container clusters list
    gcloud container clusters get-credentials $cls_name --zone $cls_zone
    kubectl get node
    
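The oauth_scopes snippet in step 5 sits in the node configuration of the GKE module. A minimal sketch of what Module/GKE/main.tf can look like under that assumption — resource names, machine type, and node count here are illustrative, not the original source:

```terraform
# Module/GKE/main.tf (sketch, illustrative names)
resource "google_container_cluster" "gke" {
  name                     = var.cluster_name
  location                 = var.zone
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "nodes" {
  name       = "default-pool"
  cluster    = google_container_cluster.gke.name
  location   = var.zone
  node_count = 2

  node_config {
    machine_type = "e2-small"
    # Without devstorage.read_only the nodes cannot pull images from gcr.io
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only",
    ]
  }
}
```

The DEV/main.tf entry point then just calls this module with the variables (zone, cluster name, credentials path) filled in.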

Prepare Helm release

When we have a working cluster, we can prepare a Helm chart. It's also a good time to install an ingress controller.

  1. Init example helm chart

    cd ../..
    mkdir helm-chart
    cd helm-chart
    helm create k8s-app
    
  2. Install Ingress with Helm (nginx)

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm install release ingress-nginx/ingress-nginx
    
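The generated chart still points at the default image, so its values.yaml needs to reference the image we pushed to gcr.io earlier. A sketch of the relevant overrides — the key names follow the default `helm create` scaffold, and the port matches the app:

```yaml
# helm-chart/k8s-app/values.yaml (fragment, assumed overrides)
image:
  repository: gcr.io/calcium-hobgoblins/k8s-app
  tag: "0.0.1"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 1323
```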

Add GitHub Actions Pipeline

As an easy and great CI tool, I decided to use GitHub Actions again.

  1. Add two files

    mkdir -pv .github/workflows
    touch .github/workflows/no-release.yml
    touch .github/workflows/release.yml
    
  2. Add content to no-release file

    This workflow runs every time code is pushed to the repository.

  3. Add content to release file

    This workflow runs only when the pushed code is tagged with a v* pattern.

  4. Set GH Secrets

    PROJECT_ID - the project ID - calcium-hobgoblins
    GCP_SA_KEY - the auth file, encoded in base64

    cat ~/.gcp/calcium-hobgoblins.json | base64
    
  5. Push some code into the repo

    git push origin master
    git push origin v0.0.1
    
  6. Check the status of pods

    kubectl get pods
    kubectl describe pod <pod-name>
    helm list --filter k8s-app
    
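A hedged sketch of the release workflow from step 3 — the job layout, action versions, release name, and cluster coordinates below are assumptions, not the original file (no-release.yml looks the same minus the build-and-deploy steps):

```yaml
# .github/workflows/release.yml (sketch, assumed content)
name: release
on:
  push:
    tags: ['v*']
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
      - name: Run tests
        run: go test ./...
        working-directory: app
      - name: Build and push image
        env:
          PROJECT_ID: ${{ secrets.PROJECT_ID }}
          GCP_SA_KEY: ${{ secrets.GCP_SA_KEY }}
        run: |
          echo "$GCP_SA_KEY" | base64 -d > key.json
          gcloud auth activate-service-account --key-file=key.json
          gcloud auth configure-docker
          docker build -t gcr.io/$PROJECT_ID/k8s-app:${GITHUB_REF#refs/tags/v} app
          docker push gcr.io/$PROJECT_ID/k8s-app:${GITHUB_REF#refs/tags/v}
      - name: Deploy with Helm
        run: |
          gcloud container clusters get-credentials my-gke-cluster --zone europe-west3-a
          helm upgrade --install release-k8s-app helm-chart/k8s-app \
            --set image.tag=${GITHUB_REF#refs/tags/v}
```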

Summary

As you can see, there are no source files for Terraform and Helm. I decided on that because the post is long enough even without them :)
What else? I like Podman: it just works, without root permissions on the host. I still have some problems with Buildah; it's a bit uncomfortable for me. Maybe in the future, after another attempt.
Setting up a K8s cluster is easy with Terraform, but if we are planning a production deployment, all the factors become more complicated.
Helm also looks like a nice tool when you have a lot of similar deployments, and tracking release history is a cool feature. Unfortunately, it's not a magic tool and doesn't solve all our CI/CD problems.

You can find all the code here