
Opstella Workers

🟢 Management

Opstella Workers are microservices that integrate with DevSecOps Tools, Observability Tools, Keycloak, and Kubernetes.

You should have already determined the list of tools/instruments in the Software Resources Preparation / Determine List of Opstella Workers section.

The number of Opstella Workers depends on which DevSecOps Tools and Observability Tools are enabled across the platform, so it may vary with your requirements.

For Example:

Hard Requirement: Opstella Keycloak Service - Install worker-keycloak

Hard Requirement: Kubernetes - Install worker-kubernetes

DevSecOps:

  • ArgoCD: Install worker-argocd
  • DefectDojo: Install worker-defectdojo
  • GitLab: Install worker-gitlab
  • Headlamp: Install worker-headlamp
  • Harbor: Install worker-harbor
  • SonarQube: Install worker-sonarqube
  • Vault: Install worker-vault

Observability:

  • Grafana LGTM Stack: Install worker-grafana, worker-loki, worker-tempo

In conclusion, this is the list of Opstella Workers required to install:

  • worker-keycloak
  • worker-kubernetes
  • worker-argocd
    • Because the Opstella Architecture and Kubernetes cluster layout follow the Reference Architecture, a separate worker-argocd is required for each of the 🟦 Non-Production Workload and 🟥 Production Workload clusters:
      • worker-argocd-nonprod
      • worker-argocd-prod
  • worker-defectdojo
  • worker-gitlab
  • worker-headlamp
    • Like worker-argocd, Headlamp follows the Reference Architecture and is deployed per workload cluster:
      • worker-headlamp-nonprod
      • worker-headlamp-prod
  • worker-harbor
  • worker-sonarqube
  • worker-vault
  • worker-grafana
  • worker-loki
  • worker-tempo

The resulting list of Opstella Workers, expressed as a shell array, will be:

#!/bin/bash
export OPSTELLA_ENABLED_INSTRUMENTS=(keycloak kubernetes argocd-nonprod argocd-prod defectdojo gitlab headlamp-nonprod headlamp-prod harbor sonarqube vault grafana loki tempo)
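
As a quick sanity check, you can print the enabled instruments to confirm they match the worker list above (a minimal sketch; the array contents depend on which tools you enabled):

# Prints one "worker-<instrument>" name per line
printf 'worker-%s\n' "${OPSTELLA_ENABLED_INSTRUMENTS[@]}"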

  1. Connect to 🟢 Management Kubernetes Cluster, e.g. with a Kubeconfig File

    Ensure you have defined and loaded your Global Shell Variables as described in Shell Variables.

    source $HOME/opstella-installation/shell-values/kubernetes/management_cluster.vars.sh

    export KUBECONFIG="$HOME/opstella-installation/kubeconfigs/management_cluster.yaml"

    Ensure BASE_DOMAIN is defined as per the Shell Variables guide.
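
    Optionally, verify the connection and the required variable before proceeding (a minimal sketch using standard kubectl commands and shell parameter expansion):

    # Confirm kubectl points at the Management cluster and it is reachable
    kubectl config current-context
    kubectl get nodes
    # Fails with an error message if BASE_DOMAIN is unset
    echo "${BASE_DOMAIN:?BASE_DOMAIN is not set}"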

  2. Helm Values Preparation

    Important Configurations

    • image.repository, image.tag: Set your container image location and version

    export OPSTELLA_REGISTRY="asia-southeast1-docker.pkg.dev/opstella/platform"
    export OPSTELLA_WORKER_VERSION=vX.Y.Z ## CHANGEME

    for KEY in "${OPSTELLA_ENABLED_INSTRUMENTS[@]}"
    do
      # Workers that share a container image map to a common IMAGE_NAME
      if echo "$KEY" | grep -q -e "^argocd"; then
        export WORKER_NAME=$KEY
        export IMAGE_NAME=argocd
      elif echo "$KEY" | grep -q -e "^headlamp"; then
        export WORKER_NAME=$KEY
        export IMAGE_NAME=kubernetes
      elif echo "$KEY" | grep -q -e "^kubernetes"; then
        export WORKER_NAME=$KEY
        export IMAGE_NAME=kubernetes
      else
        export WORKER_NAME=$KEY
        export IMAGE_NAME=$KEY
      fi
      # Use > (not >>) so re-running the loop overwrites each values file
      # instead of appending duplicate content to it
      cat <<EOF > $HOME/opstella-installation/helm-values/opstella-worker-${WORKER_NAME}-full-values.yaml
    image:
      repository: ${OPSTELLA_REGISTRY}/worker-${IMAGE_NAME}
      tag: ${OPSTELLA_WORKER_VERSION}
      pullPolicy: Always
    nameOverride: worker-${WORKER_NAME}
    fullnameOverride: worker-${WORKER_NAME}
    serviceAccount:
      name:
    imagePullSecrets:
      - name: registry-secret
    env:
      - name: WORKER_NAME
        value: ${WORKER_NAME}
      - name: INVOKE_URL
        value: http://localhost:3500/v1.0/invoke/opstella-core/method
      - name: STATE_STORE_URL
        value: http://localhost:3500/v1.0/state/statestore
      - name: INVOKE_URL_WORKER
        value: http://localhost:3500/v1.0/invoke
    containerPorts: 3000
    service:
      port: 3000
    # Waiting time in seconds for shutting down the pod after SIGTERM is sent
    # terminationGracePeriodSeconds: 30
    healthCheck:
      enabled: true
      liveness:
        httpGet:
          path: "/healthcheck"
          port: 3000
        initialDelaySeconds: 180
        periodSeconds: 30
      readiness:
        httpGet:
          path: "/"
          port: 3000
        initialDelaySeconds: 10
        periodSeconds: 5
    podAnnotations:
      dapr.io/enabled: "true"
      dapr.io/app-id: "${WORKER_NAME}"
      dapr.io/app-port: "3000"
      dapr.io/enable-api-logging: "true"
      dapr.io/config: "config"
      dapr.io/sidecar-seccomp-profile-type: "RuntimeDefault"
    podSecurityContext:
      fsGroup: 1000
    securityContext:
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop: ["ALL"]
      runAsNonRoot: true
      privileged: false
      allowPrivilegeEscalation: false
      runAsGroup: 1000
      runAsUser: 1000
    EOF
    done
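
    You can list the generated values files to confirm that one was written per enabled instrument (a minimal sketch; the paths follow the naming convention used above):

    ls -1 $HOME/opstella-installation/helm-values/opstella-worker-*-full-values.yaml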

    This is why worker-argocd must be deployed as two separate instances:

    • worker-argocd-nonprod
    • worker-argocd-prod

    However, both instances can share the same container image.

    The generation loop above keys the image repository off the worker name. Run the following commands to make sure both argocd values files point at the shared worker-argocd image repository:

    sed -i "s#repository:.*#repository: $OPSTELLA_REGISTRY/worker-argocd#g" $HOME/opstella-installation/helm-values/opstella-worker-argocd-nonprod-full-values.yaml
    sed -i "s#repository:.*#repository: $OPSTELLA_REGISTRY/worker-argocd#g" $HOME/opstella-installation/helm-values/opstella-worker-argocd-prod-full-values.yaml
  3. Install Opstella Workers

    Install the Helm releases using the local opstella-platform Helm Chart:

    for key in "${OPSTELLA_ENABLED_INSTRUMENTS[@]}"
    do
      helm install worker-${key} $HOME/opstella-installation/helm-charts/opstella-platform-chart \
        --namespace opstella-system \
        -f $HOME/opstella-installation/helm-values/opstella-worker-${key}-full-values.yaml
    done
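
    To confirm that every release was created, you can list them (a minimal sketch using standard Helm and grep):

    helm list -n opstella-system | grep "^worker-"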
  4. Get Pod Status - Opstella Workers

    kubectl get pods -n opstella-system

    All Opstella Workers should be in the Running state:

    NAME                                  READY   STATUS    RESTARTS   AGE
    ... (truncated)
    worker-argocd-nonprod-XXXXXXX-YYYYY   1/1     Running   0          XdXh
    worker-argocd-prod-XXXXXXX-YYYYY      1/1     Running   0          XdXh
    worker-defectdojo-XXXXXXX-YYYYY       1/1     Running   0          XdXh
    worker-gitlab-XXXXXXX-YYYYY           1/1     Running   0          XdXh
    worker-grafana-XXXXXXX-YYYYY          1/1     Running   0          XdXh
    worker-harbor-XXXXXXX-YYYYY           1/1     Running   0          XdXh
    ... (truncated)
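
    If any pod is not yet Running, you can narrow the view to the problem pods (a minimal sketch using a standard kubectl field selector):

    kubectl get pods -n opstella-system --field-selector=status.phase!=Running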

Finished?

Use the navigation below to proceed.