# Opstella Workers
Cluster: 🟢 Management
Opstella Workers are microservices that integrate with DevSecOps Tools, Observability Tools, Keycloak, and Kubernetes.
## Preparation

### Determine Required Opstella Workers
As you determined in the Software Resources Preparation / Determine List of Opstella Workers section, the number of Opstella Workers depends on which DevSecOps Tools and Observability Tools are enabled across the platform, so it may vary with your requirements.
For example:

- Hard requirement: Opstella Keycloak Service - install `worker-keycloak`
- Hard requirement: Kubernetes - install `worker-kubernetes`
- DevSecOps:
  - ArgoCD: install `worker-argocd`
  - DefectDojo: install `worker-defectdojo`
  - GitLab: install `worker-gitlab`
  - Headlamp: install `worker-headlamp`
  - Harbor: install `worker-harbor`
  - SonarQube: install `worker-sonarqube`
  - Vault: install `worker-vault`
- Observability:
  - Grafana LGTM Stack: install `worker-grafana`, `worker-loki`, `worker-tempo`
In conclusion, this is the list of Opstella Workers required to install:

- `worker-keycloak`
- `worker-kubernetes`
- `worker-argocd` - Because the Opstella architecture and the Kubernetes cluster layout follow the Reference Architecture, a separate instance is required for the 🟦 Non-Production Workload cluster and the 🟥 Production Workload cluster: `worker-argocd-nonprod` and `worker-argocd-prod`
- `worker-defectdojo`
- `worker-gitlab`
- `worker-headlamp`
- `worker-harbor`
- `worker-sonarqube`
- `worker-vault`
- `worker-grafana`
- `worker-loki`
- `worker-tempo`
The resulting list of Opstella Workers will be:

```bash
#!/bin/bash
export OPSTELLA_ENABLED_INSTRUMENTS=(keycloak kubernetes argocd-nonprod argocd-prod defectdojo gitlab headlamp-nonprod headlamp-prod harbor sonarqube vault grafana loki tempo)
```

### Opstella Worker Preparation
1. Connect to the 🟢 Management Kubernetes cluster, e.g. with its kubeconfig file. Ensure you have defined and loaded your global shell variables as described in Shell Variables.

```bash
source $BASE_WORKING_DIR/shell-values/kubernetes/management_cluster.vars.sh
```

Ensure `BASE_DOMAIN` is defined as per the Shell Variables guide.
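Before generating any Helm values, it can help to confirm that the shell variables the later steps rely on are actually loaded. The helper below is a hypothetical sketch, not part of the official tooling; the variable names in the usage example are the ones referenced throughout this guide.

```shell
#!/bin/bash
# Hypothetical helper: report any shell variables that are unset or empty.
check_vars() {
  local var missing=0
  for var in "$@"; do
    if [ -z "${!var}" ]; then          # bash indirect expansion
      echo "Missing required variable: $var" >&2
      missing=1
    fi
  done
  return $missing
}

# Example usage with the variables referenced in this guide:
# check_vars BASE_WORKING_DIR BASE_DOMAIN OPSTELLA_REGISTRY OPSTELLA_WORKER_IMAGE_TAG
```

If any variable is reported missing, re-source your variable files before continuing.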
2. Helm Values Preparation

Important configurations:

- `image.repository`, `image.tag`: set your container image location and version.

(Ensure `OPSTELLA_REGISTRY` is loaded from your variables.)

```bash
for KEY in "${OPSTELLA_ENABLED_INSTRUMENTS[@]}"
do
  if echo $KEY | grep -e "^argocd"; then
    export WORKER_NAME=$KEY
    export IMAGE_NAME=argocd
  elif echo $KEY | grep -e "^headlamp"; then
    export WORKER_NAME=$KEY
    export IMAGE_NAME=kubernetes
  elif echo $KEY | grep -e "^kubernetes"; then
    export WORKER_NAME=$KEY
    export IMAGE_NAME=kubernetes
  else
    export WORKER_NAME=$KEY
    export IMAGE_NAME=$KEY
  fi
  cat <<EOF > $BASE_WORKING_DIR/helm-values/opstella-worker-${WORKER_NAME}-full-values.yaml
image:
  repository: ${OPSTELLA_REGISTRY}/platform/worker-${IMAGE_NAME}
  tag: ${OPSTELLA_WORKER_IMAGE_TAG}
  pullPolicy: Always
nameOverride: worker-${WORKER_NAME}
fullnameOverride: worker-${WORKER_NAME}
serviceAccount:
  name:
imagePullSecrets:
  - name: registry-secret
env:
  - name: WORKER_NAME
    value: ${WORKER_NAME}
  - name: INVOKE_URL
    value: http://localhost:3500/v1.0/invoke/opstella-core/method
  - name: STATE_STORE_URL
    value: http://localhost:3500/v1.0/state/statestore
  - name: INVOKE_URL_WORKER
    value: http://localhost:3500/v1.0/invoke
containerPorts: 3000
service:
  port: 3000
# Waiting time in seconds for shutting down the pod after SIGTERM is sent
# terminationGracePeriodSeconds: 30
healthCheck:
  enabled: true
  liveness:
    httpGet:
      path: "/healthcheck"
      port: 3000
    initialDelaySeconds: 180
    periodSeconds: 30
  readiness:
    httpGet:
      path: "/"
      port: 3000
    initialDelaySeconds: 10
    periodSeconds: 5
podAnnotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "${WORKER_NAME}"
  dapr.io/app-port: "3000"
  dapr.io/enable-api-logging: "true"
  dapr.io/config: "config"
  dapr.io/sidecar-seccomp-profile-type: "RuntimeDefault"
podSecurityContext:
  fsGroup: 1000
securityContext:
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop: ["ALL"]
  runAsNonRoot: true
  privileged: false
  allowPrivilegeEscalation: false
  runAsGroup: 1000
  runAsUser: 1000
EOF
done
```

This is why `worker-argocd` needs to be deployed as two separate instances, `worker-argocd-nonprod` and `worker-argocd-prod`; however, the container image can be shared between them.
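The worker-name-to-image-name mapping applied by the `if`/`elif` chain in the loop above can be restated as a small function. This is an illustrative sketch only, not part of the official tooling:

```shell
#!/bin/bash
# Sketch of the mapping performed by the loop above: argocd-* workers
# share the argocd image, headlamp-* workers share the kubernetes image,
# and every other worker uses an image named after itself.
image_name_for() {
  case "$1" in
    argocd*)   echo argocd ;;
    headlamp*) echo kubernetes ;;
    *)         echo "$1" ;;
  esac
}

image_name_for argocd-nonprod   # → argocd
image_name_for headlamp-prod    # → kubernetes
image_name_for gitlab           # → gitlab
```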
The loop in the previous Helm Values Preparation step was intended to ease the operation, but it generates an incorrect `image.repository` for the ArgoCD workers. This step fixes it:

```bash
sed -i "s#repository:.*#repository: $OPSTELLA_REGISTRY/worker-argocd#g" $BASE_WORKING_DIR/helm-values/opstella-worker-argocd-nonprod-full-values.yaml
sed -i "s#repository:.*#repository: $OPSTELLA_REGISTRY/worker-argocd#g" $BASE_WORKING_DIR/helm-values/opstella-worker-argocd-prod-full-values.yaml
```
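Before installing, you can optionally verify that a values file exists for every enabled instrument. The helper below is a hypothetical pre-flight check, not part of the official steps:

```shell
#!/bin/bash
# Hypothetical pre-flight check: verify a values file exists for each
# enabled instrument before running helm.
check_values_files() {
  local dir=$1; shift
  local key missing=0
  for key in "$@"; do
    if [ ! -f "$dir/opstella-worker-${key}-full-values.yaml" ]; then
      echo "Missing values file for: $key" >&2
      missing=1
    fi
  done
  return $missing
}

# Example usage:
# check_values_files "$BASE_WORKING_DIR/helm-values" "${OPSTELLA_ENABLED_INSTRUMENTS[@]}"
```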
## Installation

1. Install Opstella Workers

Install a Helm release for each worker using the `generic-deployment` Helm chart:

```bash
for key in "${OPSTELLA_ENABLED_INSTRUMENTS[@]}"
do
  helm upgrade --install worker-${key} \
    oci://asia-southeast1-docker.pkg.dev/opstella-dev/opstella-charts/generic-deployment \
    --version 0.3.15 \
    --namespace opstella-system \
    -f $BASE_WORKING_DIR/helm-values/opstella-worker-${key}-full-values.yaml
done
```
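The loop above creates one Helm release per enabled instrument, named `worker-<instrument>`. As a sketch (illustrative only), the expected release names can be derived like this:

```shell
#!/bin/bash
# Sketch: print the Helm release name the install loop creates for each
# enabled instrument.
release_names() {
  local key
  for key in "$@"; do
    echo "worker-${key}"
  done
}

release_names keycloak argocd-nonprod
# → worker-keycloak
# → worker-argocd-nonprod
```

After installation, you can compare this list against the output of `helm list -n opstella-system`.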
## Post-Installation

### Opstella Workers Testing

1. Get Pod Status - Opstella Workers

```bash
kubectl get pods -n opstella-system
```

All Opstella Worker pods should be `Running`:

```
NAME                                  READY   STATUS    RESTARTS   AGE
... (truncated)
worker-argocd-nonprod-XXXXXXX-YYYYY   1/1     Running   0          XdXh
worker-argocd-prod-XXXXXXX-YYYYY      1/1     Running   0          XdXh
worker-defectdojo-XXXXXXX-YYYYY       1/1     Running   0          XdXh
worker-gitlab-XXXXXXX-YYYYY           1/1     Running   0          XdXh
worker-grafana-XXXXXXX-YYYYY          1/1     Running   0          XdXh
worker-harbor-XXXXXXX-YYYYY           1/1     Running   0          XdXh
... (truncated)
```
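If some pods are not yet `Running`, a small filter can narrow the output down to the problem pods. This helper is a hypothetical sketch (it only parses text, so the usage line with `kubectl` assumes cluster access):

```shell
#!/bin/bash
# Hypothetical helper: filter `kubectl get pods` output down to pods whose
# STATUS column is not "Running" (skips the header line).
not_running() {
  awk 'NR > 1 && $3 != "Running" { print $1 }'
}

# Example usage against the cluster:
# kubectl get pods -n opstella-system | not_running
```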
Finished? Use the navigation below to proceed.