# Velero Installation

## Overview
Velero is a Kubernetes utility used for backing up and restoring Kubernetes cluster resources and persistent volumes.
Velero will be installed on 🟢 Management 🟦 Non-Production Workload 🟥 Production Workload Clusters.
## Installation

The following steps describe how to install Velero on a Kubernetes cluster, using SeaweedFS HA as the S3-compatible backend storage.
### 1. Connect to the Kubernetes Cluster

Connect to the cluster, e.g. with its kubeconfig file, and ensure you have defined and loaded your global shell variables as described in Shell Variables.

```shell
source $HOME/opstella-installation/shell-values/kubernetes/YOUR-KUBERNETES-CLUSTER.vars.sh
```

```shell
export KUBECONFIG="/PATH/TO/YOUR/KUBECONFIG"
```
### 2. Export Required Shell Variables

Export the variables needed for S3 authentication and endpoint configuration.

```shell
# S3 credentials and endpoint
export SEAWEEDFS_HA_S3_VELERO_PASSWORD="CHANGEME"
export SEAWEEDFS_HA_API_DOMAIN="seaweedfs-s3.${BASE_DOMAIN}"
```
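An optional sanity check can catch unset or still-placeholder values before they get baked into manifests in later steps. This is an illustrative sketch: the exports are repeated with an example `BASE_DOMAIN` fallback only so the snippet runs standalone.

```shell
# Optional sanity check (illustrative). BASE_DOMAIN normally comes from the
# cluster vars file loaded in step 1; example.com is a standalone fallback.
BASE_DOMAIN="${BASE_DOMAIN:-example.com}"
export SEAWEEDFS_HA_S3_VELERO_PASSWORD="CHANGEME"
export SEAWEEDFS_HA_API_DOMAIN="seaweedfs-s3.${BASE_DOMAIN}"

missing=0
for name in BASE_DOMAIN SEAWEEDFS_HA_S3_VELERO_PASSWORD SEAWEEDFS_HA_API_DOMAIN; do
  eval "value=\${$name}"
  if [ -z "$value" ]; then
    echo "ERROR: $name is not set" >&2
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "required variables are set"
if [ "$SEAWEEDFS_HA_S3_VELERO_PASSWORD" = "CHANGEME" ]; then
  echo "reminder: SEAWEEDFS_HA_S3_VELERO_PASSWORD is still the placeholder" >&2
fi
```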
### 3. Create Namespace for Velero

```shell
kubectl create namespace cluster-utilities
```
### 4. Create S3 Storage Credential Secret

This secret contains the AWS-style credentials Velero needs to connect to SeaweedFS.

```shell
cat <<EOF > $HOME/opstella-installation/kubernetes-manifests/velero.yaml
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: velero-object-storage-credentials
  namespace: cluster-utilities
stringData:
  cloud: |
    [default]
    aws_access_key_id=velero
    aws_secret_access_key=${SEAWEEDFS_HA_S3_VELERO_PASSWORD}
EOF
```

```shell
kubectl apply -f $HOME/opstella-installation/kubernetes-manifests/velero.yaml
```
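Kubernetes stores `stringData` values base64-encoded under `.data`, which matters when you later want to inspect the applied secret. The sketch below round-trips the `cloud` INI content locally with an example password (no cluster required); the commented `kubectl` one-liner is the practical equivalent against a live cluster.

```shell
# Illustrative, cluster-free: the secret's "cloud" key is an AWS-style INI
# credentials file, stored base64-encoded in .data.cloud. To inspect it on a
# real cluster:
#   kubectl get secret velero-object-storage-credentials -n cluster-utilities \
#     -o jsonpath='{.data.cloud}' | base64 -d
SEAWEEDFS_HA_S3_VELERO_PASSWORD="example-password"   # example value only
cloud="[default]
aws_access_key_id=velero
aws_secret_access_key=${SEAWEEDFS_HA_S3_VELERO_PASSWORD}"

# Encode then decode, mimicking how Kubernetes stores and how you read it back.
decoded="$(printf '%s' "$cloud" | base64 | base64 -d)"
printf '%s\n' "$decoded"
```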
### 5. Add Velero Helm Repository

```shell
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts/
helm repo update
```
### 6. Create Velero Helm Values

```shell
cat <<EOF > $HOME/opstella-installation/helm-values/velero-values.yaml
fullnameOverride: velero
nameOverride: velero

# Whether to deploy the node-agent daemonset.
deployNodeAgent: true

# Exclude Velero from backing up itself
podLabels:
  velero.io/exclude-from-backup: "true"
labels:
  velero.io/exclude-from-backup: "true"

# SecurityContext to use for the Velero deployment. Optional.
# Set fsGroup for AWS IAM Roles for Service Accounts.
# See: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
podSecurityContext:
  fsGroup: 1000

# Container-level security context for the 'velero' container. Optional.
# See: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
containerSecurityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1000
  capabilities:
    drop: ["ALL"]
    add: []
  readOnlyRootFilesystem: true
  seccompProfile:
    type: RuntimeDefault

# Install Velero plugins
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.13.2
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop: ["ALL"]
        add: []
      readOnlyRootFilesystem: true
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
      - mountPath: /target
        name: plugins

upgradeCRDs: true

# This job is meant primarily for cleaning up CRDs on CI systems.
# Using this on production systems, especially those that have multiple
# releases of Velero, will be destructive.
cleanUpCRDs: false

kubectl:
  image:
    repository: bitnamilegacy/kubectl
    tag: "1.33.4"
  containerSecurityContext:
    allowPrivilegeEscalation: false
    runAsNonRoot: true
    runAsUser: 1000
    capabilities:
      drop: ["ALL"]
      add: []
    readOnlyRootFilesystem: true
    seccompProfile:
      type: RuntimeDefault

configuration:
  # logLevel: debug
  namespace: cluster-utilities
  backupStorageLocation:
    - name: default
      default: true
      provider: aws
      bucket: k8s-velero-backups
      prefix: opstella-cluster
      config:
        region: us-east-1
        s3ForcePathStyle: true
        s3Url: "https://${SEAWEEDFS_HA_API_DOMAIN}/"

# Use pre-created Kubernetes Secret credential
credentials:
  existingSecret: velero-object-storage-credentials

# Whether to create the volumesnapshotlocation CRD; if false, the snapshot
# feature is disabled
snapshotsEnabled: false

# Set up a scheduled backup
schedules:
  backup-daily:
    schedule: "@daily"
    template:
      # Opt-in approach
      includedNamespaces:
        - '*'
      ttl: 336h
      # defaultVolumesToFsBackup: true # Opt-out approach

# Enable the restore helper container
configMaps:
  fs-restore-action-config:
    labels:
      velero.io/plugin-config: ""
      velero.io/pod-volume-restore: RestoreItemAction
    data:
      image: velero/velero:v1.17.1
      cpuRequest: 200m
      memRequest: 128Mi
      cpuLimit: 200m
      memLimit: 128Mi
      secCtx: |
        capabilities:
          drop:
          - ALL
          add: []
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 999
EOF
```
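Because the heredoc above expands shell variables at write time, an unset variable silently leaves a literal `${...}` in the rendered file. A simple way to catch that is to grep the output; the sketch below demonstrates the check against a temp file with example content so it runs standalone, but in practice you would point it at `$HOME/opstella-installation/helm-values/velero-values.yaml`.

```shell
# Illustrative check for unexpanded shell variables in a rendered values file.
# Replace the temp file with the real velero-values.yaml path in practice.
VALUES_FILE="$(mktemp)"
printf 's3Url: "https://seaweedfs-s3.example.com/"\n' > "$VALUES_FILE"

if grep -q '\${' "$VALUES_FILE"; then
  result="unexpanded variables remain"
else
  result="values file fully rendered"
fi
echo "$result"
rm -f "$VALUES_FILE"
```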
### 7. Install Velero Helm Release

Install Velero into the `cluster-utilities` namespace using the values created above.

```shell
helm upgrade --install velero vmware-tanzu/velero \
  --namespace cluster-utilities \
  --version 11.3.2 \
  -f $HOME/opstella-installation/helm-values/velero-values.yaml
```
## Post-Installation
### 1. Verify Pod Status

```shell
kubectl get pods -n cluster-utilities -l app.kubernetes.io/name=velero
```

💡 The Velero pod should be `Running`:

```
NAME                      READY   STATUS    RESTARTS   AGE
velero-XXXXXXXXXX-YYYYY   1/1     Running   0          ...
```
### 2. Verify Backup Storage Location

Check that Velero has successfully connected to the SeaweedFS S3 bucket.

```shell
kubectl get backupstoragelocation -n cluster-utilities
```

The status should show `Available`.
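The BackupStorageLocation can take a short while to turn `Available` after installation, so polling beats eyeballing. The sketch below stubs the status check with `echo` so it runs anywhere; against a real cluster you would substitute the `kubectl` jsonpath query shown in the comment.

```shell
# Hedged sketch: poll a command until it reports the expected status or time
# out. The real check against a cluster would be something like:
#   kubectl get backupstoragelocation default -n cluster-utilities \
#     -o jsonpath='{.status.phase}'
wait_for_status() {
  expected="$1"; shift
  retries="$1"; shift
  i=0
  while [ "$i" -lt "$retries" ]; do
    status="$("$@")"
    if [ "$status" = "$expected" ]; then
      echo "status is $expected"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $expected" >&2
  return 1
}

# Demo with a stubbed check that immediately reports Available.
wait_for_status Available 3 echo Available
```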