Grafana Loki Installation

Grafana Loki is the log aggregation component of the LGTM Observability Stack.


Grafana Loki will be installed on 🟢 Management Kubernetes Cluster

  • 📥Ingress Service provided as Kubernetes Ingress Class (IngressClass)
  • 🛡️TLS Certificate for Grafana Loki provided as Kubernetes Secret
    • Grafana Loki will be exposed as HTTPS with Kubernetes Ingress.
  • 📦S3 API-compatible Object Storage for Logs Storage
    • 🪣S3 Buckets: A Unit of Logical Storage with a 🌏Region specified.
      • Grafana Loki uses two separate buckets:
        • Logs Storage (Chunks)
        • Ruler Component
    • 🔑Credentials to Access the S3 Buckets: Access Key, Secret Key.
      • Create/Gather a dedicated Access Key/Secret Key for Grafana Loki to access its buckets.

Ensure you have defined and loaded your Global Shell Variables as described in Shell Variables.

  1. Connect to 🟢 Management Kubernetes Cluster, e.g. with a Kubeconfig File

    Terminal window
    source $HOME/opstella-installation/shell-values/kubernetes/management_cluster.vars.sh
    source $HOME/opstella-installation/shell-values/tools/observability.vars.sh
    Terminal window
    export KUBECONFIG="$HOME/opstella-installation/kubeconfigs/management_cluster.yaml"
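
    Optionally, verify the connection before continuing; a quick sanity check (assuming kubectl can reach the cluster with this Kubeconfig):

    Terminal window
    kubectl config current-context
    kubectl get nodes -o wide

    💡 Should list the 🟢 Management Kubernetes Cluster nodes with Ready status.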
  2. Set 🟢 Management Kubernetes Cluster Information

    Ensure GRAFANA_LOKI_DOMAIN, K8S_INTERNAL_DOMAIN, K8S_INTERNAL_DNS_SERVICE, K8S_INGRESSCLASS_NAME, K8S_STORAGECLASS_NAME, K8S_INGRESS_TLS_CERTIFICATE_SECRET_NAME are defined as per the Shell Variables guide.

  3. Create Kubernetes Secret for 🛡️ TLS Certificate for Grafana Loki in Namespace observability-system.

    The Kubernetes Ingress for Grafana Loki will associate its TLS certificate with a Kubernetes Secret named wildcard-${BASE_DOMAIN}-tls.

    export K8S_INGRESS_TLS_CERTIFICATE_SECRET_NAME="wildcard-${BASE_DOMAIN}-tls"
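
    If the observability-system Namespace does not exist yet (e.g., Grafana Loki is the first component installed into it), create it first:

    Terminal window
    kubectl get namespace observability-system || kubectl create namespace observability-system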

    Create one from the .crt and .key files.

    Terminal window
    kubectl create secret tls $K8S_INGRESS_TLS_CERTIFICATE_SECRET_NAME \
    --cert=/path/to/cert/file --key=/path/to/key/file \
    --namespace observability-system

    💡 Should return secret/wildcard-${BASE_DOMAIN}-tls created message.
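
    To inspect the certificate stored in the Secret, a sketch using openssl (assuming it is installed locally):

    Terminal window
    kubectl get secret $K8S_INGRESS_TLS_CERTIFICATE_SECRET_NAME \
    --namespace observability-system \
    -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -enddate

    💡 The subject should cover *.${BASE_DOMAIN} and notAfter should be in the future.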

Set S3 API-compatible Object Storage Information for Grafana Loki. A credential sanity-check sketch follows the steps below.

  1. Set S3 Endpoint Domain

    export GRAFANA_LOKI_S3_DOMAIN="http://seaweedfs-s3.apps-supporting-services.svc:9000"
  2. Set 🪣S3 Buckets

    Grafana Loki uses two separate buckets.

    • Logs Storage (Chunks) named grafana-loki-chunks

      export GRAFANA_LOKI_S3_CHUNKS_BUCKET_NAME="grafana-loki-chunks"
    • Ruler Component named grafana-loki-ruler

      export GRAFANA_LOKI_S3_RULER_BUCKET_NAME="grafana-loki-ruler"
  3. Set 🌏S3 Region

    export GRAFANA_LOKI_S3_BUCKET_REGION="us-east-1"
  4. Set 🔑Credentials to Access S3 Bucket

    Access Key

    export GRAFANA_LOKI_S3_ACCESS_KEY="grafana-loki"

    Secret Key

    export GRAFANA_LOKI_S3_ACCESS_SECRET="${SEAWEEDFS_HA_S3_GRAFANA_LOKI_PASSWORD}"
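
    To sanity-check the credentials and buckets before installing, one option is to run the AWS CLI from inside the cluster, since the endpoint above is an in-cluster Service DNS name. A sketch, assuming the amazon/aws-cli image can be pulled:

    Terminal window
    kubectl run s3-check --rm -it --restart=Never \
    --image=amazon/aws-cli \
    --env="AWS_ACCESS_KEY_ID=${GRAFANA_LOKI_S3_ACCESS_KEY}" \
    --env="AWS_SECRET_ACCESS_KEY=${GRAFANA_LOKI_S3_ACCESS_SECRET}" \
    -- s3 ls --endpoint-url "${GRAFANA_LOKI_S3_DOMAIN}" --region "${GRAFANA_LOKI_S3_BUCKET_REGION}"

    💡 Both buckets (grafana-loki-chunks and grafana-loki-ruler) should appear in the listing.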
  1. Set Grafana Loki Entrypoint Domain

    export GRAFANA_LOKI_DOMAIN="loki.${BASE_DOMAIN}"
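
    Optionally, confirm the name resolves before continuing (assuming the DNS record for the domain already exists):

    Terminal window
    getent hosts "${GRAFANA_LOKI_DOMAIN}"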
  2. Create Helm Values Configuration: Fundamental Configurations

    Terminal window
    cat <<EOF > $HOME/opstella-installation/helm-values/grafana-loki-full-values.yaml
    global:
      ## -- Definitions to set up the nginx resolver (the nginx gateway that proxies the microservices)
      ## OPSTELLA_CUSTOMIZE/RKE2: Default was 'kube-dns'; change for RKE2
      # -- configures DNS service name
      dnsService: ${K8S_INTERNAL_DNS_SERVICE}
      # -- configures DNS service namespace
      dnsNamespace: "kube-system"
      # -- configures cluster domain ("cluster.local" by default)
      clusterDomain: "${K8S_INTERNAL_DOMAIN}"
    ## OPSTELLA_CUSTOMIZE: Disable Built-in MinIO (it's not intended for Production use!)
    minio:
      enabled: false
    # -- Ingress configuration. Use either this ingress or the gateway, but not both at once.
    # If you enable this, make sure to disable the gateway.
    # You'll need to supply authn configuration for your ingress controller.
    ingress:
      enabled: true
      ingressClassName: ${K8S_INGRESSCLASS_NAME}
      hosts:
        - ${GRAFANA_LOKI_DOMAIN}
      tls:
        - hosts:
            - ${GRAFANA_LOKI_DOMAIN}
          secretName: ${K8S_INGRESS_TLS_CERTIFICATE_SECRET_NAME}
    #
    # Gateway and Ingress
    #
    # By default this chart will deploy an Nginx container to act as a gateway which handles routing of traffic
    # and can also do auth.
    #
    # If you would prefer, you can optionally disable this and use a Kubernetes Ingress for the incoming routing.
    #
    # Configuration for the gateway
    gateway:
      # -- Specifies whether the gateway should be enabled
      enabled: false
    read:
      ## TODO: OPSTELLA_CUSTOMIZE/TEMP: Disable Persistence until we can measure the workload
      persistence:
        # -- Enable volume claims in pod spec
        volumeClaimsEnabled: false
    write:
      ## TODO: OPSTELLA_CUSTOMIZE/TEMP: Disable Persistence until we can measure the workload
      persistence:
        # -- Enable volume claims in pod spec
        volumeClaimsEnabled: false
    backend:
      ## TODO: OPSTELLA_CUSTOMIZE/TEMP: Disable Persistence until we can measure the workload
      persistence:
        # -- Enable volume claims in pod spec
        volumeClaimsEnabled: false
    ######################################################################################################################
    ## OPSTELLA_NOTE: Common Settings across Deployment Modes
    ### --- ###
    ## OPSTELLA_NOTE: chunksCache/resultsCache are shared across SimpleScalable/Distributed
    ## TODO: OPSTELLA_CUSTOMIZE: Disable Chunks Cache for WHY?
    chunksCache:
      # -- Specifies whether memcached based chunks-cache should be enabled
      enabled: false
    ## TODO: OPSTELLA_CUSTOMIZE: Disable Results Cache for WHY?
    resultsCache:
      # -- Specifies whether memcached based results-cache should be enabled
      enabled: false
    ### --- ###
    loki:
      ## OPSTELLA_CUSTOMIZE: Utilize S3(-compatible) Object Storage by Default
      storage:
        type: s3
        bucketNames:
          # Loki requires a bucket for chunks and the ruler. GEL requires a third bucket for the admin API.
          # Please provide these values if you are using object storage.
          chunks: ${GRAFANA_LOKI_S3_CHUNKS_BUCKET_NAME}
          ruler: ${GRAFANA_LOKI_S3_RULER_BUCKET_NAME}
        s3:
          endpoint: ${GRAFANA_LOKI_S3_DOMAIN}
          region: ${GRAFANA_LOKI_S3_BUCKET_REGION}
          accessKeyId: ${GRAFANA_LOKI_S3_ACCESS_KEY}
          secretAccessKey: ${GRAFANA_LOKI_S3_ACCESS_SECRET}
          s3ForcePathStyle: true ## OPSTELLA_CUSTOMIZE
      schemaConfig:
        configs:
          - from: 2024-04-01
            store: tsdb
            object_store: s3
            schema: v13
            index:
              prefix: index_
              period: 24h
      ## OPSTELLA_CUSTOMIZE: SecurityContext
      # -- The SecurityContext for Loki pods
      podSecurityContext:
        fsGroup: 10001
      # -- The SecurityContext for Loki containers
      containerSecurityContext:
        runAsUser: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
        readOnlyRootFilesystem: true
    EOF
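
    💡 The heredoc expands the exported shell variables as it writes the file, so it is worth reviewing the result, in particular the ingress host and the S3 settings:

    Terminal window
    grep -nE 'hosts|secretName|endpoint:|chunks:|ruler:|region:' \
    $HOME/opstella-installation/helm-values/grafana-loki-full-values.yaml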
  3. Create Helm Values Configuration: Loki in Distributed Deployment Mode

    Terminal window
    cat <<EOF > $HOME/opstella-installation/helm-values/grafana-loki-distributed-mode-full-values.yaml
    ## OPSTELLA_CUSTOMIZE: Use the Microservices (Distributed) Deployment as the Default for Opstella
    deploymentMode: Distributed
    ######################################################################################################################
    ingester:
      ## CHART/MODE-Distributed: Grafana Recommended No. of Component Replicas/maxUnavailable
      replicas: 3
      ## OPSTELLA_CUSTOMIZE: Disable zone awareness for On-Premise Environments
      zoneAwareReplication:
        # -- Enable zone awareness.
        enabled: false
      ## OPSTELLA_CUSTOMIZE: When installing LGTM together in the same namespace, the default podAntiAffinity rules from the Helm Charts would conflict with each other;
      ## the Kubernetes Scheduler needs an **additional label** to factor in
      # -- Affinity for ingester pods. Ignored if zoneAwareReplication is enabled.
      # @default -- Hard node anti-affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: ingester
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
    querier:
      ## CHART/MODE-Distributed: Grafana Recommended No. of Component Replicas/maxUnavailable
      replicas: 3
      maxUnavailable: 2
      ## OPSTELLA_CUSTOMIZE: When installing LGTM together in the same namespace, the default podAntiAffinity rules from the Helm Charts would conflict with each other;
      ## the Kubernetes Scheduler needs an **additional label** to factor in
      # -- Affinity for querier pods.
      # @default -- Hard node anti-affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: querier
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
    queryFrontend:
      ## CHART/MODE-Distributed: Grafana Recommended No. of Component Replicas/maxUnavailable
      replicas: 2
      maxUnavailable: 1
      ## OPSTELLA_CUSTOMIZE: When installing LGTM together in the same namespace, the default podAntiAffinity rules from the Helm Charts would conflict with each other;
      ## the Kubernetes Scheduler needs an **additional label** to factor in
      # -- Affinity for query-frontend pods.
      # @default -- Hard node anti-affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: query-frontend
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
    queryScheduler:
      ## CHART/MODE-Distributed: Grafana Recommended No. of Component Replicas/maxUnavailable
      replicas: 2
      ## OPSTELLA_CUSTOMIZE: When installing LGTM together in the same namespace, the default podAntiAffinity rules from the Helm Charts would conflict with each other;
      ## the Kubernetes Scheduler needs an **additional label** to factor in
      # -- Affinity for query-scheduler pods.
      # @default -- Hard node anti-affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: query-scheduler
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
    distributor:
      ## CHART/MODE-Distributed: Grafana Recommended No. of Component Replicas/maxUnavailable
      replicas: 3
      maxUnavailable: 2
      ## OPSTELLA_CUSTOMIZE: When installing LGTM together in the same namespace, the default podAntiAffinity rules from the Helm Charts would conflict with each other;
      ## the Kubernetes Scheduler needs an **additional label** to factor in
      # -- Affinity for distributor pods.
      # @default -- Hard node anti-affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: distributor
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
    compactor:
      ## CHART/MODE-Distributed: Grafana Recommended No. of Component Replicas/maxUnavailable
      replicas: 1
      ## OPSTELLA_CUSTOMIZE: When installing LGTM together in the same namespace, the default podAntiAffinity rules from the Helm Charts would conflict with each other;
      ## the Kubernetes Scheduler needs an **additional label** to factor in
      # -- Affinity for compactor pods.
      # @default -- Hard node anti-affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: compactor
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
    indexGateway:
      ## CHART/MODE-Distributed: Grafana Recommended No. of Component Replicas/maxUnavailable
      replicas: 2
      maxUnavailable: 1
      ## OPSTELLA_CUSTOMIZE: When installing LGTM together in the same namespace, the default podAntiAffinity rules from the Helm Charts would conflict with each other;
      ## the Kubernetes Scheduler needs an **additional label** to factor in
      # -- Affinity for index-gateway pods.
      # @default -- Hard node anti-affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/component: index-gateway
                  app.kubernetes.io/name: loki
              topologyKey: kubernetes.io/hostname
    ### --- ###
    # CHART: Disable Optional Experimental Components
    bloomCompactor:
      replicas: 0
    bloomBuilder:
      replicas: 0
    bloomGateway:
      replicas: 0
    ### --- ###
    ## CHART/MODE-Distributed: Disable the other deployment modes
    backend:
      replicas: 0
    read:
      replicas: 0
    write:
      replicas: 0
    singleBinary:
      replicas: 0
    ### --- ###
    ## CHART/MODE-Distributed: Loki Configuration per Grafana Recommendations
    loki:
      ingester:
        chunk_encoding: snappy
      tracing:
        enabled: true
      querier:
        # Default is 4; if you have enough memory and CPU you can increase it, reduce if OOMing
        max_concurrent: 4
    ### --- ###
    EOF
  1. Add Grafana Helm Repository

    Terminal window
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
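
    Optionally, render the chart locally to validate both values files before installing; nothing is deployed by this dry run:

    Terminal window
    helm template grafana-loki grafana/loki --version 6.28.0 \
    --namespace observability-system \
    -f $HOME/opstella-installation/helm-values/grafana-loki-full-values.yaml \
    -f $HOME/opstella-installation/helm-values/grafana-loki-distributed-mode-full-values.yaml \
    > /dev/null && echo "values files render cleanly"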
  1. Install Grafana Loki

    • Install a Helm Release with a specific Helm Chart version --version 6.28.0 (App Version: 3.4.2)

      Terminal window
      helm install grafana-loki grafana/loki --version 6.28.0 \
      --namespace observability-system \
      -f $HOME/opstella-installation/helm-values/grafana-loki-full-values.yaml \
      -f $HOME/opstella-installation/helm-values/grafana-loki-distributed-mode-full-values.yaml
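
      To wait until all pods are up, a sketch assuming the chart applies the standard app.kubernetes.io/instance=grafana-loki release label:

      Terminal window
      kubectl wait pods --namespace observability-system \
      --selector app.kubernetes.io/instance=grafana-loki \
      --for=condition=Ready --timeout=10m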
  1. Get Pods Status

    Terminal window
    kubectl get pods -n observability-system

    💡 Grafana Loki (Distributed Deployment Mode Components) Pods should be Running

    NAME READY STATUS RESTARTS AGE
    ... (other pods omitted)
    grafana-loki-compactor-0 1/1 Running 0 Xd
    grafana-loki-distributor-XXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-distributor-XXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-distributor-XXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-index-gateway-0 1/1 Running 0 Xd
    grafana-loki-index-gateway-1 1/1 Running 0 Xd
    grafana-loki-ingester-0 1/1 Running 0 Xd
    grafana-loki-ingester-1 1/1 Running 0 Xd
    grafana-loki-ingester-2 1/1 Running 0 Xd
    grafana-loki-querier-XXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-querier-XXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-querier-XXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-query-frontend-XXXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-query-frontend-XXXXXXXX-YYYYY 1/1 Running 0 Xd
    grafana-loki-query-scheduler-XXXXXXXX-YYYYY 1/1 Running 0 Xd
    loki-canary-XXXXX 1/1 Running 0 Xd
    loki-canary-XXXXX 1/1 Running 0 Xd
    loki-canary-XXXXX 1/1 Running 0 Xd
    loki-canary-XXXXX 1/1 Running 0 Xd
    loki-canary-XXXXX 1/1 Running 0 Xd
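
    To smoke-test the deployment end to end, query Loki's readiness endpoint through the Ingress (assuming DNS for the domain points at your ingress controller):

    Terminal window
    curl -fsS "https://${GRAFANA_LOKI_DOMAIN}/ready"

    💡 Should print ready.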
