TheKoguryo's Tech Blog

Version 2023.06.19

Warning

This content has been generated by machine translation. The translations are automated and have not undergone human review or validation.

4.2.1 Installing EFK

Install Elasticsearch + Kibana

  1. Create a namespace for installation.

    kubectl create ns logging
    
  2. Add the repository from which the Helm chart will be installed. This example uses the Bitnami Helm chart repository.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    
  3. Define configuration values

    When installing the Helm chart, refer to the list of configurable parameters and set the values you want to change.

    cat <<EOF > values.yaml
    global:
      kibanaEnabled: true
    kibana:
      ingress:
        enabled: true
        hostname: kibana.ingress.thekoguryo.ml
        annotations:
          kubernetes.io/ingress.class: nginx
          cert-manager.io/cluster-issuer: letsencrypt-staging
        tls: true
    EOF
    
  4. Install the elasticsearch Helm chart

    helm install elasticsearch -f values.yaml bitnami/elasticsearch -n logging
    
  5. Check the installation output

    The chart is installed as shown below; it takes some time for the actual containers to start.

    oke_admin@cloudshell:~ (ap-seoul-1)$ helm install elasticsearch -f values.yaml bitnami/elasticsearch -n logging
    NAME: elasticsearch
    ...
    
      Elasticsearch can be accessed within the cluster on port 9200 at elasticsearch-coordinating-only.logging.svc.cluster.local
    
      To access from outside the cluster execute the following commands:
    
        kubectl port-forward --namespace logging svc/elasticsearch-coordinating-only 9200:9200 &
        curl http://127.0.0.1:9200/
    
  6. Check the internal address and port of the installed Elasticsearch. This is the address that Fluentd will use to send logs.

    • Address: elasticsearch-coordinating-only.logging.svc.cluster.local
    • Port: 9200

Configure Fluentd


  1. Create a Service Account for the Fluentd installation and define the required permissions.
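    The RBAC manifest (fluentd-rbac.yaml) might look like the sketch below. The name fluentd and the kube-system namespace follow the fluentd-kubernetes-daemonset examples and are assumptions; adjust them to your environment.

    cat <<EOF > fluentd-rbac.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: fluentd            # assumed name
      namespace: kube-system   # assumed namespace
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: fluentd
    rules:
    - apiGroups: [""]
      resources: ["pods", "namespaces"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: fluentd
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: fluentd
    subjects:
    - kind: ServiceAccount
      name: fluentd
      namespace: kube-system
    EOF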

  2. Define additional settings in a ConfigMap

    • In the Fluentd container image, all log-parsing settings are defined in .conf files under /fluentd/etc/, and these files can be overridden. Here we leave the other settings unchanged and change only the parser.
    • The default parser works well when Docker Engine is the container runtime, but parsing errors occur with containerd, the default runtime of upstream Kubernetes, and with CRI-O, which OKE uses. For correct parsing, only the parser setting (tail_container_parse.conf) is changed to the cri parser.
    • https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-831801690
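    A minimal sketch of such a ConfigMap: it overrides only tail_container_parse.conf with the cri parser, as described in the linked issue. The ConfigMap name fluentd-config and the kube-system namespace are assumptions.

    cat <<EOF > fluentd-configmap-elasticsearch.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd-config   # assumed name; must match the volume in the DaemonSet
      namespace: kube-system
    data:
      tail_container_parse.conf: |-
        <parse>
          @type cri
        </parse>
    EOF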
  3. Define the Fluentd DaemonSet

    The YAML from the Fluentd documentation is modified slightly to use the ConfigMap defined above.
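    For example, the relevant part of the DaemonSet spec might look like the excerpt below. This is a sketch: the image tag and volume name are illustrative, FLUENT_ELASTICSEARCH_HOST/PORT are the environment variables read by the fluentd-kubernetes-daemonset image, and the host value is the internal Elasticsearch address checked earlier.

        containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch  # illustrative tag
          env:
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-coordinating-only.logging.svc.cluster.local"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          volumeMounts:
          # mount only the parser config over the image's default file
          - name: parser-config               # assumed volume name
            mountPath: /fluentd/etc/tail_container_parse.conf
            subPath: tail_container_parse.conf
        volumes:
        - name: parser-config
          configMap:
            name: fluentd-config              # assumed name of the ConfigMap from step 2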

  4. Install Fluentd

    kubectl apply -f fluentd-rbac.yaml
    kubectl apply -f fluentd-configmap-elasticsearch.yaml
    kubectl apply -f fluentd-daemonset-elasticsearch.yaml
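    After applying the manifests, you can check that the Fluentd DaemonSet Pods are running on every node. This assumes the DaemonSet was created in the kube-system namespace with the label k8s-app=fluentd-logging used in the fluentd-kubernetes-daemonset examples.

    kubectl get daemonset,pods -n kube-system -l k8s-app=fluentd-logging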
    

Kibana settings

  1. Access the installed Kibana with a web browser. Connect to the hostname specified in the Ingress.

  2. On the Welcome page, click Add Data to go home.

  3. Click Analytics > Discover in the upper left navigation menu.


  4. Click Create index pattern.

  5. Create an index pattern.

    Logs sent from Fluentd are stored in indices whose names begin with logstash-.

    • Name: logstash-*
    • Timestamp field: @timestamp


  6. You can see the result of adding the index pattern.


  7. Click Analytics > Discover in the upper left navigation menu.

  8. You can check the collected logs through the created index pattern.

    • To check the logs of the test app, click Add filter and specify namespace_name=default.


  9. Access the test app.

  10. Check the logs

    You can see the logs of the test app in Kibana.


  11. This has been an example of collecting logs on OKE with EFK. For more information about EFK, refer to the product and community websites.



This article was written in my personal time as an individual. It may contain errors, and the opinions expressed are my own.

Last updated on 10 Dec 2021