Log Query

The logs of applications and systems help you understand what is happening inside your cluster and workloads, and they are particularly useful for debugging problems and monitoring cluster activity. KubeSphere provides a powerful and easy-to-use logging system that offers tenant-based log collection, query and management. A tenant-based logging system is more secure than a shared Kibana instance, since each tenant can only view their own logs. Moreover, the KubeSphere logging system filters out a great deal of redundant information.

Logging System Architecture

The KubeSphere logging system is deployed through the Fluent Bit operator, which deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system. Fluent Bit collects logs from all Pods and forwards them to Elasticsearch by default. The cluster admin can also configure other log receivers, such as Kafka or Fluentd.

  • Fluent Bit is deployed as a DaemonSet on each node; the directory /var/log/containers on the host is mapped into the Fluent Bit container. The Input plugin tails the mapped log files, and the Output plugin forwards the collected logs to Elasticsearch, Kafka, Fluentd, etc., according to the configuration.
  • Elasticsearch is deployed as a StatefulSet in the cluster. The Output plugin creates the corresponding index in Elasticsearch (one index per day by default), along with a mapping in the specified format for Kubernetes logs.
  • Elasticsearch Curator performs scheduled maintenance operations and trims logs by time. It is deployed as a CronJob that runs periodically and deletes outdated logs, i.e. deletes the expired indices. The retention period defaults to the last seven days; you can modify it to fit your needs.
  • The KubeSphere logging console provides log query, analysis and statistics capabilities for users.
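The collection flow above can be sketched as a minimal Fluent Bit configuration. This is an illustrative sketch, not the configuration the operator actually generates: the host, parser and index prefix below are assumptions, and in a real cluster the operator renders the equivalent settings from its own custom resources.

```
[INPUT]
    Name             tail
    Path             /var/log/containers/*.log
    Parser           docker
    Tag              kube.*

[OUTPUT]
    Name             es
    Match            kube.*
    Host             elasticsearch-logging-data.kubesphere-logging-system.svc
    Port             9200
    Logstash_Format  On
    Logstash_Prefix  ks-logstash-log
```

With `Logstash_Format On`, the `es` output writes to a date-suffixed index (e.g. `ks-logstash-log-2020.01.01`), which corresponds to the one-index-per-day behavior described above.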

Log Query

KubeSphere supports tenant-isolated log query. Log in to KubeSphere with the admin account and choose Toolbox → Log Query.

Log Query

As shown in the pop-up window, you can see the trend of log volume. The logging console supports queries filtered by the following dimensions:

  • Keyword
  • Project
  • Workload
  • Pod
  • Container
  • Time range

For example, you can use "Error", "Fail", "Fatal", "Exception" or "Warning" as keywords to query exception logs. The query rules support combined keyword queries, as well as exact and fuzzy queries.

Fuzzy query supports case-insensitive fuzzy matching, and can retrieve a full term by the first half of a word or phrase due to the Elasticsearch segmentation rules. For example, you can retrieve logs containing node_cpu_total by searching for the keyword node_cpu, but not for the keyword cpu.

Log query

It also supports customizing the time range of the query. KubeSphere stores logs for the last seven days by default.

Note: You can modify the retention period in the ConfigMap elasticsearch-logging-curator.
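That ConfigMap holds a standard Elasticsearch Curator action file. A minimal sketch of the relevant section might look like the following; the index prefix and timestring here are illustrative assumptions, and unit_count is the value to change if you want to keep more or fewer days of indices:

```yaml
actions:
  1:
    action: delete_indices
    description: Delete log indices older than the retention period
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: ks-logstash-log-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7
```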

Range of time to query

How to Query

For example, let's query the logs containing the keyword error in the kubesphere-system project within the last hour, as shown in the following screenshot:

How to query log

It returns 74 rows of results with the corresponding time, project, Pod and log message.
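The same kind of query can also be issued programmatically through the KubeSphere API rather than the console. The endpoint path and parameter names in this sketch are assumptions based on the logging.kubesphere.io v1alpha2 API group; consult the "How to Access KubeSphere API" chapter to confirm them before relying on this:

```python
# Sketch: build a log-query URL for the (assumed) cluster-level logging endpoint.
# Path and parameter names are assumptions -- verify against the API Guide.
from urllib.parse import urlencode

def build_log_query_url(base, keyword, namespace, start_time, end_time):
    """Compose the query URL; start_time/end_time are epoch milliseconds."""
    params = {
        "operation": "query",
        "log_query": keyword,      # keyword filter, e.g. "error"
        "namespaces": namespace,   # project (namespace) filter
        "start_time": start_time,
        "end_time": end_time,
    }
    return "{}/kapis/logging.kubesphere.io/v1alpha2/cluster?{}".format(
        base, urlencode(params))

url = build_log_query_url("http://127.0.0.1:9090", "error",
                          "kubesphere-system", 1577808000000, 1577811600000)
print(url)
```

Send the resulting URL with your usual authenticated HTTP client; the response contains the matching log records and their totals.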

Click any one of the results in the list to drill into its detail page and inspect the logs from that Pod, including the complete context in the right-hand section. This makes it convenient for developers to debug and analyze.

Note: The detail page supports dynamic refresh at 5s, 10s or 15s intervals, and allows you to export the logs locally for further analysis.

Log detail page

As you can see from the left panel, you can switch to inspect another Pod and its containers within the same project from the dropdown list. In this way, you can determine whether any abnormal Pods are affecting other Pods.

Logs page

Drill into Detail Page

If a log looks abnormal, you can drill into the Pod detail page or container detail page to inspect the container logs, resource monitoring graphs and events in depth.

Drill into detail page

The container detail page is shown below. It also allows you to open a terminal to debug the container directly.

Container detail page