
As a cluster administrator, you will want to aggregate all the logs from your OpenShift Container Platform cluster, such as container logs, node system logs, application container logs, and so forth. In this article we will schedule the cluster logging pods and other resources necessary to send logs, events, and cluster metrics to Splunk.

We will be using Splunk Connect for Kubernetes which provides a way to import and search your OpenShift or Kubernetes logging, object, and metrics data in Splunk. Splunk Connect for Kubernetes utilizes and supports multiple CNCF components in the development of these tools to get data into Splunk.

Setup Requirements

For this setup you need the following items.

  • Working OpenShift Cluster with oc command line tool configured. Administrative access is required.
  • Splunk Enterprise 7.0 or later
  • Helm installed in your workstation
  • At least two Splunk Indexes
  • An HEC token used by the HTTP Event Collector to authenticate the event data

There will be three types of deployments on OpenShift for this purpose.

  1. Deployment for collecting changes in OpenShift objects.
  2. One DaemonSet on each OpenShift node for metrics collection.
  3. One DaemonSet on each OpenShift node for logs collection.

The actual implementation will be as shown in the diagram below.
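Once the chart is installed (Step 4 below), you can list the resulting workloads with `oc`. The namespace shown is the one created later in this guide:

```shell
# One deployment for objects, plus one daemonset each for logs and metrics
oc get deployments,daemonsets -n splunk-hec-logging
```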

Step 1: Create Splunk Indexes

You will need at least two indexes for this deployment: one for logs and events, and another for metrics.

Login to Splunk as Admin user:

Create the events and logs index. The input data type should be Events.

For the metrics index, the input data type should be Metrics.

Confirm the indexes are available.
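If you prefer the command line, the same indexes can be created through Splunk's REST API. This is a sketch assuming the default management port 8089, admin credentials, and example index names (`ocp_events`, `ocp_metrics`); adjust to your environment:

```shell
# Create an events index via the Splunk REST API
curl -k -u admin:changeme https://splunk.example.com:8089/services/data/indexes \
  -d name=ocp_events -d datatype=event

# Create a metrics index (datatype=metric)
curl -k -u admin:changeme https://splunk.example.com:8089/services/data/indexes \
  -d name=ocp_metrics -d datatype=metric
```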

Step 2: Create Splunk HEC Token

The HTTP Event Collector (HEC) lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. As HEC uses a token-based authentication model, we need to generate a new token.

This is done under Data Inputs configuration section.

Select “HTTP Event Collector” then fill in the name and click next.

In the next page permit the token to write to the two indexes we created.

Review and Submit the settings.
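Before moving on, you can verify the token works by posting a test event to the HEC endpoint. This assumes the default HEC port 8088 and the example index name from Step 1; substitute your own host and token:

```shell
# Send a test event to verify the HEC token
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "HEC smoke test", "index": "ocp_events"}'
# A working token should return a JSON response with "text":"Success"
```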

Step 3: Install Helm

If you don’t already have helm installed on your workstation or bastion server, check out the guide in the link below.

Install and Use Helm 3 on Kubernetes Cluster

You can validate the installation by checking the available version of helm.

$ helm version
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}

Step 4: Deploy Splunk Connect for Kubernetes

Create a namespace (project) for Splunk Connect.

$ oc new-project splunk-hec-logging

Upon creation, the project becomes your current working project, but you can switch to it at any time:

$ oc project splunk-hec-logging

Create a values YAML file for the installation.

$ vim ocp-splunk-hec-values.yaml

Mine has been modified to look similar to the file below. Replace the placeholders with your own values.

global:
  logLevel: info
  journalLogPath: /run/log/journal
  splunk:
    hec:
      host: <splunk-ip> # Set Splunk IP address
      port: <splunk-hec-port> # Set Splunk HEC port
      protocol: http
      token: <hec-token> # Hec token created
      insecureSSL: true
      indexName: <indexname> # default index if others not set
  kubernetes:
    clusterName: "<clustername>"
    openshift: true
splunk-kubernetes-metrics:
  enabled: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <metrics-indexname>
  kubernetes:
    openshift: true
splunk-kubernetes-logging:
  enabled: true
  logLevel: debug
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <logging-indexname>
  containers:
    logFormatType: cri
  logs:
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver/audit.log
splunk-kubernetes-objects:
  enabled: true
  kubernetes:
    openshift: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <objects-indexname>

Fill in the values accordingly, then initiate the deployment. Check the project's releases page for the latest release URL before installing.
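If you prefer to script the substitution, a minimal sketch is shown below. The values (host, token, index, cluster name) are hypothetical placeholders; it writes only the `global` section, and the per-component sections follow the same pattern:

```shell
#!/usr/bin/env bash
# Hypothetical values -- replace with your own Splunk details
SPLUNK_HOST=splunk.example.com
HEC_PORT=8088
HEC_TOKEN=00000000-0000-0000-0000-000000000000
CLUSTER_NAME=ocp4
DEFAULT_INDEX=ocp_events

# Write the global section of the values file
cat > ocp-splunk-hec-values.yaml <<EOF
global:
  logLevel: info
  journalLogPath: /run/log/journal
  splunk:
    hec:
      host: ${SPLUNK_HOST}
      port: ${HEC_PORT}
      protocol: http
      token: ${HEC_TOKEN}
      insecureSSL: true
      indexName: ${DEFAULT_INDEX}
  kubernetes:
    clusterName: "${CLUSTER_NAME}"
    openshift: true
EOF

echo "Wrote ocp-splunk-hec-values.yaml"
```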

helm install splunk-kubernetes-logging -f ocp-splunk-hec-values.yaml https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/1.4.3/splunk-connect-for-kubernetes-1.4.3.tgz

Deployment output:

NAME: splunk-kubernetes-logging
LAST DEPLOYED: Thu Oct 22 22:22:51 2020
NAMESPACE: splunk-logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
███████╗██████╗ ██╗     ██╗   ██╗███╗   ██╗██╗  ██╗██╗
██╔════╝██╔══██╗██║     ██║   ██║████╗  ██║██║ ██╔╝╚██╗
███████╗██████╔╝██║     ██║   ██║██╔██╗ ██║█████╔╝  ╚██╗
╚════██║██╔═══╝ ██║     ██║   ██║██║╚██╗██║██╔═██╗  ██╔╝
███████║██║     ███████╗╚██████╔╝██║ ╚████║██║  ██╗██╔╝
╚══════╝╚═╝     ╚══════╝ ╚═════╝ ╚═╝  ╚═══╝╚═╝  ╚═╝╚═╝

Listen to your data.

Splunk Connect for Kubernetes is spinning up in your cluster.
After a few minutes, you should see data being indexed in your Splunk.

If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com

Check running pods:

$ oc get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
splunk-kubernetes-logging-splunk-kubernetes-metrics-4bvkp         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-4skrm         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-55f8t         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-7xj2n         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-8r2vj         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-agg-5bppqqn   1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-f8psk         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-fp88w         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-s45wx         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-xtq5g         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-objects-b4f8f4m67vg   1/1     Running   0          48s

Step 5: Grant Privileged SCC to Service Accounts

for sa in $(oc get sa --no-headers | grep splunk | awk '{ print $1 }'); do
  oc adm policy add-scc-to-user privileged -z "$sa"
done
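You can confirm the grants took effect by inspecting the privileged SCC. If any pods were created before the SCC grant, restarting them lets them pick up the new policy:

```shell
# The splunk service accounts should appear in the SCC's user list
oc get scc privileged -o jsonpath='{.users}'

# Restart the pods in the project so they run with the granted SCC
oc delete pods --all -n splunk-hec-logging
```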

Log in to Splunk and check whether logs, events, and metrics are being sent.
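In the Splunk search UI, a couple of quick searches will confirm data is arriving. The index names below are illustrative and should match the ones you created in Step 1:

```
index="ocp_events" | head 10

| mcatalog values(metric_name) WHERE index="ocp_metrics"
```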

This might not be the Red Hat-recommended way of storing OpenShift events and logs. Refer to the OpenShift documentation for more details on cluster logging.

More articles on OpenShift:

Grant Users Access to Project/Namespace in OpenShift

Configure Chrony NTP Service on OpenShift 4.x / OKD 4.x

How To Install Istio Service Mesh on OpenShift 4.x
