Basics of the KEN Stack — a Log Analytics Solution

Andrew James
3 min read · Mar 13, 2021

What’s Log Analytics? Servers, applications, websites, and connected devices generate discrete, time-stamped records of events called logs. Processing and analyzing these logs to gain actionable insights is called log analytics. Early log analytics solutions were designed for IT operational intelligence use cases, such as root cause analysis and infrastructure monitoring. Over time, log analytics solutions have incorporated additional data sources, machine learning, and other analytical techniques to enable additional use cases in application performance management (APM), security intelligence and event management (SIEM), and business analytics.

Here I am going to discuss one of the well-known free solutions: the KEN Stack. KEN stands for Kibana (a visualization tool), Elasticsearch (a search engine), and nxlog (a log sourcing tool).

Kibana UI

More information, more power. The more information you source, the more useful insight you can derive from it. You can even predict whether one of the servers in your estate is going to crash, which could save a lot of money if your business depends on the internet. How cool is that? Yes, provided you have enough historic data.

Let’s quickly get down to what each of these tools does:

nxlog:

It works as an agent/forwarder, reading operating system logs from the source machines and forwarding them to the (data center) collector. These bad boys have a light footprint in terms of CPU/memory on the source machines. The components of nxlog are as follows (a minimal configuration sketch follows the list):

Forwarder (Agent): Agents installed on source machines read log data and forward it to a central log repository (the collector). An agent can also listen on network ports and accept scripted input, forwarding it on to the log collector.

Input module: Specifies how raw events are read and parsed from an input source. Examples: im_file, im_exec, im_ssl, im_tcp, im_udp, im_uds.

Output module: Specifies how events are formatted and delivered to a destination. Examples: om_file, om_exec, om_ssl, om_tcp, om_udp, om_uds.

Collector: The log collector aggregates the log data received from the forwarders and passes it on to the indexer (the Elasticsearch cluster).
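
To make the module directives concrete, here is a minimal nxlog forwarder sketch. It is illustrative only: the file path, collector hostname, and port are placeholders, not values from this article.

```
# Minimal nxlog.conf sketch: tail a log file and ship it to a collector over TCP
<Input system_logs>
    Module  im_file
    File    "/var/log/messages"        # placeholder path; adjust for your OS
</Input>

<Output to_collector>
    Module  om_tcp
    Host    collector.example.com      # hypothetical collector address
    Port    514                        # hypothetical port
</Output>

<Route forward>
    Path    system_logs => to_collector
</Route>
```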

A detailed explanation can be found in the nxlog documentation.

Elasticsearch:

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze large volumes of data quickly and in near real time. It is generally used as the underlying engine/technology for applications with complex search features and requirements. An Elasticsearch cluster consists of multiple Elasticsearch processes (nodes) running on different machines. The cluster comprises:

  • Data nodes: responsible for storing and retrieving data.
  • Master nodes: responsible for managing the state of the cluster.
  • Client nodes: optional nodes responsible for handling client communication.
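
As a rough illustration of how these roles are assigned, each node can declare its role in its elasticsearch.yml. The sketch below assumes Elasticsearch 7.9 or later, where roles are set with node.roles (older releases use node.master/node.data flags); the cluster and node names are placeholders.

```
# elasticsearch.yml sketch for a dedicated data node (illustrative values)
cluster.name: logs-cluster     # hypothetical cluster name
node.name: data-node-1         # hypothetical node name
node.roles: [ data ]           # data node; use [ master ] for a master-eligible node,
                               # or [ ] for a coordinating-only ("client") node
```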

The cluster provides real-time high availability and load distribution for read and write operations: it maintains multiple copies of the data and offers multiple I/O points, respectively.
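
As a small sketch of those read and write operations, here is how a log event might be written to and searched back from the cluster using the official Elasticsearch Python client. The index name, replica count, and document fields are illustrative, and the keyword arguments assume the 8.x client.

```python
# Sketch: write a log event to Elasticsearch and search it back (8.x Python client)
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a locally reachable node

# One primary shard plus one replica, so a second data node can serve reads
es.indices.create(
    index="app-logs",
    settings={"number_of_shards": 1, "number_of_replicas": 1},
)

# Write: index a time-stamped log event; refresh so it is searchable immediately
es.index(
    index="app-logs",
    document={
        "@timestamp": "2021-03-13T10:15:00Z",
        "host": "web01",
        "level": "ERROR",
        "message": "disk usage above 90 percent",
    },
    refresh=True,
)

# Read: full-text search for matching events
resp = es.search(index="app-logs", query={"match": {"message": "disk"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```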

Kibana:

Kibana is a window into the Elastic Stack. It provides browser-based reporting and visualization, enabling visual exploration and real-time analysis of your data in Elasticsearch. It is usually installed on one of the Elasticsearch nodes.
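
As a rough sketch of that wiring, Kibana mainly needs to be told where the cluster is, typically via kibana.yml. The values below are placeholders and assume Kibana 7.x or later, where the setting is elasticsearch.hosts.

```
# kibana.yml sketch (illustrative values)
server.port: 5601                               # default Kibana port
server.host: "0.0.0.0"                          # listen on all interfaces
elasticsearch.hosts: ["http://localhost:9200"]  # the co-located Elasticsearch node
```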

I will cover the deployment of each tool in my upcoming write-ups. Let me know what you think so far in the comments section.

Cheers!

Source: AWS Web, Elastic.co, nxlog.co
