Analyzing Syslogs using various tools and techniques

In this page

  • Tools to extract information from syslog daemons
      • Grep tool
      • Regular Expressions
      • Surround Search
      • Tail
      • Cut
      • Filtering on errors with awk
      • Third-party log management software

Tools to extract information from syslog daemons

Syslogs contain vital information that can help in analyzing the overall health of your network devices. However, understanding the syslogs and extracting information from them could be a little tricky. You can use the following tools to extract necessary information from your syslog daemons.

Grep tool

Grep is a simple search tool that comes built into virtually all Linux distributions and is also available for Windows and macOS. You can run simple text queries at the Command Line Interface (CLI) to extract the log entries you need.

Syntax:

$ grep '<text to be searched>' <source of log file>

This search mechanism matches only the exact text you supply. The drawback, then, is that you have to know exactly what log information you are looking for.
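As a quick illustration, here is a plain grep search against a small sample file (the file name and log lines are made up for this demo):

```shell
# Create a small sample log file (hypothetical entries, for demonstration only)
cat > sample_auth.log <<'EOF'
Mar 12 10:01:22 host sshd[1201]: Accepted password for alice
Mar 12 10:02:45 host sshd[1305]: Failed password for bob
Mar 12 10:03:10 host cron[1400]: session opened for root
EOF

# Plain-text search: print every line containing the given string
grep 'Failed password' sample_auth.log
```

This prints only the line containing the exact string "Failed password".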

Regular Expressions

Regular Expressions (regex) help you to construct search queries with a combination of strings rather than one single element to extract the desired log data.

Say you search for the event ID "4325" in your auth.log file. This search can return logs with event ID 4325, port number 4325, a timestamp containing 4325, and other irrelevant fields. Using regex resolves this ambiguity: you can build a regex search query that returns only log data where 4325 is preceded by "event ID".

In this case, the regex query looks like this:

$ grep -P '(?<=event ID )4325' /var/log/auth.log

Syntax:

$ grep -P '<regex and text to be searched>' <source of log file>
Note: -P enables Perl-compatible regular expression (PCRE) syntax.

Building a good regex search query can be difficult, but it helps you identify and extract exactly the log data you need.
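The lookbehind query above can be tried out like this (sample file and lines are hypothetical; note that -P requires a grep build with PCRE support, such as GNU grep):

```shell
# Sample lines where "4325" appears in different fields (made up for the demo)
cat > demo.log <<'EOF'
Jan 01 04:32:54 host app: connection on port 4325
Jan 01 04:33:01 host app: event ID 4325 user login
EOF

# PCRE lookbehind: match 4325 only when it directly follows "event ID "
grep -P '(?<=event ID )4325' demo.log
```

Only the second line is returned; the port number and timestamp matches are filtered out.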

Surround Search

This is another command line search option you can use to extract the necessary log data along with the events that were logged before and after the specific log entry. This way you can see the sequence of events surrounding an anomaly and spot any malicious activity taking place in your network.

For example, we can search for "failed login attempts" and obtain a list of 5 events logged before and after a failed login. The query can be phrased as follows:

$ grep -B 5 -A 5 'failed login' /var/log/auth.log
Note:

-B <number> extracts <number> lines before the matched log entry.

-A <number> extracts <number> lines after the matched log entry.

General syntax:

$ grep -B <number> -A <number> '<text to be searched>' <source of log file>
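A small self-contained demo of the surround search (file name and lines are hypothetical):

```shell
# Sample file with one interesting event in the middle (demo data)
cat > events.log <<'EOF'
session opened for alice
connection from 10.0.0.5
failed login for bob
connection closed
session opened for carol
EOF

# Show the matched line plus one line of context before and after it
grep -B 1 -A 1 'failed login' events.log
```

The output is the line before the match, the match itself, and the line after it.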

Tail

Tail is another command line utility that displays the last few entries of a log file, i.e., the latest changes made to it. With the -f option, tail follows the file and prints new entries as they are written, which makes it useful for watching ongoing activity such as a system reboot, the sudden shutdown of a device, or a new installation. It can also be combined with grep and surround search commands to build strong search queries.

Syntax:

$ tail -n <number> <source of log file>
Note: -n <number> denotes the number of lines to be extracted from the bottom of the file.
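For instance (file name and entries below are made up), tail can show just the newest entries, and piping into grep narrows them further:

```shell
# Sample log with four entries (demo data)
printf 'entry1\nentry2\nentry3\nentry4\n' > boot.log

# Show the last 2 lines of the file
tail -n 2 boot.log

# Combine with grep: search only within the most recent lines
tail -n 2 boot.log | grep 'entry4'
```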

Cut

Using the cut command in the CLI, you can parse the fields in log data that use delimiters. You can also combine the cut command with grep and regex to extract a particular field from your log data.

$ cut -d '<delimiter>' -f <number> <source of log file>
Note: -f <number> denotes the field number to be extracted.
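For example, given comma-delimited log lines (hypothetical data), cut can pull out a single field:

```shell
# Comma-delimited sample log (demo data)
printf '2024-01-01,host1,login ok\n2024-01-01,host2,login failed\n' > csv.log

# -d sets the delimiter, -f picks the field number (here: the hostname field)
cut -d ',' -f 2 csv.log
```

This prints the second field of each line, i.e., the hostnames.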

Filtering on errors with awk

The awk command can be used to search for error messages in the log files. To specify the error fields in the logs, you have to add a template that includes pri-text (the PRI part of the message in textual form, with the numerical value given in brackets, for example: "local0.err<133>") in your rsyslog configuration.

Add a template with the syntax <%pri-text%>: %field%, %field%, %field%... to include the priority text and the necessary fields in the logs. Then use the awk command to extract error messages from these logs.

Syntax:

$ awk '/.err>/ {print}' <source of log file>
Note:

{print} can also write the matched log data to a destination file if you append > <destination file> to the end of the command.
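As a minimal sketch (the file name and log lines below are made up, and the exact match pattern depends on how your rsyslog template formats the priority text):

```shell
# Sample lines carrying a pri-text-style prefix (demo data;
# local0.info = PRI 134, local0.err = PRI 131)
cat > pri.log <<'EOF'
local0.info<134>: service started
local0.err<131>: disk write failure
EOF

# Print only the lines whose priority text contains ".err"
awk '/\.err/ {print}' pri.log

# The same filter, redirected into a destination file
awk '/\.err/ {print}' pri.log > errors.log
```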

Third-party log management software

All the above tools and techniques perform only specific functions, such as searching, displaying the latest changes, or viewing error logs. If you need to perform an in-depth analysis of syslog data, you have to combine these techniques, which is time-consuming and complex. Log management solutions resolve this problem: they collect log data in a single central location, perform in-depth analysis, display the results as intuitive reports, alert you about critical events, and even store the logs securely for forensic analysis.

EventLog Analyzer is a log management solution that acts as a syslog daemon and collects logs from all devices across your network. It effectively searches log data using its powerful search engine, which accepts queries in the form of text, phrases, or Boolean operators. It can perform basic searches, such as finding exact matches to the strings given in the queries, as well as advanced searches, such as applying regex to the collected log data. It allows you to write your own search regex with the assistance of logical operators. You can save a search result and even configure alerts for it; EventLog Analyzer generates reports for each event from time to time to help you detect anomalies. You can set up alerts for deviant network activities and get notified in real time via SMS/email in case of an impending attack. Click here to know more.

What's next?

Leverage EventLog Analyzer’s advanced analytics to streamline syslog analysis across your environment.