
Log Management 101: Log Sources to Monitor

Log management software provides the primary diagnostic data for your applications’ development, deployment, and maintenance. However, choosing which log sources to collect and monitor can be a daunting task. The primary concern with monitoring all sources is the high price tag that many SIEM tools charge based on the number of users and sources ingesting logs. At observIQ, we offer unlimited users and sources. You only pay for what you ingest and how long you retain it; if your retention needs are minimal, you can use observIQ for free. Ingesting logs from some sources, such as firewalls, IDS, active directories, and endpoint tools, is fairly straightforward. But some sources critical to your incident response plan can have complex configurations, which may deter you from implementing log management for them. With advanced log management tools such as observIQ, most sources come preconfigured to work with the log agent, making adding sources simple. As a best practice in logging, businesses evaluate and implement logging for sources that:

  • Are required for actionable outcomes such as monitoring and troubleshooting. The suggested best practice is to archive non-actionable logs if they are needed for compliance. This evaluation tones down the noise in your log ingestion and makes for a more need-based log management process. 
  • Maximize the return on investment through increased visibility into application events 
  • Address the existing need for compliance 
  • Cover event logs from areas of the network/infrastructure that are prone to hacks and malicious attacks
  • Eliminate the blind spots in a network

On the web, you will find generic information about log sources and what you can do with data ingested into any SIEM tool. But there is a critical step that every business or individual evaluating a log management tool must take: take stock of all your network and application components. In-house or custom applications developed recently often create logs in a standard format such as JavaScript Object Notation (JSON) or Syslog, but not all logs are saved in a standard format. Logs from servers and firewalls are easily ingested and parsed seamlessly. However, when working with DNS and other physical security platforms, log management is a challenge. Not logging DNS or security platforms can weaken security and block insights into key network components. Many businesses skip the challenging components, fearing the high human effort involved in developing the plug-ins necessary to ingest from a multitude of sources. Multiple surveys of organizations of all sizes estimate that businesses ingest logs from less than half of the sources in their network. Be aware of all aspects of your application and infrastructure that you need to log.

  1. DNS

DNS logging provides highly detailed information about the DNS data sent and received by the DNS server. DNS attacks include DNS hijacking, DNS tunneling, Denial-of-Service (DoS) attacks, command and control, and cache poisoning. DNS logs help identify information related to these attacks so that the source can be traced. They include detailed data on records requested, client IP, request flags, zone transfers, query logs, rate timing, and DNS signing.

DNS logs are a wealth of data on user site access, malicious site requests, DNS attacks, Denial-of-Service (DoS) events, and more. Based on DNS logs, businesses can make critical network security decisions. DNS is the basic protocol that enables applications and web browsers to operate using domain names. Although DNS was not originally intended for general-purpose tunneling, many utilities have been developed to tunnel over DNS. Since the data transfer capabilities of DNS are unintentional, DNS is often an ignored space in log ingestion. If the tunneling capabilities of DNS are not monitored, they can pose a severe risk to a network. The two core components that need monitoring in DNS are payload and traffic analysis.

Logging all components of DNS can become very noisy and make analysis impossible. DNS log data is often voluminous and in a multi-line format. Listed below are a few common scenarios where DNS log ingestion is mandatory and helpful (a small detection sketch follows the list):

  • When DNS queries are made over TCP instead of UDP
  • When there are requests from internal RFC 1918 IP addresses that are not associated with the business’s domain
  • When a zone transfer occurs to an unauthorized or unknown DNS server
  • When a record request mentions an unconventional file type and has many meaningless characters embedded in it
  • When there are requests to non-standard ports at hours outside standard usage times
  • When there are country-code domain extensions that are uncommon for the business’s network, such as .ru (Russia) or .cn (China). A very common trigger is traffic to a country in which the business does not operate.
  • When there are increased lookup requests and failures
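
To make the traffic-analysis side concrete, here is a minimal sketch that flags queries matching a few of the scenarios above. The record layout (protocol plus query name), the TLD list, and the label-length threshold are illustrative assumptions, not any DNS server’s actual log schema.

```python
# Minimal sketch: flag DNS queries that match a few of the scenarios above.
# The (protocol, query_name) record layout is hypothetical, not the schema
# of any particular DNS server's logs.
UNCOMMON_TLDS = {"ru", "cn"}   # extend based on where your business operates
MAX_LABEL_LEN = 40             # unusually long labels often carry tunneling payloads

def suspicious_reasons(protocol, query_name):
    """Return a list of reasons this query looks suspicious."""
    reasons = []
    if protocol.upper() == "TCP":
        reasons.append("query over TCP instead of UDP")
    labels = query_name.rstrip(".").split(".")
    if labels and labels[-1].lower() in UNCOMMON_TLDS:
        reasons.append(f"uncommon country-code TLD .{labels[-1]}")
    if any(len(label) > MAX_LABEL_LEN for label in labels):
        reasons.append("very long label (possible tunneling payload)")
    return reasons

queries = [
    ("UDP", "www.example.com."),
    ("TCP", "aGVsbG8tZXhmaWwtcGF5bG9hZC1jaHVuay0wMDAxLWxvbmctbGFiZWw.badsite.ru."),
]
for protocol, name in queries:
    for reason in suspicious_reasons(protocol, name):
        print(f"{name}: {reason}")
```
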
  2. Packet Capture Logs

Packet Capture (PCAP) is an API used to capture packet data from network traffic across the OSI layers. It is important to note that PCAP does not log by itself; instead, a network analyzer collects and records the packet data. The events logged in a .pcap file include:

  • Malware detections
  • Bandwidth usage
  • DHCP server issues
  • DNS events

During network threat analysis, the packet file data helps detect network intrusions and other suspicious packet transfers. The packet data helps drill down to the root cause of a network issue. For example, if you see a response failure from an application call, a study of the packet rate and packet information can reveal whether the issue is with the request or the response. PCAP logs are in a simple format, making ingestion and parsing simple for most log agents.
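
To show how approachable .pcap data is, here is a minimal sketch that counts packets per second in a capture file using the scapy library; the library choice and the capture.pcap filename are assumptions, and any pcap reader would do.

```python
from collections import Counter

from scapy.all import rdpcap  # pip install scapy

# Load a capture file; "capture.pcap" is a placeholder filename.
packets = rdpcap("capture.pcap")

# Bucket packets by whole second to see the request/response rate over time.
per_second = Counter(int(pkt.time) for pkt in packets)

for second, count in sorted(per_second.items()):
    print(f"{second}: {count} packets")
```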

  3. Cloud Platform Logs

The majority of businesses host their applications on cloud platforms such as Google Cloud Platform (GCP), Amazon Web Services (AWS), etc. It is essential that the log management software ingest the logs from these cloud platforms. Many businesses shy away from ingesting cloud platform logs owing to discrepancies between the log formats and the parsing agent. Companies like observIQ have ready-to-use plug-ins for these platforms, making log ingestion from them possible in a matter of minutes. Our log agent, Stanza, can efficiently manage the volume of events in the cloud platforms, an area where most log parsers fail. However, it is recommended to limit ingestion to actionable events to keep the noise in the cloud platform events at a manageable level. Some of the most critical cloud platform events to consider monitoring are listed below, with a small filtering sketch after the list:

  • Changes in user permissions and roles
  • Any changes made to the instances, such as creation/deletion in the cloud infrastructure
  • Requests to buckets containing sensitive data by users who are accessing remotely 
  • Unauthorized account or file access
  • Communication to external sources 
  • Alerts for the transformation of personally identifiable information
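
As a rough example of limiting ingestion to actionable events, the sketch below scans an export of GCP Cloud Audit Log entries (newline-delimited JSON) for permission and role changes. The method-name suffixes and the audit_export.json filename are assumptions for illustration; check the field names against your own export before relying on them.

```python
import json

# Method names that typically indicate permission or role changes in GCP audit
# logs; treat this list as an assumption and extend it for the services you use.
IAM_CHANGE_SUFFIXES = ("SetIamPolicy", "CreateRole", "UpdateRole", "DeleteRole")

def iam_changes(path):
    """Yield (principal, method) for audit entries that change permissions."""
    with open(path) as handle:
        for line in handle:
            entry = json.loads(line)
            payload = entry.get("protoPayload", {})
            method = payload.get("methodName", "")
            if method.endswith(IAM_CHANGE_SUFFIXES):
                principal = payload.get("authenticationInfo", {}).get(
                    "principalEmail", "unknown")
                yield principal, method

# "audit_export.json" is a placeholder for a newline-delimited JSON export.
for principal, method in iam_changes("audit_export.json"):
    print(f"{principal} called {method}")
```
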
  4. Windows Event Logs

Event logs from Windows are critical for securing the network, troubleshooting issues, and retaining events for compliance purposes. Windows events such as network connections, errors, network traffic, system behavior, and unauthorized access are logged. A Windows system can produce a large volume of log data on a daily basis. While managing such volume is difficult, log management software makes handling Windows event logs easier. In a Windows environment, three primary categories of event logs are tracked: system logs, application logs, and security logs. System Monitor (Sysmon), which is a driver, also writes to the Windows event log; it monitors and logs events such as file creation and modification, driver installations and deletions, process creation, and raw disk access. Centralizing the Windows events onto a Windows server from which the SIEM/log agent can read them is the recommended practice. The log management software chosen should be able to aggregate all Windows logs into a common format, provide an alerting mechanism for network anomalies, and offer visualization capabilities. observIQ offers simplified logging for Windows events; check our user documentation for a simple plug-in configuration for Windows logs.
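
For a quick look at the Security log before wiring up an agent, the sketch below shells out to Windows’ built-in wevtutil tool to pull recent failed-logon events (Security event ID 4625); which event IDs count as actionable is an assumption you should tailor to your environment.

```python
import subprocess

# Query the Security log for the 20 most recent failed logons (event ID 4625).
# wevtutil ships with Windows; run this on the host whose events you want to inspect.
command = [
    "wevtutil", "qe", "Security",
    "/q:*[System[(EventID=4625)]]",   # XPath filter for failed logons
    "/c:20",                          # limit to 20 events
    "/rd:true",                       # newest first
    "/f:text",                        # human-readable output
]
result = subprocess.run(command, capture_output=True, text=True, check=True)
print(result.stdout)
```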

  5. Database Logs

Companies have apprehensions about logging database events because they worry that logging could adversely affect the server’s performance. Capturing events from databases and tables can be challenging considering the large number of databases in applications. If there are databases created by third-party service providers, gaining access to log events in those databases can be difficult due to access restrictions. To gain visibility into databases, having a good Database Activity Monitoring (DAM) system in place helps. Since the DAM works like a firewall in its restrictive functionality, logging database events via the DAM makes for a more compliance-oriented monitoring practice. A stored procedure can be used to capture specific events and log accurate information about each event along with the record ID. Stored procedures can track administrator access, malicious and unauthorized access attempts, authorization failures, start and end events for servers, all schema-related operations, requests to modify a large set of records in the database, and more.

  6. Linux Logs

Log all Linux events to get a timeline of system, application, and operating system events. Errors and issues on the desktop are logged in different locations, and the location where the logs are written is configurable in most cases. Linux logs are written in plain text, and /var/log is the directory to which they are saved. In Linux, nearly every event has the potential to be logged; anything from package manager events to Apache servers can be logged. All logs except authorization-related logs are written to syslog in the /var/log directory. The most critical Linux logs are application logs, event logs, service logs, and system logs.
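
As a small illustration of how approachable these plain-text logs are, the sketch below tallies syslog messages per process from /var/log/syslog; the path and the traditional "timestamp host process[pid]: message" layout are assumptions that vary by distribution (RHEL-style systems use /var/log/messages, for example).

```python
import re
from collections import Counter

# Traditional syslog layout: "May  1 12:00:00 host process[pid]: message".
# Adjust the path and pattern for your distribution.
LINE = re.compile(r"^\w{3}\s+\d+\s+[\d:]+\s+\S+\s+([^\[\]:]+)(?:\[\d+\])?:")

counts = Counter()
with open("/var/log/syslog", errors="replace") as handle:
    for line in handle:
        match = LINE.match(line)
        if match:
            counts[match.group(1)] += 1

for process, count in counts.most_common(10):
    print(f"{process}: {count}")
```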

  7. Infrastructure Device Logs

Infrastructure devices are the lifeline of the information transport architecture. Monitoring the routers, switches, and other devices in a network gives insight into the health and functioning of the network. Enriching the log data from infrastructure devices can give increased visibility into machine and user interaction. Logs from infrastructure are vital submissions in the compliance package. Infrastructure logs from highly distributed environments do not have a straightforward alerting mechanism in place; when there are network failures, only through a detailed log analysis can DevOps triage and fix the issue. Sophisticated monitoring tools such as observIQ have alerting mechanisms based on thresholds and metric indicators. Logs from different infrastructure devices are output in various formats; in most cases they are unstructured data, most suitable for batch processing through a tool instead of manual analysis. Most application developers are interested in logging configuration changes to infrastructure devices. Knowing the origin of configuration changes helps address any issues that may arise from misconfigurations.
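
Most infrastructure devices can forward their logs over syslog, so a common first step is standing up a collector for them to point at. The sketch below is a bare-bones UDP syslog listener for illustration only; a production agent such as Stanza provides this as a configurable input, and the bind address and port here are assumptions.

```python
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    """Print each syslog datagram along with the sending device's address."""

    def handle(self):
        data = self.request[0].strip()
        print(f"{self.client_address[0]}: {data.decode(errors='replace')}")

if __name__ == "__main__":
    # UDP/514 is the traditional syslog port; binding below 1024 needs privileges.
    with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
        server.serve_forever()
```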

  8. Containerized Application Logs

Although containerized applications are a relatively new form of application management, they are becoming increasingly business critical. This happens to be the most in-vogue source of log ingestion for businesses of all sizes. To effectively log a containerized application, it is necessary to collate the logs from the application, the host OS, and Docker. We have written extensively about logging Kubernetes applications in this space, and we will continue to document more use cases in the future.
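
On a single Docker host, a quick way to see what the runtime exposes is the docker logs command, which the sketch below wraps; the container name web is a placeholder, and in a Kubernetes cluster an agent would typically tail the per-container log files on each node instead.

```python
import subprocess

# Fetch the last 100 timestamped log lines for a container.
# "web" is a placeholder container name.
result = subprocess.run(
    ["docker", "logs", "--timestamps", "--tail", "100", "web"],
    capture_output=True, text=True, check=True,
)

# Docker interleaves stdout and stderr; both are worth collecting.
for line in (result.stdout + result.stderr).splitlines():
    print(line)
```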

  9. Web Server Logs

Logging web server events can be tough, but web server logs help businesses understand the end user’s interaction with the application. While in the past businesses ignored the need to log web servers, with the increasing need to study user behavior, logs from web servers such as Apache and NGINX are being looked at with new interest. Web server logs contain useful information such as the items below (a small parsing sketch follows the list):

  • User visits and application access logs
  • User login and duration for which the user is logged in
  • Page view count
  • User information, such as the browser used to access the application and the OS version
  • Bot access to the application
  • HTTP requests
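
Both Apache and NGINX can write the widely used “combined” access log format, which the sketch below parses to count status codes and user agents; the access.log path is a placeholder, and a custom log format would need a matching pattern.

```python
import re
from collections import Counter

# Apache/NGINX "combined" format:
# host ident user [time] "request" status bytes "referer" "user-agent"
COMBINED = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

statuses, agents = Counter(), Counter()
with open("access.log", errors="replace") as handle:
    for line in handle:
        match = COMBINED.match(line)
        if match:
            statuses[match.group("status")] += 1
            agents[match.group("agent")] += 1

print("status codes:", statuses.most_common())
print("top user agents:", agents.most_common(5))
```
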
  10. Security Device Logs

As more businesses adopt a cloud-based approach to application delivery, the devices at the edge of customer interaction are becoming extremely valuable. Security devices such as firewalls are experiencing a large spike in traffic loads with this shift to cloud infrastructure. Logs from security devices give details about network security and user activity. Not logging security devices is equivalent to leaving out the final piece of a puzzle.

We outlined a generic set of log sources in this post. The complexity of logging from a myriad of sources is a problem for most of our users, and observIQ is working on fixing that. We have over 60 integrations with log sources at the time of this post, and this list is growing rapidly. We work on these integrations based on popular requests, so feel free to pitch your log source request to our customer support team. In the next post, join us in configuring these log sources in your observIQ account, which you can set up for free!

 
