AWS Partner Network (APN) Blog
How to Collect, Monitor, and Process Logs and Metrics at Scale with Cognitive Insights
By Shashi Raina, Partner Solutions Architect at AWS
By Daniel Berman, Product Marketing Manager at Logz.io
Famed management thinker Peter Drucker is often quoted as saying, “You can’t manage what you can’t measure.” Tracking and analyzing a system’s data provides the metrics needed to measure, predict, and improve the underlying health of that system.
Logging is the simplest way to collect data for measurement, and it plays an important role in modern enterprises because it provides a way to measure the health of hardware devices and software applications alike. Enterprises can run on-premises, on the Amazon Web Services (AWS) Cloud, or a combination of both. Log sources, meanwhile, can be network devices, operating systems, applications, or cloud services, to name a few.
When this log data is large in volume, high in velocity, or varied in format, it poses challenges for storage, processing, and enrichment. It’s not enough to just store the data; you need to react to it. In order to react to data, you need to visualize it and set up alerts so you’re notified when something of interest happens.
For an alert or notification to be effective, the system should be able to reduce the noise by surfacing unique events and contextualizing the errors or notifications that need real and immediate attention. The last thing you want is your operators being bombarded by false alerts and becoming desensitized to critical errors when the time comes to act immediately.
In this post, we’ll explore the features of Logz.io—a unified machine data analytics platform that collects and processes logs and metrics, while also identifying critical events with contextual information for intelligently acting upon them.
What is Logz.io?
Logz.io is an AWS Partner Network (APN) Advanced Technology Partner with AWS Competencies in both Data & Analytics and DevOps. If you want to be successful in today’s complex IT environment, and remain that way tomorrow and into the future, teaming up with an AWS Competency Partner is The Next Smart.
The AWS Competency Program verifies, validates, and vets top APN Partners that have demonstrated customer success and deep specialization in specific solution areas or segments.
Logz.io provides a software-as-a-service (SaaS) platform to ingest data in a secure and scalable fashion. The platform is tightly integrated with native AWS services like AWS Config, Amazon CloudWatch, AWS Lambda, Amazon API Gateway, and Amazon Elastic MapReduce (EMR).
Logz.io provides the ELK Stack—a popular open source log analysis platform—as a scalable and secure service for ingesting data from a multitude of sources. It does the heavy lifting of scaling the platform and frees end users from maintaining the hosting infrastructure. With the ELK Stack as a service, you are provided with a secure endpoint to ingest data and a portal to view and analyze it.
Support for Multiple Platforms
Logz.io provides integration support with a multitude of platforms that can act as a source for ingested data.
Figure 1 illustrates the various integration points, including AWS-specific ones. For a complete list, check out the Log Shipping tab after signing into the Logz.io interface.
Figure 1 – Integration points.
Getting Started
Once you have successfully signed up for Logz.io, log into the portal and take note of your Account Token under Settings, as shown in Figure 2. This is an auto-generated token that uniquely identifies each account and is used to segment ingested data for various tenants on the multi-tenant platform. You’ll need this token to successfully post your data via different integration points.
If for some reason this token is compromised, it can be changed via a support ticket. The Logz.io platform can detect a compromised token and take necessary preventive steps while also informing the user.
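With the token in hand, any of the shippers can tag data for your account. As a quick sanity check before wiring up the full integrations, the short Python sketch below posts a single JSON event over HTTPS. It assumes the Logz.io HTTPS bulk listener at listener.logz.io:8071 accepts the token and log type as query parameters; confirm the exact endpoint for your region under the Log Shipping tab before relying on it.

import json
import requests

LOGZIO_TOKEN = "<your-account-token>"  # the Account Token shown under Settings (Figure 2)

# A single test event, sent as one JSON document per request in this sketch.
event = {"message": "hello from the demo app", "service": "demo"}

resp = requests.post(
    "https://listener.logz.io:8071/",
    params={"token": LOGZIO_TOKEN, "type": "python-demo"},
    data=json.dumps(event),
    headers={"Content-Type": "application/json"},
    timeout=10,
)
resp.raise_for_status()  # a 200 response means the event was accepted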
For the scope of this demo, we’ll use three integration points as the source of log data to be ingested into the platform.
Figure 2 – Account token.
1. AWS CloudTrail
Enable AWS CloudTrail in your AWS account and specify the Amazon Simple Storage Service (Amazon S3) bucket used to store the CloudTrail logs (example: ctshashi). This Amazon S3 bucket is then configured on the Logz.io portal.
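If you prefer to script this step, a trail that delivers to the same bucket can be created with boto3. The sketch below is a minimal example under a few assumptions: the trail name is hypothetical, the ctshashi bucket already exists, and the bucket carries a policy that allows CloudTrail to write to it.

import boto3

# Create a trail that delivers CloudTrail logs to the ctshashi bucket,
# then turn logging on for it.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="logzio-demo-trail",      # hypothetical trail name
    S3BucketName="ctshashi",       # the bucket Logz.io is configured to read
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="logzio-demo-trail")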
Figure 3 – Configuring AWS CloudTrail on Amazon S3.
After saving the configuration with the appropriate secret key of a user that has access to the Amazon S3 bucket, the CloudTrail logs will start showing up in the Logz.io portal. Figure 4 shows the results of a query run on the CloudTrail logs, filtering for an event of the root user signing in using the console. The query syntax is displayed at the top left of the image.
Figure 4 – Root User signing in with the AWS Console.
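As a reference, a filter like the one in Figure 4 can typically be written as a Lucene query along the lines of eventName:ConsoleLogin AND userIdentity.type:Root. The exact field names depend on how the CloudTrail events are parsed in your account, so adjust them to match what you see in the Discover view.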
2. VPC Flow Logs
VPC Flow Logs capture information about IP traffic going into and out of your Virtual Private Cloud (VPC), a subnet, or an individual network interface. This information helps in monitoring IP traffic and troubleshooting connectivity issues.
Using the AWS console, configure the VPC Flow Logs to deliver to a pre-defined Amazon S3 bucket (example: vpclogs11) in a specific AWS Region. In our case, that Region is us-east-1 (N. Virginia).
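The same configuration can be scripted with boto3; here is a minimal sketch under the assumptions above (the VPC ID is a placeholder, and the destination is the vpclogs11 bucket in us-east-1).

import boto3

# Publish flow logs for a VPC directly to the vpclogs11 S3 bucket.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                       # capture accepted and rejected traffic
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::vpclogs11",
)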
I then used the Logz.io portal to configure the Amazon S3 bucket with a Log Type of vpcflow.
Figure 5 – Configuring VPC Flow Log on Logz.io.
Upon saving the configuration, I started to see my VPC logs in the Logz.io portal.
Figure 6 – VPC logs on the Logz.io portal.
3. Application Logs (IIS Logs)
I used AWS Elastic Beanstalk to spin up a sample .NET application running on an IIS web server. I then configured the IIS logs to be shipped to Logz.io.
Next, on the Amazon EC2 instance running IIS:
- Download NXLog and install it.
- Replace the contents of the nxlog.conf file with the text below:
define ROOT C:\\Program Files (x86)\\nxlog
define ROOT_STRING C:\\Program Files (x86)\\nxlog
define CERTDIR %ROOT%\\cert
Moduledir %ROOT%\\modules
CacheDir %ROOT%\\data
Pidfile %ROOT%\\data\\nxlog.pid
SpoolDir %ROOT%\\data
LogFile %ROOT%\\data\\nxlog.log
<Extension charconv>
Module xm_charconv
AutodetectCharsets utf-8, euc-jp, utf-16, utf-32, iso8859-2
</Extension>
#create one for each application
<Input IIS_Site1>
Module im_file
# Location of the log file that will be pushed to Logz.io platform
File "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex*.log"
SavePos TRUE
Exec if $raw_event =~ /^#/ drop();
Exec convert_fields("AUTO", "utf-8");
# Account Token as in Figure 2
Exec $raw_event = '[Insert Account Token][type=iis]' + $raw_event;
</Input>
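# Forward events over TCP to the Logz.io listener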
<Output out>
Module om_tcp
Host listener.logz.io
Port 8010
</Output>
<Route IIS>
Path IIS_Site1 => out
</Route>
- Go to the Services console on your Windows EC2 machine, and restart the NXLog service.
Within seconds, you’ll start to see the IIS logs on the Logz.io portal.
Figure 7 – IIS logs on Logz.io.
At this point, all three sources are pushing their respective data into Logz.io. You can create queries to view the consolidated data from a single pane of glass. You can also save these queries and reuse them as views.
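For example, the IIS events shipped earlier were tagged with type=iis in the NXLog configuration, so a query such as type:iis isolates them in a dedicated view; the CloudTrail and VPC Flow Log data can be filtered the same way by their respective log types. The exact field name for the type can vary with how each source is parsed, so verify it in the Discover view before saving the query.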
Figure 8 – Saved views for the three use cases.
Using different views, I can see all the data in a single pane of glass, and also set up alerts for these views.
Figure 9 – Alert configured to detect Root User signing in using the AWS Console.
Insights
In addition to giving users a consolidated view of all their enterprise logs in a single place, Logz.io provides data enrichment via a feature called Insights. There are two types of insights: Cognitive Insights and Application Insights.
- Cognitive Insights uses machine learning and crowdsourcing to identify correlations between the log data and discussions happening in technical forums, and reveals these as events to examine together with contextual data.
- Application Insights identifies new and unique application errors or exceptions thrown by applications and allows you to correlate these with changes in an environment.
In Figure 10, a Logz.io Cognitive Insight reports a TCP connection error. Right before this error occurred, a marker shows that someone updated a Puppet certificate. This provides the context needed to understand the root cause.
Figure 10 – Insights dashboard.
Opening the insight, we can see a list of discussions on GitHub that we can read to understand how to quickly fix the issue.
Figure 11 – Cognitive Insights.
Summary
Logz.io provides a secure and scalable platform to ingest data from various sources across different formats. This data can come from an enterprise that runs on-premises, on the AWS Cloud, or a combination of both, and from hardware devices and software applications alike.
The Logz.io platform allows users to view this log data from disparate sources and formats in a single pane of glass and set up alerts, while also surfacing intelligent insights that make troubleshooting more efficient.
Learn more and start your 14-day trial. You can also check out these additional resources:
- Logz.io documentation
- Logz.io on AWS Marketplace
- Blog post: VPC Flow Log analysis
- Blog post: AWS CloudTrail ELK Stack
Logz.io – APN Partner Spotlight
Logz.io is an AWS Competency Partner. Its intelligent log analysis platform combines the ELK Stack as a cloud service with machine learning to derive new insights from machine data.
Contact Logz.io | Solution Overview | Buy on Marketplace
*Already worked with Logz.io? Rate this Partner
*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.