AWS Database Blog

Configuring and authoring Kibana dashboards

September 8, 2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service. See details.

Kibana is an open-source data visualization and exploration tool. It can be used for log and time-series analytics, application monitoring, and operational intelligence use cases. Kibana also offers powerful, easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in geospatial support.

Kibana is tightly integrated with Amazon Elasticsearch Service (Amazon ES), a search and analytics engine, to simplify the analysis of large volumes of data. With Kibana's simple, browser-based interface, you can create and share dynamic dashboards quickly.

This post demonstrates how to use Kibana to create visualizations and a dashboard for Amazon Relational Database Service (Amazon RDS) and Amazon Aurora PostgreSQL logs stored in Amazon ES. This post is part two of a two-part series. For part one, refer to Analyze PostgreSQL logs with Amazon Elasticsearch Service.

The following are the high-level steps:

  1. Create index patterns
  2. Aggregate text field
  3. Explore discovery feature
  4. Explore visualization feature
  5. Create visualizations
  6. Create dashboards

Prerequisites

  • A valid AWS account with access to the appropriate AWS services.
  • An Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL database. For more information, see Amazon RDS.
  • Confirm that the database logs are generated with the required content. Check the parameter group and make sure that the following parameters have the correct values:
    • log_lock_waits=1 (true)
    • log_min_duration_statement=5000 (log queries that take more than 5 seconds)
    • log_min_messages=warning
    • log_min_error_statement=error
    • log_statement=ddl
    • log_checkpoints=1 (true)
    • log_connections=1 (true)
    • log_disconnections=1 (true)

Creating index patterns

To visualize and explore data in Kibana, you must first create index patterns. An index pattern points Kibana to the Amazon ES indexes containing the data that you want to explore. For example, to explore the data from a particular database in August 2019, create the index pattern cwl-<dbname>-2019.08*. While setting up the Amazon CloudWatch Logs to Amazon ES streaming, you configured the indexes in the format cwl-<dbname>-yyyy.MM.dd.

To create index patterns, complete the following steps:

  1. Open the Kibana application using the URL from the Amazon ES domain Overview page.
  2. On the navigation panel, choose the gear icon to open the Management page.
  3. Choose Index Patterns.
  4. Choose Create index pattern.
  5. For Index pattern, enter cwl with an asterisk wildcard (cwl-*) as your default index pattern.
  6. For Time filter, choose @timestamp.
  7. Choose Create index pattern.
    This index pattern is set as default automatically. If not, choose the star icon, as shown in the screenshot preview below.
    Screenshot showing the cwl-* index pattern, with the star icon on the right side selected to set it as the default index.
  8. Repeat these steps to create index patterns for each database (cwl-<databaseName>-*). This is a best practice to improve query performance. You can confirm which indexes each pattern matches with the command shown after these steps.
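
To check which indexes a pattern will match, you can list them from the Dev Tools console. The following one-line request uses the standard _cat indices API; the column list shown is only a suggestion:

    GET _cat/indices/cwl-*?v&h=index,docs.count,store.size

Each database should report one index per day, matching the cwl-<dbname>-yyyy.MM.dd naming used by the CloudWatch streaming setup.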

Aggregating on a text field

To see when an error was logged at a particular period or which queries were logged at a particular time, allow aggregation on the message field. Aggregations on a text field are not enabled by default, so modify that setting. For more information, see Getting Started with Amazon Elasticsearch: Filter Aggregations in Kibana.

This setting could consume more memory based on the size of the data. To make the message field aggregatable, complete the following steps:

  1. On the navigation pane, choose Dev Tools.
  2. Enter the following command:

    PUT cwl-*/_mapping?include_type_name=false
    {
      "properties": {
        "message": {
          "type": "text",
          "fielddata": true
        }
      }
    }
  3. Now, set the correct shard value. To allow one shard per index, use the following code:
    PUT /_template/shards_per_index
    {
        "index_patterns": ["cwl-*"],
        "settings": {
            "index" : {
                "number_of_shards": 1,
                "number_of_replicas": 1
            }
        }
    }

For more information, see Get Started with Amazon Elasticsearch Service: How Many Shards Do I Need?
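
To confirm that both settings took effect, you can read them back from Dev Tools. These are the standard GET counterparts of the calls above; the first response should show "fielddata": true for the message field, and the second should echo the shard and replica settings:

    GET cwl-*/_mapping/field/message

    GET _template/shards_per_index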

Exploring the discovery feature

You can explore your data with Kibana’s data discovery functions. With the cwl-* index pattern, which was set as the default when you created the index patterns, you have access to every event from all the databases. You can view data by submitting search queries and filter results using Lucene query syntax. You can also see the number of documents that match the search query and get field value statistics.

To perform ad hoc searches across logs from multiple databases, use the index pattern cwl-*.

To get data from a single database, choose the correct index pattern instead of cwl-*. Find it by searching for the database name in the dropdown menu in the top-left corner of the Discover pane. This boosts query performance by invoking only the indexes of that particular database. The following image shows the populated data.

Screenshot showing populated data from selecting a specific database.

The Selected Fields menu shows the fields you configured while setting up the CloudWatch-to-Amazon ES log streaming (date, time, misc, message). The number of hits is displayed on the top-left corner of the page.

To display only the messages that contain a SELECT statement and omit COPY commands, use the Lucene query syntax: <databaseName> AND message:"select" -message:"copy"

The search commands are as straightforward as performing an online search. For example, you could also use <databaseName> AND select -copy without specifying the context field (message).
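
If you want to run the same kind of search outside Discover, the Dev Tools console accepts Lucene syntax through a query_string query. The sketch below is an approximation that assumes a per-database index pattern (cwl-<dbName>-*), so the database name term can be dropped:

    GET cwl-<dbName>-*/_search
    {
      "size": 5,
      "query": {
        "query_string": {
          "default_field": "message",
          "query": "\"select\" AND NOT \"copy\""
        }
      }
    }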

Exploring the visualization feature

To create visualizations based on the data in your Amazon ES indexes, use the visualization function. Kibana visualizations are based on Amazon ES queries. You can create charts that show trends, spikes, and dips by using a series of Amazon ES aggregations to extract and process the data. You can create visualizations from a search saved with the discovery function or start with a new search query.

To create a new visualization, complete the following steps:

  1. On the navigation pane, choose Visualization.
  2. Choose the add icon as shown in the screenshot preview below, represented by a green square with a + sign.

Screenshot showing the Visualization page and the Add icon represented by a green square with a + sign

The following screenshot shows the various visualization types to choose.

Screenshot preview of visualization option.

Creating visualizations for the RDS PostgreSQL DB logs live dashboard

This post creates the following visualizations and adds them to a dashboard:

  • A visual builder timeline graph of long-running queries.
  • A data table of the locks data.
  • A pie chart of the percentage of error, fatal, and warning messages.
  • A data table of the error, fatal, and warning messages aggregated by time.
  • A data table of the long-running queries aggregated by time.
  • A visual builder bar graph of the checkpoints.
  • A visual builder line graph of the connections and disconnections.
  • A data table of the DDL statements aggregated by time.

This is not an exhaustive list of visualizations, but it helps you understand how to use each type.

Timeline graph of long-running queries

To create a long-running queries timeline graph, complete the following steps:

  1. On the Visualizations pane, choose the add icon (represented by a + sign with a circle around it).
  2. Choose Visual Builder.
  3. For each filter (SELECT, INSERT, UPDATE, and DELETE), choose the add icon.
  4. For Group by, choose Filter.
  5. For Query string, enter the same string as the filter type (select, insert, update, or delete).
    The following screenshot demonstrates steps 3–5.
    Screenshot preview demonstrating steps 3-5.
  6. Choose the Panel options tab.
  7. For Index pattern, enter the appropriate index pattern (cwl-<dbName>*).
  8. Choose No under Drop last bucket?
  9. For Panel filter, enter the database name.
    The following screenshot details steps 6–9.
    Screenshot demonstrating steps 6-9.
  10. Choose Save to save the visualization with an appropriate title (for example, “<dbName> Long-Running Queries Graph”). The output of the visualization is displayed as shown in the image below.
    Preview of the Long Running Queries visualization
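
Behind this panel, Visual Builder issues roughly a date histogram with one filters bucket per query type. The following Dev Tools sketch approximates it; the index name is a placeholder, and on Elasticsearch 6.x domains replace fixed_interval with interval:

    GET cwl-<dbName>-*/_search
    {
      "size": 0,
      "aggs": {
        "over_time": {
          "date_histogram": {
            "field": "@timestamp",
            "fixed_interval": "30m"
          },
          "aggs": {
            "query_type": {
              "filters": {
                "filters": {
                  "select": { "query_string": { "query": "select" } },
                  "insert": { "query_string": { "query": "insert" } },
                  "update": { "query_string": { "query": "update" } },
                  "delete": { "query_string": { "query": "delete" } }
                }
              }
            }
          }
        }
      }
    }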

Data table of locks data

To create a data table of locks data, complete the following steps:

  1. On the Discover page, select the appropriate index pattern (cwl-<dbName>*).
  2. Add message as the selected field from the list of Available fields by hovering on the message and choosing the Add button. The screenshot below previews the message field selected.
    Screenshot preview of the message field selected
  3. For Filters, enter the Lucene query: lock OR exclusivelock OR accesssharelock, as shown in the screenshot below.
    Preview of the Filters page with the query entered.
  4. Choose the Refresh button.
  5. Choose Save to save the search with an appropriate title (for example, <dbname> Locks Info). The output of the visualization is displayed as shown in the image below.
    Preview of the visualization output.

Pie chart of error, fatal, and warning messages

To create an error, fatal, and warning messages pie chart, complete the following steps:

  1. On the Visualizations pane, choose the add icon.
  2. Choose Pie chart.
  3. From the Index pattern list, choose the appropriate index pattern (cwl-<dbName>*).
  4. Choose Split Slices under Buckets category.
  5. For Aggregation, choose Filters.
  6. Add the following filters and choose Add Filter for each:
    • "<dbName>" AND "error"
    • "<dbName>" AND "fatal"
    • "<dbName>" AND "warning"
      The following screenshot shows the filters added under the Buckets section.
      Screenshot of the filters added under the Buckets section.
  7. Choose Save to save the visualization with an appropriate title (for example, <dbName> Error, Fatal, and Warning Chart). The output of the visualization is displayed as shown in the image below.
    Visualization of the Errors/Fatal/Warn pie chart
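
Split Slices with a Filters aggregation corresponds to an Elasticsearch filters aggregation. A minimal Dev Tools sketch (index and database names are placeholders) that returns one bucket count per severity looks like this:

    GET cwl-<dbName>-*/_search
    {
      "size": 0,
      "aggs": {
        "message_level": {
          "filters": {
            "filters": {
              "error": { "query_string": { "query": "\"<dbName>\" AND \"error\"" } },
              "fatal": { "query_string": { "query": "\"<dbName>\" AND \"fatal\"" } },
              "warning": { "query_string": { "query": "\"<dbName>\" AND \"warning\"" } }
            }
          }
        }
      }
    }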

Data table of the error, fatal, and warning messages

To create an error, fatal, and warning messages data table aggregated by time, complete the following steps:

  1. On the Visualizations pane, choose the add icon.
  2. Choose Data Table.
  3. From the Index pattern list, choose the appropriate index pattern (cwl-<dbName>*).
  4. Choose Add metrics.
  5. Under Metric, for Aggregation, choose Top Hit.
  6. For Field, choose message.
    The following screenshot preview details steps 4–6.
    Screenshot detailing steps 4 - 6.
  7. Under Buckets, choose Split Rows.
  8. For Aggregation, choose Date Histogram as shown in the screenshot preview below.
    Screenshot detailing steps 7-8.
  9. Choose the play icon as shown below.
    Screenshot preview of the play button.
  10. For Filters, add the query string in Lucene syntax "<database name>" AND ("error" OR "fatal" OR "warning"), as shown in the screenshot preview below.
    Screenshot of the Filters field with "<database name>" AND ("error" OR "fatal" OR "warning") entered.
  11. Choose the Refresh button.
  12. Choose Save to save the visualization with an appropriate title (for example, <dbname> Error, Fatal, and Warning Messages). The output of the visualization is displayed as shown in the image below.
    Screenshot of the Error Messages visualization
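
This data table pairs a Date Histogram bucket with a Top Hit metric. A rough Dev Tools equivalent is shown below; names are placeholders, and the same fixed_interval versus interval caveat applies on older domains:

    GET cwl-<dbName>-*/_search
    {
      "size": 0,
      "query": {
        "query_string": {
          "query": "\"<dbName>\" AND (\"error\" OR \"fatal\" OR \"warning\")"
        }
      },
      "aggs": {
        "per_hour": {
          "date_histogram": {
            "field": "@timestamp",
            "fixed_interval": "1h"
          },
          "aggs": {
            "latest_message": {
              "top_hits": {
                "size": 1,
                "sort": [ { "@timestamp": { "order": "desc" } } ],
                "_source": { "includes": [ "message" ] }
              }
            }
          }
        }
      }
    }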

Data table of long-running queries

To create a long-running queries data table aggregated by time, follow the procedure as with creating the data table for messages, with the following changes:

  • Replace the query string with <database name> AND (("duration" AND "insert") OR ("duration" AND "select") OR ("duration" AND "delete") OR ("duration" AND "update")).
  • Choose Save to save the visualization with an appropriate title (for example, <dbname> Long-Running Queries). The output of the visualization is displayed as shown in the image below.

Screenshot preview of the Long-Running Queries visualization.

Graph of checkpoints

To create a checkpoints graph, complete the following steps:

  1. On the Visualizations pane, choose the add icon.
  2. Choose Visual Builder.
  3. For Group by, choose Filters.
  4. In the Filters fields, enter "checkpoint complete" and "Checkpoints", as shown in the screenshot preview below.
    Screenshot of the Filters fields with "checkpoint complete" and "Checkpoints" entered.
  5. Choose the Options tab.
  6. Add the appropriate index pattern (cwl-<dbName>*) in the Index pattern text box.
  7. Choose No under Drop last bucket?
  8. For Panel filter, enter the database name.
  9. For Chart type, choose Bar.
  10. Choose Save to save the visualization with an appropriate title (for example, <dbName> Checkpoint Graph).  The output of the visualization is displayed as shown in the image below.
    Screenshot of the Checkpoints visualization

Graph of connections and disconnections

To create a graph of connections and disconnections, complete the following steps:

  1. On the Visualizations pane, choose the add icon.
  2. Choose Visual Builder.
  3. For each filter (Connection and Disconnection), choose the add icon.
  4. For Group by, choose Filter.
  5. For Query string, enter either "connection authorized" or "disconnection", depending on the filter title.
    The following screenshot preview details these steps.
    Screenshot detailing steps 4 and 5.
  6. Choose the Options tab.
  7. Add the appropriate index pattern (cwl-<dbName>*) in the Index pattern text box.
  8. Choose No under Drop last bucket?
  9. For Panel filter, enter the database name.
  10. Choose Save to save the visualization with an appropriate name (for example, <dbName> Connections and Disconnections Graph). The output of the visualization is displayed as shown in the image below.
    Preview of the Connections & Disconnections visualization.

Data table of DDL statements

To create a data table of DDL statements aggregated by time, follow the procedure for creating the data table for messages, with the following changes:

  • Replace the query string with "<database name>" AND (alter OR drop OR create OR copy OR grant).
  • Choose Save to save the visualization with an appropriate name (for example, <dbname> DDL Statements). The output of the visualization is displayed as shown in the image below.

Preview of the DDL Statements visualization

Dashboard

The dashboard displays a collection of visualizations and searches. You can arrange, resize, and edit dashboard content. You can also save and share the dashboard. To configure the dashboard, complete the following steps:

  1. From the navigation pane, choose Dashboard.
  2. Choose the add icon.
  3. Choose each visualization in the list to add it to the dashboard panel. To add the saved visualizations from your discovery search, use the Saved Search tab, as shown in the screenshot below.
    Screenshot of the Saved Search selection in the Add Panels screen.
    The visualizations and searches in a dashboard are stored in panels that you can move, resize, edit, and delete. To start editing, click Edit in the menu bar.
    – To move a panel, click and hold the panel header and drag to the new location.
    – To resize a panel, click the resize control on the lower right and drag to the new dimensions.
  4. Adjust the panels as required.
  5. Choose Save to save the dashboard with the database name as the title.
  6. Choose Store time with dashboard and Confirm Save.
    The following screenshot previews the entire dashboard page with each created visualization.
    Screenshot of the entire dashboard with created visualizations.

Summary

This post demonstrated how visualizations help you understand your log data efficiently and how the Discover feature lets you look at the formatted data stored in Amazon ES. Furthermore, it showed how dashboards help you view multiple visualizations on a single screen.

With this understanding of how to create visualizations and dashboards from the PostgreSQL logs data, you can easily try out other charts. This post is part two of a two-part series. For part one, refer to Analyze PostgreSQL logs with Amazon Elasticsearch Service. AWS welcomes feedback, so please leave your comments and questions below.

 


About the Author

 

Marcel George is a Consultant with Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogeneous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL.