How do I connect to Amazon Elasticsearch Service using Filebeat and Logstash on Amazon Linux?

Last updated: 2021-03-18

I'm trying to connect to an Amazon Elasticsearch Service (Amazon ES) cluster using Logstash on Amazon Linux. However, I keep getting an error. How do I resolve this?

Short description

To connect to Amazon ES using Logstash, perform the following steps:

1.    Set up your security ports (such as port 443) to forward logs to Amazon ES.

2.    Update your Filebeat, Logstash, and Elasticsearch configurations.

3.    Install Filebeat on your source Amazon Elastic Compute Cloud (Amazon EC2) instance. Make sure that you've correctly installed and configured your YAML and CONF files.

4.    Install Logstash on a separate Amazon EC2 instance from which the logs will be sent.

If you haven't correctly set up or configured Logstash, you'll receive one of these errors: 401 Unauthorized error, 403 Forbidden error, or x-pack installation error.

Resolution

Set up your security ports

Make sure to set up your security ports so that your Amazon Elastic Compute Cloud (Amazon EC2) instance can forward logs to Amazon ES.

To set up your security ports to forward logs from Logstash, perform the following steps:

1.    Create an Amazon EC2 instance with Apache and Filebeat installed. This instance must be able to forward its logs through Logstash to Amazon ES.

2.    Make sure that your EC2 instances are in the same security group as your Amazon ES domain's VPC.

3.    Make sure that the following ports are open in your security group: 80, 443, and 5044. These ports must be open so that you can send data between Logstash and Amazon ES.
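
For example, you can add these inbound rules with the AWS CLI. The security group ID and CIDR range below are placeholders; replace them with your own values.

# Placeholder security group ID and CIDR range -- replace with your own values
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5044 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 10.0.0.0/16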

Update your Filebeat, Logstash, and Elasticsearch configurations

Make sure that the same version number is being used for the following:

  • Filebeat version x.x OSS
  • Logstash version x.x OSS
  • Elasticsearch version x.x

Note: Amazon ES runs best when you use OSS versions of Filebeat and Logstash. It's also a best practice to use the same version number for Filebeat, Logstash, and Elasticsearch.

To keep the updated configurations in sync, download the RPMs to each (separate) instance. To prevent a single point of failure, avoid running both RPM installations on the same instance. Then, verify that the downloaded files are available.
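
For example, after you complete the installations in the following sections, you can confirm that the versions match with checks like these (the domain endpoint is a placeholder):

# On the Filebeat instance
rpm -q filebeat

# On the Logstash instance
rpm -q logstash

# Check the Elasticsearch version reported by your Amazon ES domain
curl -XGET https://your-amazon-es-domain-endpoint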

Install Filebeat on the source Amazon EC2 instance

1.    Download the RPM for the desired version of Filebeat:

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-6.7.0-x86_64.rpm

2.    Install the Filebeat RPM file:

rpm -ivh filebeat-oss-6.7.0-x86_64.rpm
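
(Optional) To confirm that Filebeat installed successfully, check the package and binary versions:

rpm -qi filebeat
filebeat version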

Install Logstash on a separate Amazon EC2 instance from which the logs will be sent

1.    Download the RPM file of the desired Logstash version:

wget https://artifacts.elastic.co/downloads/logstash/logstash-oss-6.7.0.rpm

This example uses version 6.7 to match the version number of Elasticsearch and Filebeat.

2.    Install the RPM file that you downloaded for Logstash using the rpm command:

rpm -ivh logstash-oss-6.7.0.rpm

3.    Install Java or OpenJDK on your Amazon EC2 instance:

yum install java-1.8.0-*

Note: Logstash requires Java to run. In this example, we're using Java version 8 (OpenJDK 1.8), which is supported by all versions of Logstash. For more information about the supported versions of Java and Logstash, see the Elasticsearch support matrix on the Elastic website.
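
To confirm that Java and Logstash installed correctly, check their versions:

java -version
/usr/share/logstash/bin/logstash --version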

4.    Verify the configuration files by checking the /etc/filebeat and /etc/logstash directories.
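
For example:

# On the Filebeat instance
ls -l /etc/filebeat/

# On the Logstash instance
ls -l /etc/logstash/ /etc/logstash/conf.d/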

5.    For Filebeat, update the output to either Logstash or Elasticsearch, and specify which logs must be sent. Then, start your service.

Note: If you try to upload templates to Kibana with Filebeat, your upload fails. Filebeat assumes that your cluster has x-pack plugin support.

6.    Update your Filebeat YAML configuration file to send Apache access logs to Logstash.

For example:

filebeat.inputs:
- type: log
  paths:
    - /var/log/httpd/access_log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

#output.elasticsearch:
#  hosts: ["vpc-examplestack-5crrfyysa2ratcl3ursmung33q.us-east-1.es.amazonaws.com:443"]
#  protocol: "https"

output.logstash:
  # The Logstash hosts
  hosts: ["Logstash-EC2-InstanceIP:5044"]

setup.ilm.enabled: false
ilm.enabled: false
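
(Optional) Before you start the service, you can validate the YAML file and the configured output. These Filebeat subcommands are available in Filebeat 6.x and later:

# Check the configuration file for syntax errors
filebeat test config -c /etc/filebeat/filebeat.yml

# Confirm that Filebeat can reach the configured Logstash output
filebeat test output -c /etc/filebeat/filebeat.yml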

7.    Make sure that your Logstash configuration file accepts Filebeat connections on port 5044. This port access allows Logstash to forward the requests to your Amazon ES VPC endpoint.

For example:

input {
  beats {
    port => 5044
  }
}
 
output {
  elasticsearch {
    hosts => ["https://vpc-examplestack-5crrfyysa2ratcl3ursmung33q.us-east-1.es.amazonaws.com:443"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    ilm_enabled => false
  }
}
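
(Optional) You can check the Logstash pipeline for syntax errors before starting the service. The file name below assumes that you saved the pipeline as /etc/logstash/conf.d/logstash.conf:

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf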

8.    Start the Filebeat and Logstash services with the following commands on each instance.

Filebeat:

systemctl start filebeat (service filebeat start)

Logstash:

cp /etc/logstash/logstash.conf /etc/logstash/conf.d/
systemctl start logstash (service logstash start)
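
To confirm that both services started and that Logstash is listening for Filebeat connections, you can run checks like the following:

systemctl status filebeat
systemctl status logstash

# On the Logstash instance, confirm that port 5044 is listening
sudo ss -tlnp | grep 5044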

9.    Run a cat indices API call to your Amazon ES domain to confirm that the Filebeat logs are being sent. If your logs are successfully sent, you'll receive a response similar to the following:

curl -XGET https://vpc-examplestack-5crrfyysa2ratcl3ursmung33q.us-east-1.es.amazonaws.com/_cat/indices

green open filebeat-7.1.0-2020.02.12 f97c4WnuQ-CtsAJJaJHUlg 1 1 1511515 0 249.7mb 124.7mb
green open .kibana_1                 Ioco6fUoSCGkaOvHNCL39g 1 1       1 0   7.4kb    3.7kb

By default, the Filebeat indices rotate daily. Here's an example output of a Filebeat index:

curl -XGET https://vpc-examplestack-5crrfyysa2ratcl3ursmung33q.us-east-1.es.amazonaws.com/_cat/indices

green open filebeat-7.1.0-2020.02.12 f97c4WnuQ-CtsAJJaJHUlg 1 1 1511515 0 249.7mb 124.7mb
green open .kibana_1                 Ioco6fUoSCGkaOvHNCL39g 1 1       1 0   7.4kb    3.7kb
green open filebeat-7.1.0-2020.02.13 4i8W0smlRGGFcQOaDMxonA 1 1      89 0 207.1kb 118.1kb

If you successfully configure Elasticsearch, Logstash, and Kibana (ELK) on your Amazon Linux EC2 instances, your pipeline looks like this:

Filebeat > Logstash > Amazon ES/Kibana

401 Unauthorized error

A 401 Unauthorized error from Logstash indicates that your Amazon ES domain is protected by fine-grained access control (FGAC) or Amazon Cognito. FGAC requires signed requests from a user or role that is defined in the domain's access policy. If you receive a 401 Unauthorized error, make sure that you've added valid FGAC credentials to your Logstash configuration file.

For example:

output {
  elasticsearch {
    hosts => ["https://vpc-examplestack-5crrfyysa2ratcl3ursmung33q.us-east-1.es.amazonaws.com:443"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    ilm_enabled => false
    user => "elastic"
    password => "changeme"
  }
}
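
To check whether your credentials are accepted before restarting Logstash, you can query the domain directly with curl. The user name and password below are placeholders for your FGAC master user:

curl -XGET -u 'elastic:changeme' https://vpc-examplestack-5crrfyysa2ratcl3ursmung33q.us-east-1.es.amazonaws.com/_cat/indices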

403 Forbidden error

When you configure Logstash to send data to Amazon ES, you might receive a 403 Forbidden error. This error occurs when Logstash doesn't have the required permissions. To resolve this issue, make sure to sign your requests to Amazon ES using AWS Identity and Access Management (IAM) credentials.

To sign Amazon ES requests using Logstash, follow these steps:

1.    Install the Logstash plugin for Amazon ES:

bin/logstash-plugin install logstash-output-amazon_es
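
To confirm that the plugin installed successfully, list the installed Logstash plugins:

bin/logstash-plugin list | grep amazon_es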

2.    Attach an IAM role to the Amazon EC2 instance with a policy like the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "es:ESHttp*"
            ],
            "Resource": "[Amazon-ES-Domain-ARN]"
        }
    ]
}
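
If the role is already associated with an instance profile, you can attach it to the instance with the AWS CLI. The instance ID and profile name below are placeholder values:

aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=logstash-es-access-profile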

3.    Update your Logstash configuration to use the amazon_es output plugin:

output {
  amazon_es {
    hosts => ["domain-endpoint"]
    ssl => true
    region => "us-east-1"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Logstash x-pack installation error

If you encounter errors with x-pack when you start up Logstash, manually disable the x-pack plugin from your registry file.

To manually disable the x-pack plugin, follow these steps:

1.    Open the following file:

/usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb

2.    Find the load_xpack line and comment it out:

load_xpack unless LogStash::OSS

becomes:

#load_xpack unless LogStash::OSS
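
A minimal way to make this change from the command line, assuming the default installation path and that the line appears exactly as shown, is with sed:

sudo sed -i 's/load_xpack unless LogStash::OSS/#load_xpack unless LogStash::OSS/' /usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb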

Note that the configuration files shown earlier set the index lifecycle management (ILM) settings (ilm.enabled and ilm_enabled) to "false". Disabling these ILM settings eliminates startup errors for the x-pack plugin.

