
So I spun up a 2-instance Amazon Elasticsearch cluster.

I have installed the logstash-output-amazon_es plugin. This is my Logstash configuration file:

input {
    file {
        path => "/Users/user/Desktop/user/logs/*"
    }
}

filter {
  grok {
    match => {
      "message" => '%{COMMONAPACHELOG} %{QS}%{QS}'
    }
  }

  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    locale => en
  }

  useragent {
    source => "agent"
    target => "useragent"
  }
}

output {
    amazon_es {
        hosts => ["foo.us-east-1.es.amazonaws.com"]
        region => "us-east-1"
        index => "apache_elk_example"
        template => "./apache_template.json"
        template_name => "apache_elk_example"
        template_overwrite => true
    }
}

Now I am running this from my terminal:

/usr/local/opt/logstash/bin/logstash -f apache_logstash.conf

I get the error:

Failed to install template: undefined method `credentials' for nil:NilClass {:level=>:error}

I think I have gotten something completely wrong. Basically I just want to feed some dummy log input to my Amazon Elasticsearch cluster through Logstash. How should I proceed?

Edit: Storage type is Instance and the access policy is set to accessible to all.

Edit: I also tried the plain elasticsearch output plugin:

output {
    elasticsearch {
        hosts => ["foo.us-east-1.es.amazonaws.com"]
        ssl => true
        index => "apache_elk_example"
        template => "./apache_template.json"
        template_name => "apache_elk_example"
        template_overwrite => true
    }
}

3 Answers


I also faced the same problem and solved it by adding the port after the hostname. It occurs because hosts => ["foo.us-east-1.es.amazonaws.com"] implies foo.us-east-1.es.amazonaws.com:9200, and 9200 is not the port AWS Elasticsearch listens on. Changing the host to foo.us-east-1.es.amazonaws.com:80 solves the problem.
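
For example, keeping the same placeholder endpoint from the question, the output block would become something like this (a sketch, not your exact config):

```
output {
    amazon_es {
        # explicit :80 instead of the implied :9200
        hosts => ["foo.us-east-1.es.amazonaws.com:80"]
        region => "us-east-1"
        index => "apache_elk_example"
    }
}
```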




You need to provide the following two parameters:

  • aws_access_key_id and
  • aws_secret_access_key

Even though they are described as optional parameters, a comment in the plugin code makes it clear that they are currently required:

aws_access_key_id and aws_secret_access_key are currently needed for this plugin to work right. Subsequent versions will have the credential resolution logic as follows:
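
Concretely, that means adding the two keys to the amazon_es output block. A sketch (the key values and endpoint here are obviously placeholders, not working credentials):

```
output {
    amazon_es {
        hosts => ["foo.us-east-1.es.amazonaws.com"]
        region => "us-east-1"
        # placeholder credentials for an IAM user allowed to access the domain
        aws_access_key_id => "AKIAXXXXXXXXXXXXXXXX"
        aws_secret_access_key => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        index => "apache_elk_example"
    }
}
```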

3 Comments

That's probably not a logstash issue but more likely a wrong security setting on the AWS IAM side.
Yeah, I am trying to figure that out. Meanwhile I thought of using just the elasticsearch output plugin; with that I am getting connect timed out {:class=>"Manticore::ConnectTimeout", :level=>:error}. I have added the code for that as an edit in case you could suggest something.
I am going to update the comment. This has been fixed, and we have a lot of customers resolving credentials using instance profiles.

I was able to run Logstash against AWS Elasticsearch without access keys by configuring the access policy in the ES service.

It works without the keys if you start Logstash manually; if you start Logstash as a service, the plugin doesn't work.

https://github.com/awslabs/logstash-output-amazon_es/issues/34
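
For reference, an open resource-based access policy on the ES domain looks roughly like this (the account ID and domain name are placeholders; note that this allows anyone to reach the domain):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/foo/*"
    }
  ]
}
```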

2 Comments

Not a great idea, as that means your Elasticsearch instance is publicly accessible.
agree that is not the best solution :)
