
I've gotten Serilog writing to a rolling file okay and started implementing a connection to a Docker-based Elasticsearch instance. However, I am having connection issues.

To start, my application is running locally and there are two big takeaways:

  • I can reach the Elastic service from my machine (screenshot of the Elastic server's response)
  • Kibana is showing that my service created an index with the name I specified (screenshot of the Kibana dashboard)

However, when I look at the self-log output (enabled via Serilog.Debugging.SelfLog.Enable(msg => Debug.WriteLine(msg));), I see there's some sort of connection issue with Elastic:

Caught exception while preforming bulk operation to Elasticsearch: Elasticsearch.Net.ElasticsearchClientException: Maximum timeout reached while retrying request

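For context, here's roughly how the self-log is wired up at startup (a minimal sketch; the important part is enabling it before the logger is built):

using System.Diagnostics;
using Serilog.Debugging;

// Enable Serilog's self-log before configuring the logger so that
// sink failures (like the bulk-operation timeout above) are visible.
SelfLog.Enable(msg => Debug.WriteLine(msg));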

My appsettings.json (yes, I am configuring through JSON and not code):

"Serilog": {
    "Using": ["Serilog.Sinks.Console", "Serilog.Sinks.File", "Serilog.Sinks.ElasticSearch"],
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    },
    "Enrich": ["FromLogContext", "WithExceptionDetails"],
    "WriteTo": [
      { "Name": "Console" },
      { "Name": "Debug" },
      {
        "Name": "File",
        "Args": {
          "path": "%LogDir%\\someserverpath.xyz.com\\log-.txt",
          "rollingInterval": "Day",
          "shared": true
        }
      },
      {
        "Name":  "Elasticsearch",
        "Args": {
          "nodeUris": "http://sxdockertst1:9200",
          "indexFormat": "imaging4cast-index-{0:yyyy.MM}",
          "emitEventFailure": "WriteToSelfLog",
          "autoRegisterTemplate": true
        } 
      }
    ],
    "Properties": {
      "Application":  "xyz.yyy.Imaging4CastApi" 
    } 
  },
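For completeness, the logger is built from this section roughly like so (a minimal sketch, assuming the Serilog.Settings.Configuration package is referenced):

using Microsoft.Extensions.Configuration;
using Serilog;

// Build configuration from appsettings.json and hand the
// "Serilog" section above to the logger.
var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .CreateLogger();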

I feel like it's making a rudimentary connection because, otherwise, how else would the index have been created? There's no auth on the Elastic server, either. But actually pushing a log message doesn't appear to be working...
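As a sanity check, hitting the node root from my machine does return the cluster metadata, which is how I confirmed basic reachability:

# Basic reachability check; the node answers with cluster metadata
curl http://sxdockertst1:9200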

I'm at a loss here...

  • I updated to use a fully-qualified URI in nodeUris but still get a timeout, FYI. Commented Feb 22, 2019 at 16:58

1 Answer


Okay, so I figured this out.

The reason it was not posting messages was that the hard drive (technically, /var on the Docker host) was full. I had to clean up about 15 GB of logs and messages in /var.

Then, I needed to run this command to get Elastic out of read-only mode. (I quickly encountered an issue where I couldn't create indexes in Kibana for dashboarding, and it was because Elastic had gone into read-only mode: when disk usage crosses the flood-stage watermark, 95% by default, Elasticsearch automatically sets the read_only_allow_delete block on indices.)

curl -XPUT -H "Content-Type: application/json" http://sxdockertst1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
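To verify the block was actually lifted, and to keep an eye on disk headroom afterwards, these read-only queries are handy:

# Confirm no index still carries the read_only_allow_delete block
curl http://sxdockertst1:9200/_all/_settings?pretty

# Show per-node disk usage vs. capacity
curl "http://sxdockertst1:9200/_cat/allocation?v"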

4 Comments

What happens in the meantime if the connection to ELK is lost? Will you lose your logs as well? Or will it store locally & retry to send when the ELK connection is up?
I think it depends on your logging implementation. I believe Serilog does have a buffer to store logs (which can also be why you might not see records in Kibana immediately after you know a log was triggered, because the buffer hasn't flushed to Elastic yet). However, I suspect logs would be lost if Elastic was in RO mode for any significant period of time, unless you enable the sink's durable buffer (see the sketch after these comments). That's why I always have a backup File sink set up.
OK, so you have a file sink + ELK sink. That was my initial idea too, but now I'm thinking of using either Logstash or Fluentd and not the code.
Yep, and I think that's actually a more popular model: pulling via Logstash rather than pushing from the code.
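For reference on the buffering question above: the Elasticsearch sink also has a durable mode that spools events to disk and ships them once the connection recovers. A sketch of the relevant Args, building on the config from the question (the buffer path is hypothetical; the .NET configuration reader tolerates the // comment):

{
  "Name": "Elasticsearch",
  "Args": {
    "nodeUris": "http://sxdockertst1:9200",
    "indexFormat": "imaging4cast-index-{0:yyyy.MM}",
    // hypothetical path; setting bufferBaseFilename switches the sink to durable mode
    "bufferBaseFilename": "C:\\Logs\\elastic-buffer",
    "autoRegisterTemplate": true
  }
}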
