
I'm currently seeing the following error repeated in the Docker logs for my Logstash 6.5.4 container:

[2019-02-18T17:12:17,098][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2019.02.16", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x2cb19039>], :response=>{"index"=>{"_index"=>"logstash-2019.02.16", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [_default_]: No field type matched on [float], possible values are [object, string, long, double, boolean, date, binary]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"No field type matched on [float], possible values are [object, string, long, double, boolean, date, binary]"}}}}}

Here is my JSON template:

{
  "template": "logstash-*",
  "order": 1, 
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  },
  "mappings": {
    "_default_": {
      "properties": {
        "time": {
          "type": "date",
          "format": "basic_time_no_millis"
        },
        "before": {
          "type": "date",
          "format": "strict_date_time"
        },
        "after": {
          "type": "date",
          "format": "strict_date_time"
        },
        "logsource": {
          "type": "ip"
        }
      }
    } 
  }
}

and here is my Logstash config:

input {
  redis {
    host => "${REDIS_0_HOST}"
    port => "${REDIS_0_PORT}"
    data_type => "list"
    key => "logstash"
  }
}
input {
  redis {
    host => "${REDIS_1_HOST}"
    port => "${REDIS_1_PORT}"
    data_type => "list"
    key => "logstash"
  }
}

filter {

  # if we were successful parsing a message from the raw log, let's dive deeper into the message and assign more fields 
  if [message] {

    # catch gelatin lib output on startup in containers and drop them
    if "20500017" in [message] { drop { } }
    if "2050001c" in [message] { drop { } }

    # remove trailing whitespace from message field
    mutate {
      strip => ["message"]
    } 

    # handle message repeated X times messages 
    grok {
      match => ["message", "message repeated %{NUMBER:repeat_count} times: \[ %{GREEDYDATA:message}\]"]
      overwrite => [ "message" ]
      tag_on_failure => [ ]
    }

    # handle message fields that already have structured json content
    if [program] == "austin-perf" { 
      json {
        source => "message"
        remove_field => ["message"]
      }
    } else { 
      grok {
        break_on_match => true
        patterns_dir => ["/usr/share/logstash/config/patterns"]
        match => [ 
          "message", "%{OBLOG_REVIVE_DATE}",
          "message", "%{OBLOG_REVIVE}",
          "message", "%{OBLOG_DATE}",
          "message", "%{OBLOG}",
          "message", "%{WORD}, \[%{TIMESTAMP_ISO8601} #%{NUMBER}\]  ?%{WORD:level} -- : %{GREEDYDATA:kvpairs}", # ruby app logs
          "message", "%{USERNAME:level}: ?%{PATH:file} %{NUMBER:line_num} %{GREEDYDATA:kvpairs}",
          "message", "%{USERNAME:level}: ?%{GREEDYDATA:kvpairs}",
          "message", "%{URIPATH:file}:%{POSINT:line_num}" #ruby app exceptions
        ]
      }

      if "\." not in [kvpairs] {
        kv {
          source => "kvpairs"
          include_keys => [
            "pulse_git_events",
            "pulse_trending_count",
            "pulse_news_count",
            "kafka_records",
            "repeat_count",
            "used_memory",
            "new_kafka_articles",
            "wcs_training_time",
            "rokerbot_event",
            "health_check",
            "rokerbot_bot_utterance",
            "rokerbot_user_utterance",
            "Date_Conn_Time",
            "Date_Query_Time",
            "Date_Parse_Time",
            "News_Conn_Time",
            "News_Query_Time",
            "NEWS_FAIL_TIME",
            "writing_image",
            "timed_app",
            "ran_for",
            "app_name",
            "klocker_app_name",
            "memory_used",
            "cpu_usage",
            "rss_mem",
            "vms_mem",
            "shared_mem",
            "uss_mem",
            "pss_mem",
            "text_mem",
            "data_mem",
            "total_gpu_mem",
            "used_gpu_mem",
            "free_gpu_mem"
          ] 
        }
      }

      prune {
        blacklist_names => ["%{URI}"]
      }
    }

    if [file] and [line_num] { 
      mutate {
        add_field => {
          "test_unique" => "%{file}:%{line_num}"
        }
      }
    }
  }

  mutate {
    convert => {
      "pulse_git_events" => "integer"
      "pulse_trending_count" => "integer"
      "pulse_news_count" => "integer"
      "kafka_records" => "integer"
      "repeat_count" => "integer"
      "used_memory" => "integer"
      "new_kafka_articles" => "integer"
      "wcs_training_time" => "integer"
      "ran_for" => "integer"
      "Date_Conn_Time" => "integer"
      "Date_Query_Time" => "integer"
      "Date_Parse_Time" => "integer"
      "News_Conn_Time" => "integer"
      "News_Query_Time" => "integer"
      "NEWS_FAIL_TIME" => "integer"
      "memory_used" => "integer"
      "cpu_usage" => "float"
      "rss_mem" => "integer"
      "vms_mem" => "integer"
      "shared_mem" => "integer"
      "uss_mem" => "integer"
      "pss_mem" => "integer"
      "text_mem" => "integer"
      "data_mem" => "integer"
      "total_gpu_mem" => "integer"
      "used_gpu_mem" => "integer"
      "free_gpu_mem" => "integer"
    }

    lowercase => [ "level" ]
    remove_field => [ "timestamp", "kvpairs", "type", "_type" ]

    add_field => {
      "time" => "%{+HHmmssZ}"
      "weekday" => "%{+EEE}"
    }
  }
}

output {
  elasticsearch {
    hosts => ["${ES_DATA_0}","${ES_DATA_1}"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

Under the current config it would seem the float value for cpu_usage is causing the issue, but the Logstash mutate filter doesn't support converting to double, only float. This is an upgraded Logstash container from what I believe was 5.1.x.

4 Answers


There was an old existing template that ES was looking at instead of mine. Deleting it solved the problem.
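For anyone wondering where these templates live: they are stored in the cluster state, not on disk, so you inspect and delete them through the REST API. A minimal sketch using Kibana Dev Tools; the name logstash below is an assumption (it is the default name the Logstash elasticsearch output uses for the template it manages), so check the output of the first call for the actual culprit:

# list all index templates; look for one whose pattern also matches logstash-*
GET /_template

# inspect the suspected template (name assumed)
GET /_template/logstash

# delete the stale template (name assumed)
DELETE /_template/logstash

Any template whose pattern matches logstash-* and still carries an old [float] dynamic mapping is a candidate for deletion.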


1 Comment

Can you elaborate on where these templates are located?

It seems you may have to extend the template, for example by adding a "match_mapping_type" for floats.
Check this related answer as well.
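For illustration, a minimal sketch of such an extension, layered on top of the template from the question (the template name logstash-floats and order 2 are my own choices). Note that in ES 5.x+ float is no longer a valid match_mapping_type, which is exactly what the error complains about; JSON floating-point values are matched as double and can then be mapped to a concrete float type:

PUT /_template/logstash-floats
{
  "template": "logstash-*",
  "order": 2,
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "floats": {
            "match_mapping_type": "double",
            "mapping": {
              "type": "float"
            }
          }
        }
      ]
    }
  }
}

With something like this in place, a dynamically created floating-point field such as cpu_usage maps cleanly to float.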

2 Comments

There was an old existing template that ES was looking at instead of mine. Deleting it solved the problem.
I believe it's OK to post your solution as an answer, so go ahead and do that instead of just commenting on mine.

Follow the above answers; then, once you have modified your mapping, delete the previously created index in ES, and stop and start Logstash again.
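For example, using the index name taken from the error message in the question (run in Kibana Dev Tools; the container name in the restart command is an assumption):

DELETE /logstash-2019.02.16

docker restart logstash

The next event that arrives will recreate the index, this time using the corrected template.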



There is a setting called "action.auto_create_index"; see Enable automatic creation of system indices in the Elasticsearch docs.

  1. Enable auto create index

You need to enable the "action.auto_create_index" setting in your elasticsearch.yml file.
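For illustration, a minimal sketch of that line in elasticsearch.yml; the pattern list mirrors the API call below, and logstash-* is an assumption matching the index names from the question:

# elasticsearch.yml
action.auto_create_index: ".monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,logstash-*"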

If you did not set it there earlier, you can also use Kibana under Management > Dev Tools:

PUT /_cluster/settings
{
  "persistent": {
    "action.auto_create_index": ".monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,logstash--*"
  }
}

The above command returns an acknowledgement of whether the setting was applied. If it is true, you can check that it is reflected in the cluster settings with the command below:

GET /_cluster/settings

After all the steps are done, you should see the index created automatically when your input starts shipping events (Redis in your case; in my case it was Filebeat).

  2. Add the index pattern. In previous versions this was shown as "Index pattern"; in the latest version it is "Data views", found under Kibana > Data views.

Create a name, the index pattern, and a timestamp field (if needed), and you're done.

You should now be able to see those logs in the Discover section.

