
I recently upgraded my ELK stack (Logstash 2.3.4 using Redis 3.2.3, Elasticsearch 2.3.5, and Kibana 4.5.4) from Logstash 1.4.1/1.4.2 using Redis 2.8.24, Elasticsearch 1.2.2, and Kibana 3.1.1. The upgrade went well, but afterwards I had some fields with conflicting types. These specific fields were dynamically created by Logstash, so there was no explicit mapping in Elasticsearch. I spent a fair amount of time searching for how to change this. Every online article stated that I couldn't simply change the field type of existing data. Many articles mentioned that I needed to reindex, but failed to explain how. Below are the exact steps I followed to change the type and reindex.
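As an aside, if you aren't sure which field is causing the conflict, the get field mapping API can show how a single field (http_status in my case) is mapped across your indices. Treat this as an optional check; the exact endpoint behaviour may vary by version:

curl -XGET 'http://localhost:9200/logstash-*/_mapping/field/http_status?pretty'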

Get the mapping from the current index needing the field type changed:

curl -XGET http://localhost:9200/logstash-2016.05.30/_mapping?pretty=1 > logstash-2016.05.30

Edit the logstash-2016.05.30 file, removing the second line (the index name) and the second-to-last line (its closing curly bracket). Failing to do this will NOT update the mappings. I suppose if you changed the index name to the new name it would also work, but I didn't try that (I probably should have).
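For reference, this is roughly what that edit does to the shape of the file (the "..." stands in for your actual type and property definitions, which will differ):

{
  "logstash-2016.05.30" : {
    "mappings" : {
      ...
    }
  }
}

becomes:

{
    "mappings" : {
      ...
    }
}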

Edit the logstash-2016.05.30 file and change the type (i.e. long to string, or string to long). You can use the exact definition used by a similar field.

"http_status" : {
  "type" : "string",
  "norms" : {
    "enabled" : false
  },
  "fields" : {
    "raw" : {
      "type" : "string",
      "index" : "not_analyzed",
      "ignore_above" : 256
    }
  }
},

Change to:

"http_status" : {
  "type" : "long"
},

Next, create the new index (append _new or whatever you want):

curl -XPUT  http://localhost:9200/logstash-2016.05.30_new -d @logstash-2016.05.30
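If the index is created successfully, the response should be a simple acknowledgement along the lines of:

{"acknowledged":true}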

Double-check that the mapping was created correctly:

curl -XGET http://localhost:9200/logstash-2016.05.30_new/?pretty

Reindex using the following:

curl -XPOST http://localhost:9200/_reindex -d '{
  "source": {
    "index": "logstash-2016.05.30"
  },
  "dest": {
    "index": "logstash-2016.05.30_new"
  }
}'
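The response from _reindex includes counters that give a quick sanity check before you count the documents yourself; the values below are purely illustrative:

{
  "took" : 12345,
  "total" : 100000,
  "created" : 100000,
  "failures" : [ ]
}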

Count the entries in both indexes (they should be the same):

curl -XGET http://localhost:9200/logstash-2016.05.30/_count
curl -XGET http://localhost:9200/logstash-2016.05.30_new/_count
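If you have jq installed (purely optional), you can pull out just the count field for an easier comparison:

curl -s http://localhost:9200/logstash-2016.05.30/_count | jq .count
curl -s http://localhost:9200/logstash-2016.05.30_new/_count | jq .count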

Once you're satisfied the reindexing was successful, delete the old index:

curl -XDELETE http://localhost:9200/logstash-2016.05.30

Create an alias so the old index name can still be used:

curl -XPOST http://localhost:9200/_aliases -d '{
  "actions": [
    {
      "add": {
        "alias": "logstash-2016.05.30",
        "index": "logstash-2016.05.30_new"
      }
    }
  ]
}'
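To double-check the alias, you can list it; the output should show the new index pointing back to the old name, roughly like this:

curl -XGET http://localhost:9200/_alias/logstash-2016.05.30?pretty

{
  "logstash-2016.05.30_new" : {
    "aliases" : {
      "logstash-2016.05.30" : { }
    }
  }
}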

Lastly, navigate to Kibana, go to Settings, and select the index pattern. Click the reload icon to refresh the field list. All conflicts should be removed.

Obviously, this isn't really a question, unless you feel this could be done another way or there is a problem with doing it this way.

1 Answer

For Elasticsearch 6, a few small changes are required. Otherwise follow all the instructions closely.

To obtain the mapping, use pretty=true instead of pretty=1:

curl -XGET http://localhost:9200/logstash-2016.05.30/_mapping?pretty=true > logstash-2016.05.30

For all XPUT/XPOST requests, the content type must now be set to application/json.

curl -XPUT  http://localhost:9200/logstash-2016.05.30_new \
  -H 'Content-Type: application/json' -d @logstash-2016.05.30
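
As a sketch, the reindex call on Elasticsearch 6 would then look something like this (the alias call needs the same header):

curl -XPOST http://localhost:9200/_reindex \
  -H 'Content-Type: application/json' -d '{
  "source": { "index": "logstash-2016.05.30" },
  "dest": { "index": "logstash-2016.05.30_new" }
}'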