
I have to construct quite a non-trivial (or so it seems right now) query in Elasticsearch. Suppose I have a couple of entities, each with an array field consisting of strings:

1). ['A', 'B']
2). ['A', 'C']
3). ['A', 'E']
4). ['A']

The mapping for the array element is as follows (using dynamic templates):

{
  "my_array_of_strings": {
    "path_match": "stringArray*",
    "mapping": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}
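
For context, this template entry goes into the "dynamic_templates" array of the type mapping; roughly like this (index and type names here are just placeholders):

PUT my_index
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "my_array_of_strings": {
            "path_match": "stringArray*",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}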

The JSON representation of an entity looks like this:

{
  "stringArray": [
    "A",
    "B"
  ]
}

Then I have user input: ['A', 'B', 'C'].

What I want to achieve is to find entities which contain only elements specified in the input - the expected results are ['A', 'B'], ['A', 'C'] and ['A'], but NOT ['A', 'E'] (because 'E' is not present in the user input).

Can this scenario be implemented with Elasticsearch?

UPDATE: Apart from the solution using scripts - which should work nicely, but will most likely slow the query down considerably when many records match - I have devised another one. Below I will try to explain its main idea, without a full code implementation.

One important condition that I failed to mention (and which might have given other users a valuable hint) is that the arrays consist of enumerated elements, i.e. there is a finite set of possible values. This allows each array to be flattened into separate fields on the entity.

Let's say there are 5 possible values: 'A', 'B', 'C', 'D', 'E'. Each of these values becomes a boolean field - true if the array version of the entity contains this element, false otherwise. Then each of the entities could be rewritten as follows (a JSON sketch for entity 1 is shown after the list):

1).
A = true
B = true
C = false
D = false
E = false

2).
A = true
B = false
C = true
D = false
E = false

3).
A = true
B = false
C = false
D = false
E = true

4).
A = true
B = false
C = false
D = false
E = false
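
In JSON, entity 1) would then simply be indexed with plain boolean fields, something like this (index/type names are placeholders):

PUT my_index/my_type/1
{
  "A": true,
  "B": true,
  "C": false,
  "D": false,
  "E": false
}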

With the user input of ['A', 'B', 'C'] all I would need to do is: a) take all possible values (['A', 'B', 'C', 'D', 'E']) and subtract the user input from them -> the result will be ['D', 'E']; b) find records where each of the resulting elements is false, i.e. 'D = false AND E = false'.

This would give records 1, 2 and 4, as expected. I am still experimenting with the code implementation of this approach, but so far it looks quite promising. It has yet to be tested, but I think this might perform faster, and be less resource demanding, than using scripts in the query.
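
As a rough sketch (using the flattened boolean fields above and the same filtered/bool filter style as in the answer below), step b) could be expressed like this:

GET /my_index/_search
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            { "term": { "D": false } },
            { "term": { "E": false } }
          ]
        }
      }
    }
  }
}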

To optimize this a little further, it might be possible not to set the fields that would be 'false' at all, and to modify the previous query to 'D does not exist AND E does not exist' - the result should be the same.
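
In that case the same filter could be written with must_not/exists instead of term filters (again, just a sketch under the same assumptions):

GET /my_index/_search
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must_not": [
            { "exists": { "field": "D" } },
            { "exists": { "field": "E" } }
          ]
        }
      }
    }
  }
}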

  • You can create a custom analyzer in the mapping and then search each word (each value before a comma) separately. Commented Jan 19, 2016 at 11:00
  • Could you provide the mapping of the doc, please? It's easier to give detailed feedback (e.g.: are the values above strings like '[A, E]' or arrays of strings like ['A', 'E']?). I assume ['A', 'E'], but specifying it in the question adds clarity... Commented Jan 19, 2016 at 11:06
  • @Calle, thanks for pointing this out. I will update the question Commented Jan 19, 2016 at 11:33
  • @MobasherFasihy thanks for the suggestion, can you please provide an example? This is an array of strings, not a string - so I do not see any use case for a custom analyzer. Commented Jan 19, 2016 at 11:39

2 Answers


You can achieve this with scripting. This is how it looks:

{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "terms": {
                "name": [
                  "A",
                  "B",
                  "C"
                ]
              }
            },
            {
              "script": {
                "script": "if(user_input.containsAll(doc['name'].values)){return true;}",
                "params": {
                  "user_input": [
                    "A",
                    "B",
                    "C"
                  ]
                }
              }
            }
          ]
        }
      }
    }
  }
}

This Groovy script checks whether the list contains anything apart from ['A', 'B', 'C'] and returns false if it does, so it won't return ['A', 'E']. It is simply checking for a sublist match. The script might take a couple of seconds. You would need to enable dynamic scripting; also, the syntax might be different for ES 2.x - let me know if it does not work.
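
For example, on ES 1.x dynamic Groovy scripting is enabled in elasticsearch.yml like this (on 2.x the equivalent settings are script.inline and script.indexed; please double-check for your exact version):

# elasticsearch.yml (ES 1.x) - allow dynamic (inline) scripts, then restart the node
script.disable_dynamic: false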

EDIT 1

I have put both conditions inside the filter. First, only those documents that contain A, B or C are returned, and then the script is applied only to those documents, so this should be faster than the previous version. More on filter ordering.

Hope this helps!!


10 Comments

Thanks a lot for this idea, I will certainly give it a try! However, I have read that scripting slows down queries considerably. In my case there might be up to 10k results returned, so running a script on each of the results might be quite slow. I have devised another solution, which I will try to implement as well - I will update my post.
Yes, you are right, scripts do slow down queries, but it should not be more than a few seconds (4-5 max, I am guessing). Let us know about your solution too; in the meantime I will also think about something else - maybe we can do something during the analysis phase.
I have edited my answer to make it faster than the previous one. Please have a look.
That's a good addition, I was also thinking about suggesting it. Btw, the 'do_not_return' variable in the Groovy script could be renamed to 'is_included' or something like that - just for better readability.
Yes, I have edited the answer. I have also added a break to avoid further computation once we find the undesired element.

In a similar case I have done the following steps:

First of all, I deleted the index to redefine the analyzer/settings with the Sense plugin.

DELETE my_index

Then I have defined custom analyzer for my_index

PUT my_index
{
  "index" : {
    "analysis" : {
        "tokenizer" : {
            "comma" : {
                "type" : "pattern",
                "pattern" : ","
            }
        },
        "analyzer" : {
            "comma" : {
                "type" : "custom",
                "tokenizer" : "comma"
            }
        }
    }
  }
}

Then I defined the mapping properties inside my code, but you can also do that with Sense; both are the same.

PUT /my_index/_mapping/my_type
{
        "properties" : {
            "conduct_days" : {
                "type" : "string",
                "analyzer" : "comma"
            }
        }
}

Then, for testing, do the steps below:

PUT /my_index/my_type/1
{
    "coduct_days" : "1,2,3"
}

PUT /my_index/my_type/2
{
    "conduct_days" : "3,4"
}

PUT /my_index/my_type/3
{
    "conduct_days" : "1,6"
}

GET /my_index/_search
{
    "query": {"match_all": {}}
}

GET /my_index/_search
{
    "filter": {
       "or" : [
          { 
            "term": {
               "coduct_days": "6"
            }
          },
          {
            "term": {
               "coduct_days": "3"
            }
          }
       ]
    }
}

2 Comments

Interesting... does this work in your case? Using the query you specified, no records could be found. But more surprisingly, it does not find anything using any combination of "conduct_days" - I've tried "1", "2" and "3" in a single query (which should definitely return the 1st record), and still nothing.
Yeah, in my case it is working. conduct_days has values like 1,2,3,4,5, and the input data may contain something like 1, or 1,2,4, and I can search separately for each of the input values.
