
In queries like aggregations and cardinality searches, a timeout can occur.

I noticed that when executing queries from the Python client, the response sometimes contains:

{
   "took": 1200184,
   "timed_out": true,
   "_shards": {
      "total": 84,
      "successful": 84,
      "failed": 0
   }
}

It also returns fewer results than expected.

My main problem is that when a timeout occurs, the response still contains results. I could check whether timed_out is true before parsing the results, but there is probably a better way to do that :)... like raising an exception, or somehow catching the timeout and retrying.
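
For reference, the workaround I have in mind looks roughly like this sketch (PartialResultsError and search_with_retry are names I made up, and the retry policy is only an illustration):

from elasticsearch import Elasticsearch

es = Elasticsearch()

class PartialResultsError(Exception):
    """Raised when the search reports timed_out=True on every attempt."""

def search_with_retry(index, body, retries=3):
    for attempt in range(retries):
        response = es.search(index=index, body=body)
        # "timed_out": true means the shards stopped early and the
        # hits in this response are only a partial result set.
        if not response.get("timed_out"):
            return response
    raise PartialResultsError("still timed out after %d attempts" % retries)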

  • I have a similar problem, except that the "took" value is much lower: only 82 milliseconds. Commented Dec 4, 2014 at 16:39

2 Answers


You can increase the timeout for Elasticsearch using:

es.search(index="my_index",
          doc_type="document",
          body=get_req_body(),
          request_timeout=30)

By default the value is 10 seconds. If, on the other hand, you want to catch the timeout as an exception, you can use a scheduler to check the elapsed time and raise when it exceeds the limit.
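
When the client-side timeout is exceeded, elasticsearch-py raises elasticsearch.exceptions.ConnectionTimeout, so a simple retry loop is also possible. A sketch building on the call above (the retry count and backoff are arbitrary, and get_req_body is the helper from the example):

import time
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import ConnectionTimeout

es = Elasticsearch()

def search_or_fail(retries=3):
    for attempt in range(retries):
        try:
            return es.search(index="my_index",
                             doc_type="document",
                             body=get_req_body(),
                             request_timeout=30)
        except ConnectionTimeout:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(2 ** attempt)  # simple backoff before retrying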


The elasticsearch-py client has a named argument you can pass to set the timeout value for the search request.

But I'd suggest using scrolling to obtain results in such scenarios; it is similar to a cursor for a database query. With a limited scroll size, each request is less likely to time out, and you will be able to fetch all the results instead of receiving partial ones; a minimal sketch follows.
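
For illustration, here is a sketch using the helpers.scan utility that elasticsearch-py ships for scrolling; the index name, query, and sizes are placeholders:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

# helpers.scan wraps the scroll API: it pulls `size` hits per round
# trip and keeps the scroll context alive between requests.
for hit in helpers.scan(es,
                        index="my_index",
                        query={"query": {"match_all": {}}},
                        scroll="5m",
                        size=1000):
    print(hit["_source"])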

Example search call with the timeout parameter:

# note: timeout here is the server-side search timeout;
# request_timeout controls the client-side HTTP timeout
es.search(index="index", doc_type="doc_type", body=body, timeout=50)
