
I want to use the Elasticsearch engine with django-haystack. I was able to install all the modules and packages successfully.

I start the server with

sudo service elasticsearch start

and it reports:

* Starting ElasticSearch Server                                         [ OK ]

After that, I run

python manage.py rebuild_index

which fails with this error:

WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] yes

Removing all documents from your index because you said so.
Failed to clear Elasticsearch index: HTTPConnectionPool(host='127.0.0.1', port=9200): Max retries exceeded with url: /haystack (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
All documents removed.
Indexing 457 finhalls
ERROR:root:Error updating nateapp using default

requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=9200): Max retries exceeded with url: /_bulk (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
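Before digging into Haystack itself, it can help to confirm that something is actually listening on port 9200 at the moment rebuild_index runs (the refused connections above suggest the daemon had not finished starting, or had died). This is a minimal stdlib check, not from the original post:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused and timeouts
        return False

# Haystack's default Elasticsearch backend talks to 127.0.0.1:9200:
print("Elasticsearch reachable:", port_open("127.0.0.1", 9200))
```

Running `curl http://127.0.0.1:9200` from the shell is an equivalent check; if it also refuses the connection, the problem is the Elasticsearch service, not Haystack.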

Elasticsearch log file:

[2013-12-31 17:17:40,635][INFO ][node                     ] [Garrison Kane] version[0.90.9], pid[17314], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:17:40,635][INFO ][node                     ] [Garrison Kane] initializing ...
[2013-12-31 17:17:40,645][INFO ][plugins                  ] [Garrison Kane] loaded [], sites []
[2013-12-31 17:17:44,058][INFO ][node                     ] [Garrison Kane] initialized
[2013-12-31 17:17:44,059][INFO ][node                     ] [Garrison Kane] starting ...
[2013-12-31 17:17:44,195][INFO ][transport                ] [Garrison Kane] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:17:47,255][INFO ][cluster.service          ] [Garrison Kane] new_master [Garrison Kane][467vjQt7RTyOg8IEHSMKBg][inet[/192.241.129.232:9300]], reason: ze$
[2013-12-31 17:17:47,303][INFO ][discovery                ] [Garrison Kane] elasticsearch/467vjQt7RTyOg8IEHSMKBg
[2013-12-31 17:17:47,342][INFO ][http                     ] [Garrison Kane] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:17:47,343][INFO ][node                     ] [Garrison Kane] started
[2013-12-31 17:17:47,372][INFO ][gateway                  ] [Garrison Kane] recovered [0] indices into cluster_state
[2013-12-31 17:17:59,480][INFO ][cluster.metadata         ] [Garrison Kane] [haystack] creating index, cause [api], shards [5]/[1], mappings []
[2013-12-31 17:18:00,194][DEBUG][action.admin.indices.mapping.put] [Garrison Kane] failed to put mappings on indices [[haystack]], type [modelresult]
org.elasticsearch.ElasticSearchIllegalArgumentException: bool field can't be tokenized
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:93)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:76)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseIndex(TypeParsers.java:185)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseField(TypeParsers.java:75)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$TypeParser.parse(BooleanFieldMapper.java:108)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:262)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:201)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:183)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:322)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:318)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute(MetaDataMappingService.java:533)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
[2013-12-31 17:23:36,774][INFO ][node                     ] [Rock Python] version[0.90.9], pid[17565], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:23:36,775][INFO ][node                     ] [Rock Python] initializing ...
[2013-12-31 17:23:36,783][INFO ][plugins                  ] [Rock Python] loaded [], sites []
[2013-12-31 17:23:40,156][INFO ][node                     ] [Rock Python] initialized
[2013-12-31 17:23:40,156][INFO ][node                     ] [Rock Python] starting ...
[2013-12-31 17:23:40,310][INFO ][transport                ] [Rock Python] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:23:43,390][INFO ][cluster.service          ] [Rock Python] new_master [Rock Python][mWPUd96mQyqnlBriAgy-9Q][inet[/192.241.129.232:9300]], reason: zen-di$
[2013-12-31 17:23:43,430][INFO ][discovery                ] [Rock Python] elasticsearch/mWPUd96mQyqnlBriAgy-9Q
[2013-12-31 17:23:43,457][INFO ][http                     ] [Rock Python] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:23:43,458][INFO ][node                     ] [Rock Python] started
[2013-12-31 17:23:44,424][INFO ][gateway                  ] [Rock Python] recovered [1] indices into cluster_state
[2013-12-31 17:35:55,614][INFO ][node                     ] [Wraith] version[0.90.9], pid[755], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:35:55,618][INFO ][node                     ] [Wraith] initializing ...
[2013-12-31 17:35:55,638][INFO ][plugins                  ] [Wraith] loaded [], sites []
[2013-12-31 17:35:59,536][INFO ][node                     ] [Wraith] initialized
[2013-12-31 17:35:59,537][INFO ][node                     ] [Wraith] starting ...
[2013-12-31 17:35:59,708][INFO ][transport                ] [Wraith] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:36:02,785][INFO ][cluster.service          ] [Wraith] new_master [Wraith][5Jgys5vjRcah6LIbcPPecQ][inet[/192.241.129.232:9300]], reason: zen-disco-join ($
[2013-12-31 17:36:02,829][INFO ][discovery                ] [Wraith] elasticsearch/5Jgys5vjRcah6LIbcPPecQ
[2013-12-31 17:36:02,861][INFO ][http                     ] [Wraith] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:36:02,862][INFO ][node                     ] [Wraith] started
[2013-12-31 17:36:04,072][INFO ][gateway                  ] [Wraith] recovered [1] indices into cluster_state
[2013-12-31 17:37:23,469][INFO ][cluster.metadata         ] [Wraith] [haystack] deleting index
[2013-12-31 17:37:23,726][INFO ][cluster.metadata         ] [Wraith] [haystack] creating index, cause [api], shards [5]/[1], mappings []
[2013-12-31 17:37:24,200][DEBUG][action.admin.indices.mapping.put] [Wraith] failed to put mappings on indices [[haystack]], type [modelresult]
org.elasticsearch.ElasticSearchIllegalArgumentException: bool field can't be tokenized
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:93)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:76)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseIndex(TypeParsers.java:185)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseField(TypeParsers.java:75)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$TypeParser.parse(BooleanFieldMapper.java:108)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:262)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:201)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:183)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:322)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:318)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute(MetaDataMappingService.java:533)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
[2013-12-31 17:37:26,856][INFO ][cluster.metadata         ] [Wraith] [haystack] update_mapping [modelresult] (dynamic)
[2013-12-31 17:50:56,446][INFO ][node                     ] [Nikki] version[0.90.9], pid[754], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:50:56,446][INFO ][node                     ] [Nikki] initializing ...
[2013-12-31 17:50:56,462][INFO ][plugins                  ] [Nikki] loaded [], sites []
[2013-12-31 17:50:59,849][INFO ][node                     ] [Nikki] initialized
[2013-12-31 17:50:59,850][INFO ][node                     ] [Nikki] starting ...
[2013-12-31 17:50:59,977][INFO ][transport                ] [Nikki] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:51:03,174][INFO ][cluster.service          ] [Nikki] new_master [Nikki][e-voUaukTnKHaj50uQDsrA][inet[/192.241.129.232:9300]], reason: zen-disco-join (el$
[2013-12-31 17:51:03,227][INFO ][discovery                ] [Nikki] elasticsearch/e-voUaukTnKHaj50uQDsrA
[2013-12-31 17:51:03,264][INFO ][http                     ] [Nikki] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:51:03,265][INFO ][node                     ] [Nikki] started
[2013-12-31 17:51:04,622][INFO ][gateway                  ] [Nikki] recovered [1] indices into cluster_state
[2013-12-31 17:52:20,253][INFO ][cluster.metadata         ] [Nikki] [haystack] deleting index
[2013-12-31 17:52:20,496][INFO ][cluster.metadata         ] [Nikki] [haystack] creating index, cause [api], shards [5]/[1], mappings []
[2013-12-31 17:52:20,973][DEBUG][action.admin.indices.mapping.put] [Nikki] failed to put mappings on indices [[haystack]], type [modelresult]
org.elasticsearch.ElasticSearchIllegalArgumentException: bool field can't be tokenized
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:93)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:76)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseIndex(TypeParsers.java:185)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseField(TypeParsers.java:75)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$TypeParser.parse(BooleanFieldMapper.java:108)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:262)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:201)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:183)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:322)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:318)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute(MetaDataMappingService.java:533)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
[2013-12-31 17:52:23,817][INFO ][cluster.metadata         ] [Nikki] [haystack] update_mapping [modelresult] (dynamic)
[2013-12-31 21:14:55,585][INFO ][node                     ] [Wingfoot, Wyatt] version[0.90.9], pid[753], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 21:14:55,588][INFO ][node                     ] [Wingfoot, Wyatt] initializing ...
[2013-12-31 21:14:55,604][INFO ][plugins                  ] [Wingfoot, Wyatt] loaded [], sites []
[2013-12-31 21:14:59,147][INFO ][node                     ] [Wingfoot, Wyatt] initialized
[2013-12-31 21:14:59,148][INFO ][node                     ] [Wingfoot, Wyatt] starting ...
[2013-12-31 21:14:59,275][INFO ][transport                ] [Wingfoot, Wyatt] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 21:15:02,599][INFO ][cluster.service          ] [Wingfoot, Wyatt] new_master [Wingfoot, Wyatt][lRhJ4RD0Q9uLoHbaYCPFzA][inet[/192.241.129.232:9300]], reason$
[2013-12-31 21:15:02,648][INFO ][discovery                ] [Wingfoot, Wyatt] elasticsearch/lRhJ4RD0Q9uLoHbaYCPFzA
[2013-12-31 21:15:02,682][INFO ][http                     ] [Wingfoot, Wyatt] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 21:15:02,683][INFO ][node                     ] [Wingfoot, Wyatt] started
[2013-12-31 21:15:04,150][INFO ][gateway                  ] [Wingfoot, Wyatt] recovered [1] indices into cluster_state

search_indexes.py

import datetime

from haystack import indexes

from .models import Finhall  # adjust the import path to your app


class FinhallIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    name = indexes.CharField(model_attr='name')
    address = indexes.CharField(model_attr='address')

    def get_model(self):
        return Finhall

    def index_queryset(self, using=None):
        return self.get_model().objects.filter(pub_date__lte=datetime.datetime.now())

I installed Elasticsearch from the .deb package.

What am I missing?

  • Have you verified that elasticsearch is running? Are there any errors in /var/log/elasticsearch/elasticsearch.log? Commented Dec 31, 2013 at 18:57
  • Here's what is in my log file /var/log/elasticsearch/elasticsearch.log at org.elasticsearch.index.mapper.MapperService.parse(MapperService.jav$ at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute($ at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.$ at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecuto$ at java.lang.Thread.run(Thread.java:724) [2013-12-31 17:52:23,817][INFO ][cluster.metadata ] [Nikki] [haystack] $ Commented Dec 31, 2013 at 19:49
  • I'll need more of the log, the error itself is not in that snippet. Please edit your question with the full error Commented Dec 31, 2013 at 21:31
  • kindly check the question Commented Dec 31, 2013 at 21:44
  • Sorry, but the log is still cut off at the right side... Commented Dec 31, 2013 at 22:08

1 Answer

It's a Haystack issue: https://github.com/toastdriven/django-haystack/issues/866

Set indexed=False on your BooleanFields.
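For illustration, assuming the index had a hypothetical boolean field such as `is_active` (none is shown in the posted search_indexes.py), the workaround would look like this sketch (requires django-haystack):

```python
from haystack import indexes


class FinhallIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    # indexed=False keeps the value stored but out of the mapping that
    # triggers the "bool field can't be tokenized" error on ES 0.90.x.
    is_active = indexes.BooleanField(model_attr='is_active', indexed=False)
```

The field stays available on search results; it just isn't sent to Elasticsearch as an analyzed field.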


1 Comment

Not using Elasticsearch for now; I'll check it out later. :)
