I'm trying to extract some data from Elasticsearch with PySpark, and I want to pull only a few fields (not all of them) from each document. For testing, I'm making a POST request from Postman with the following URL and body, and it gives exactly the output I expect. But when I use the same body in my Spark code, it extracts all the fields from the matched documents, which is not what I want. Can anyone tell me what might be causing this behavior? Thanks in advance!
Spark version 2.3, Elasticsearch version 6.2, Postman body type = application/json
This is what I'm doing with Postman:
url: `localhost:9200/test-index4/school/_search`

body:

```json
{
  "query": {
    "ids": {
      "values": ["8", "9", "10"]
    }
  },
  "_source": {
    "includes": ["name"]
  }
}
```
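For reference, here is a minimal sketch of the same search made from Python with the `requests` library (assuming Elasticsearch is reachable at `localhost:9200` as above); like the Postman call, it returns only the `name` field:

```python
import requests

# Same query body as in Postman: select documents by id and
# keep only the "name" field from _source.
query = {
    "query": {
        "ids": {"values": ["8", "9", "10"]}
    },
    "_source": {"includes": ["name"]}
}

resp = requests.post(
    "http://localhost:9200/test-index4/school/_search",
    json=query,  # sends the body with Content-Type: application/json
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"])  # _source contains only "name"
```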
Below is what I'm doing with PySpark:
```python
# Single quotes around the string so the inner double quotes
# of the JSON body don't need escaping.
body = '{"query":{"ids":{"values":["8","9","10"]}},"_source":{"includes":["name"]}}'

df = self.__sql_context.read.format("org.elasticsearch.spark.sql") \
    .option("es.nodes", "localhost") \
    .option("es.port", "9200") \
    .option("es.query", body) \
    .option("es.resource", "test-index4/school") \
    .option("es.read.metadata", "true") \
    .option("es.read.metadata.version", "true") \
    .option("es.read.field.as.array.include", "true") \
    .load()
```
Answer:

Try the `es.read.field.include` config. If `es.read.field.include` is not set, it defaults to null and all fields are returned; the connector does not apply the `_source` filtering from the query body, which is why Postman and Spark behave differently.

Comment from the asker: `es.read.field.include` works, but `es.read.metadata.version` doesn't read the version. Can you please explain why?

Reply: As for the `metadata.version`, I don't know, since you've set both `es.read.metadata` and `es.read.metadata.version` to true, which is the correct configuration from what I can tell.
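To make that concrete, here is a sketch of the reader from the question with `es.read.field.include` applied (using a plain `sql_context` variable in place of `self.__sql_context`). Note that `es.read.field.as.array.include` expects a comma-separated list of field names rather than a boolean, so it is omitted here:

```python
# Keep the query for document selection (the ids filter still applies),
# but do the field projection with es.read.field.include instead of
# the _source block, which the connector does not honor.
body = '{"query":{"ids":{"values":["8","9","10"]}}}'

df = sql_context.read.format("org.elasticsearch.spark.sql") \
    .option("es.nodes", "localhost") \
    .option("es.port", "9200") \
    .option("es.query", body) \
    .option("es.resource", "test-index4/school") \
    .option("es.read.field.include", "name") \
    .option("es.read.metadata", "true") \
    .option("es.read.metadata.version", "true") \
    .load()
```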