
I am parsing a JSON file that has one attribute appearing twice. I want to drop one of the duplicates so I can avoid an ambiguity error. Here is a sample JSON: for example, address1 and Address1 have the same value; the only difference is that the first character is a capital letter. I want to remove one of them while parsing the JSON in Spark Scala.

{
    "ID": 1,
    "case": "12",
    "addresses": {
        "": [{
            "address1": "abc",
            "address2": "bkc",
            "Address1": "abc",
            "Address2": "bk"
        }, {
            "address1": "ede",
            "address2": "ak",
            "Address1": "ede",
            "Address2": "ak"
        }]
    },
    "FirstName": "abc",
    "LastName": "cvv"
}

Could someone guide me on how to remove one of them while parsing the JSON in Spark Scala? I need to automate this: right now we are facing the issue with address, but in the future other attributes may have the same kind of collision. So instead of hardcoding the column names, we need a solution that covers all such cases.

2 Comments
  • Are you sure this is a valid schema? The name of the array is empty (""), which is not a valid property name in JSON syntax. You can try to parse the given JSON with Spark as shown here: stackoverflow.com/questions/38271611/…. Spark will ignore it, since an empty name is not allowed. Commented Dec 8, 2019 at 23:20
  • Hi Alexandros, the array name is addresses. I didn't include the complete JSON that I am receiving; I just provided sample attributes to show how I receive both "address1" and "Address1", and why I want to drop one of them. Commented Dec 9, 2019 at 5:29

1 Answer

val jsonString = """
{
    "ID": 1,
    "case": "12",
    "addresses": [{
    "address1": "abc",
    "address2": "bkc",
    "Address1": "abc",
    "Address2": "bk"
    }, {
    "address1": "ede",
    "address2": "ak",
    "Address1": "ede",
    "Address2": "ak"
    }],
    "FirstName": "abc",
    "LastName": "cvv"
}
"""
// toDS on a local Seq needs the SparkSession implicits in scope
import spark.implicits._

val jsonDF = spark.read.json(Seq(jsonString).toDS)


import org.apache.spark.sql.functions._

// Enable case-sensitive analysis before calling drop, so that
// "address1" and "Address1" are resolved as distinct columns
spark.conf.set("spark.sql.caseSensitive", "true")

jsonDF.withColumn("Addresses", explode(col("addresses")))
  .selectExpr("Addresses.*", "ID","case","FirstName","LastName")
  .drop("address1","address2")
  .show()
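
Note that with spark.sql.caseSensitive set to true, drop("address1", "address2") removes only the lowercase variants, so the flattened output keeps Address1 and Address2 together with ID, case, FirstName, and LastName.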

1 Comment

To make it generic, pass the columns you want to remove to the drop method.
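
As a sketch of that generic approach (the helper name dropCaseDuplicates and the keep-the-first-variant policy are assumptions, not from the answer above): group the column names by their lowercase form and drop every variant after the first in each group. This assumes spark.sql.caseSensitive=true is already set, so drop can target individual variants, and that the DataFrame has already been flattened.

import org.apache.spark.sql.DataFrame

// Hypothetical helper: removes all but the first column in every group
// of names that collide case-insensitively (e.g. address1 vs Address1).
// Requires spark.sql.caseSensitive=true so drop matches exact names.
def dropCaseDuplicates(df: DataFrame): DataFrame = {
  val duplicates = df.columns
    .groupBy(_.toLowerCase)   // "address1" and "Address1" share the key "address1"
    .values
    .flatMap(_.drop(1))       // keep the first variant, collect the rest
    .toSeq
  df.drop(duplicates: _*)
}

Applied to the flattened DataFrame above, this drops whichever of the case-colliding address columns appears later in the schema, without hardcoding the names, which also covers future attributes with the same issue.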
