
I have a CSV file which I am reading in Python, and I am producing this JSON using

data_df_json = data_df.to_json(orient='records', date_format='iso')
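For context, here is a minimal sketch of what I'm running (the CSV contents and column names below are placeholders standing in for my real file):

```python
import io

import pandas as pd

# Placeholder CSV standing in for the real file
csv_text = """A,B,C,D,E,F,G
aaa,nnn,ccc,100000036789562,sdsds,130346,2017-09-05 16:36:30
"""

data_df = pd.read_csv(io.StringIO(csv_text), parse_dates=['G'])

# orient='records' serializes the frame as a JSON array of row objects
data_df_json = data_df.to_json(orient='records', date_format='iso')
print(data_df_json)
```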

[
   {
      "A":"aaa",
      "B":"nnn",
      "C":"ccc",
      "D":100000036789562,
      "E":"sdsds",
      "F":130346,
      "G":"2017-09-05T16:36:30.000Z"
   }
]

I am trying to achieve this

{  
"Items":[
      {  
         "A":"aaa",
         "B":"nnn",
         "C":"ccc",
         "D":100000036789562,
         "E":"sdsds",
         "F":130346,
         "G":"2017-09-05T16:36:30.000Z"
      }
  ]
}

So what I did was insert a default column with every value set to Items, and then do a group by on it:

data_df_json = engagement_data_df.groupby('Items').apply(lambda df: data_df.to_dict(orient='records')).to_json(date_format='iso')

It's giving me the right format, but now with an additional field called Items:

{  
"Items":[
      {  
         "A":"aaa",
         "B":"nnn",
         "C":"ccc",
         "D":100000036789562,
         "E":"sdsds",
         "F":130346,
         "G":"2017-09-05T16:36:30.000Z",
         "Items": "Items"

      }
  ]
} 

I don't want the Items field inside my objects. Is there a better way?


1 Answer


The issue is that when you insert a default column with its values set to Items, data_df contains that new column, which is why the additional Items field appears in each record. You can drop the column before converting to dict, like this:

data_df_json = engagement_data_df.groupby('Items').apply(lambda df: data_df.drop('Items', axis=1).to_dict(orient='records')).to_json(date_format='iso')
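As a self-contained sketch of the same idea (column names made up to match the question; drop is given errors='ignore' so it also works on pandas versions that exclude the grouping column from the group frames):

```python
import json

import pandas as pd

data_df = pd.DataFrame([{
    'A': 'aaa', 'B': 'nnn', 'C': 'ccc',
    'D': 100000036789562, 'E': 'sdsds', 'F': 130346,
}])

# Helper column used only as the grouping key
data_df['Items'] = 'Items'

# Drop the helper column inside the lambda so it does not leak
# into the row dicts; the resulting Series serializes as
# {"Items": [ {...row...}, ... ]}
grouped = data_df.groupby('Items').apply(
    lambda df: df.drop(columns='Items', errors='ignore')
                 .to_dict(orient='records'))
data_df_json = grouped.to_json(date_format='iso')
print(data_df_json)
```

That said, the helper column can be avoided entirely: since to_json(orient='records') already gives you the JSON array, you can simply wrap it, e.g. '{"Items":' + data_df.to_json(orient='records', date_format='iso') + '}' (after dropping the helper column, if present).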