
I've been trying to extract a JSON API response into a CSV, but I'm struggling.

So far the code below prints the value in the terminal:

import requests
import csv
import json
from pprint import pprint
import urllib
import pandas as pd

url = "https://esi.evetech.net/latest/markets/10000002/orders/?datasource=tranquility&order_type=all&page=1"

payload={}
headers = {}
r = {}

response = urllib.request.urlopen(url)

text = response.read()
json_data = json.loads(text)

pprint(json_data)
df = pd.read_json(r)
df.to_csv("output.csv")

However, pandas returns an error about the dictionary class? (Apologies, I'm not really familiar with coding.)

Last question: what would be the logic to continue the URL request beyond page=1 in the url variable (order_type=all&page=1)? I don't know how many pages the system has.

thanks

Can you post the error trace you are getting to the question? Commented Jan 6, 2021 at 12:04

2 Answers


The error is reproducible with the code above and looks like an encoding error with the urllib library. But I was able to load the data successfully using the Python requests module:

import requests
import pandas as pd

url = "https://esi.evetech.net/latest/markets/10000002/orders/?datasource=tranquility&order_type=all&page=1"
response = requests.get(url)
data = response.text
df = pd.read_json(data)
df.to_csv("output.csv")

1 Comment

That worked as well, thanks a lot Justa! I'm trying to figure out a way for the page=1 now.

You've done r = {} and then df = pd.read_json(r), so why would that work?

You can do either one of the following:

# from your code, above
text = response.read()

# create DF from serialized JSON data (a string)
df = pd.read_json(text)

Or:

# from your code, above
text = response.read()
json_data = json.loads(text)

# create DF from loaded JSON data (in your case a list of dicts)
df = pd.DataFrame.from_records(json_data)

And the line import urllib should be import urllib.request. Or just use the 'requests' package, which you've also imported already.
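Putting those fixes together, a corrected version of the original urllib approach might look like this sketch (the live request is left commented out, so the function names and wiring here are illustrative rather than tested against the API):

```python
import json
import urllib.request

import pandas as pd

def fetch_orders(url):
    # Works once urllib.request is imported by its full name
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())

def orders_to_csv(orders, path="output.csv"):
    # orders is a list of dicts, so from_records builds the DataFrame directly
    pd.DataFrame.from_records(orders).to_csv(path, index=False)

url = ("https://esi.evetech.net/latest/markets/10000002/orders/"
       "?datasource=tranquility&order_type=all&page=1")
# orders_to_csv(fetch_orders(url))  # uncomment to hit the live API
```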


Part 2:

To get all the pages, keep incrementing the value of page until you get an error or an empty response. (It's bad form to ask multiple unrelated questions on SO.) See this answer for guidance on how.
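The loop described above might be sketched like this; the page-fetching callable is left abstract so it works with either requests or urllib (the wiring at the bottom against the ESI endpoint is an untested illustration):

```python
def fetch_all_pages(get_page):
    """Collect records from consecutive pages until an empty page.

    get_page is any callable taking a page number and returning a list,
    e.g. lambda p: requests.get(url_template.format(p)).json().
    """
    records = []
    page = 1
    while True:
        data = get_page(page)
        if not data:  # empty page (or an error mapped to []) -> done
            break
        records.extend(data)
        page += 1
    return records

# Example wiring against the endpoint from the question (needs `import requests`):
url_template = ("https://esi.evetech.net/latest/markets/10000002/orders/"
                "?datasource=tranquility&order_type=all&page={}")
# orders = fetch_all_pages(lambda p: requests.get(url_template.format(p)).json())
```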

3 Comments

Perfect, that worked out. I got confused as I've never used pandas before. Any suggestion for the page=1 part?
Btw, if you're happy with the answer, "accept" it.
Edited. Either try writing the code for that with the link provided, or post a new question.
