
I need help with saving data that I read from an API (with an API key) to a CSV file. The code I have is below:

import requests
import pandas as pd
def get_precip(gooddate):
    urlstart = 'http://api.wunderground.com/api/API_KEY/history_'
    urlend = '/q/Switzerland/Zurich.json'
    url = urlstart + str(gooddate) + urlend
    data = requests.get(url).json()

    for summary in data['history']['dailysummary']:
        abc = ','.join((gooddate,summary['date']['year'],summary['date']['mon'],summary['date']['mday'],summary['precipm'], summary['maxtempm'], summary['meantempm'],summary['mintempm']))
        df = pd.DataFrame(data=abc)
        df.to_csv('/home/user/Desktop/2013_weather.csv', index=False)

if __name__ == "__main__":
    from datetime import date
    from dateutil.rrule import rrule, DAILY

    a = date(2013, 1, 1)
    b = date(2013, 12, 31)

    for dt in rrule(DAILY, dtstart=a, until=b):
        get_precip(dt.strftime("%Y%m%d"))

I'm sure it can't work this way, because the data needs to be collected in a list or dictionary before being transformed into a DataFrame, but I'm not sure how to do that here. If I save it to a list, will it give me just one row? Any help is welcome. Thanks.

Take a look at the read_json function of pandas. It also accepts URLs as input, as stated in the docs. – Dec 19, 2017 at 13:05
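
For reference, a minimal sketch of the route the comment above suggests. Because the interesting records sit nested under data['history']['dailysummary'] (the layout assumed by the question's code), flattening that list with json_normalize is usually more convenient than pointing read_json at the raw URL. Note that pd.json_normalize requires pandas 1.0+ (older versions expose it as pandas.io.json.json_normalize), and the URL below reuses the question's placeholder API_KEY:

import requests
import pandas as pd

# same placeholder URL as in the question, for a single date
url = 'http://api.wunderground.com/api/API_KEY/history_20130101/q/Switzerland/Zurich.json'

data = requests.get(url).json()

# flatten the nested daily summary records into a flat table;
# nested keys such as date -> year become dotted column names like 'date.year'
df = pd.json_normalize(data['history']['dailysummary'])

print(df[['date.year', 'date.mon', 'date.mday',
          'precipm', 'maxtempm', 'meantempm', 'mintempm']])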

1 Answer


I think you can return a tuple from get_precip, append the tuples to a list, and then use the DataFrame constructor:

import requests
import pandas as pd

def get_precip(gooddate):
    urlstart = 'http://api.wunderground.com/api/API_KEY/history_'
    urlend = '/q/Switzerland/Zurich.json'
    url = urlstart + str(gooddate) + urlend
    data = requests.get(url).json()

    # dailysummary holds one record per day, so return a tuple for it
    for summary in data['history']['dailysummary']:
        return (gooddate, summary['date']['year'], summary['date']['mon'],
                summary['date']['mday'], summary['precipm'], summary['maxtempm'],
                summary['meantempm'], summary['mintempm'])

if __name__ == "__main__":
    from datetime import date
    from dateutil.rrule import rrule, DAILY

    a = date(2013, 1, 1)
    b = date(2013, 12, 31)

    L = []
    for dt in rrule(DAILY, dtstart=a, until=b):
        tup = get_precip(dt.strftime("%Y%m%d"))
        L.append(tup)

which is the same as:

    L = [get_precip(dt.strftime("%Y%m%d")) for dt in rrule(DAILY, dtstart=a, until=b)]

    cols = ['date','date.year','date.mon','date.mday','precipm','maxtempm', 
            'meantempm','mintempm']     
    df = pd.DataFrame(L, columns=cols)
    print (df.head())

           date date.year date.mon date.mday precipm maxtempm meantempm mintempm
    0  20130101      2013       01        01     0.0        7         2       -2
    1  20130102      2013       01        02     0.0        5         2       -3
    2  20130103      2013       01        03     0.0        4         0       -3
    3  20130104      2013       01        04     0.0        7         5        3

    df.to_csv('/home/user/Desktop/2013_weather.csv', index=False)

7 Comments

Thanks man, I think it's pretty much good, but I got an error: File "weather_req.py", line 35, in get_precip, for summary in data['history']['dailysummary']: KeyError: 'history'. Do you know what that could be?
Did you change API_KEY?
Hmmm, could the reason be that the API limits the number of requests? I tested it with only the first 10 days, b = date(2013, 1, 10), and it works perfectly.
I think I found the problem. Zurich is provided, but some other cities that I need (Zurich was just an example) are not. Belgrade, Serbia (still listed as Yugoslavia on this website) does not have a history. Can you please try Yugoslavia/Belgrade to be sure?
Unfortunately I am on my phone right now, so it will only be possible tomorrow.
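
Regarding the KeyError: 'history' discussed in the comments above, here is a hedged sketch of one way to guard against locations or dates for which the API returns no history block: have get_precip return None in that case and drop those rows before building the DataFrame. The location parameter is added here only for illustration; otherwise this reuses the answer's code:

import requests
import pandas as pd
from datetime import date
from dateutil.rrule import rrule, DAILY

def get_precip(gooddate, location='Switzerland/Zurich'):
    url = ('http://api.wunderground.com/api/API_KEY/history_'
           + str(gooddate) + '/q/' + location + '.json')
    data = requests.get(url).json()

    # some locations have no history data, so guard against the missing key
    for summary in data.get('history', {}).get('dailysummary', []):
        return (gooddate, summary['date']['year'], summary['date']['mon'],
                summary['date']['mday'], summary['precipm'], summary['maxtempm'],
                summary['meantempm'], summary['mintempm'])
    return None  # nothing available for this date/location

if __name__ == "__main__":
    a = date(2013, 1, 1)
    b = date(2013, 12, 31)

    rows = [get_precip(dt.strftime("%Y%m%d")) for dt in rrule(DAILY, dtstart=a, until=b)]
    rows = [row for row in rows if row is not None]  # drop days with no data

    cols = ['date', 'date.year', 'date.mon', 'date.mday',
            'precipm', 'maxtempm', 'meantempm', 'mintempm']
    df = pd.DataFrame(rows, columns=cols)
    df.to_csv('/home/user/Desktop/2013_weather.csv', index=False)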
