I'm teaching myself Python by doing projects, and I'm trying to pull store locations from an API. There are 6000 stores, but the API only returns one location per request. As you can see in my code below, my approach is not very efficient. What is a more efficient way to request all 6000 URLs, starting at http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=1 and ending at http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=6000?
I tried using github.com/ross/requests-futures, but I haven't been able to get it to work.
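From the requests-futures README, the basic pattern is supposed to look something like the sketch below. This is roughly what I attempted (the token is a placeholder, same as in my real code), but it didn't work for me:

from requests_futures.sessions import FuturesSession

session = FuturesSession(max_workers=10)
header = {
    'access_token': '12341234',
    'country_code': 'US',
    'language_code': 'en'}

# Each .get() returns immediately with a Future instead of a Response
futures = [session.get('http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=%s' % n,
                       headers=header)
           for n in range(1, 6001)]

# .result() blocks until that particular response has arrived
with open("myfile.txt", "w") as f:
    for future in futures:
        f.write(future.result().text)

Here is my current plain-requests attempt: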
import requests
from requests import Session

session = Session()

# Build all 6000 store URLs (storeIds 1 through 6000)
urls = ['http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=%s' % n
        for n in range(1, 6001)]

header = {
    'access_token': '12341234',
    'country_code': 'US',
    'language_code': 'en'}

# This line raises the error below: get() expects a single URL string, not a list
r = session.get(urls, headers=header)
dump = r.text

with open("myfile.txt", "w") as f:
    f.write(dump)
Currently I get the following error: requests.exceptions.InvalidSchema: No connection adapters were found
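From what I can tell, the error happens because session.get() wants a single URL string rather than a list. A plain for loop over the ids does run (a sketch with the same placeholder token), but it is painfully slow at 6000 sequential requests:

import requests

session = requests.Session()
header = {
    'access_token': '12341234',
    'country_code': 'US',
    'language_code': 'en'}

# Sequential version: correct, but 6000 round trips one after another
with open("myfile.txt", "w") as f:
    for n in range(1, 6001):
        url = 'http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=%s' % n
        r = session.get(url, headers=header)
        f.write(r.text)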
What I'm essentially trying to do is the code below, just extended to all 6000 stores:
url = "http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=1"
url2 = "http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=2"
url3 = "http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=3"
header = {
'access_token': '12341234',
'country_code': 'US',
'language_code': 'en'}
r = requests.get(url, headers=header)
r2 = requests.get(url2, headers=header)
r3 = requests.get(url3, headers=header)
dump = r.text + r2.text + r3.text
f = open("myfile.txt", "w")
f.write(dump)
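From searching around, I gather threads are the usual answer for lots of small HTTP requests, which I assume is also what requests-futures does underneath. Would Python 3's concurrent.futures be the idiomatic way to do this? Here is a sketch of what I'm imagining (the worker count of 20 is a guess on my part):

import requests
from concurrent.futures import ThreadPoolExecutor

header = {
    'access_token': '12341234',
    'country_code': 'US',
    'language_code': 'en'}

def fetch(n):
    # One worker fetches one store by id
    url = 'http://www.ecommerce.com/stores?serviceTypes=-1&storeIds=%s' % n
    return requests.get(url, headers=header).text

# Run up to 20 requests in flight at once; executor.map yields results in id order
with ThreadPoolExecutor(max_workers=20) as executor, open("myfile.txt", "w") as f:
    for text in executor.map(fetch, range(1, 6001)):
        f.write(text)

Is this a reasonable approach, or is there a better way?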