I'm executing the following Python code, but when I launch many threads the remote API (Google API) returns:
<HttpError 403 when requesting https://www.googleapis.com/prediction/v1.6/projects/project/trainedmodels/return_reason?alt=json returned "User Rate Limit Exceeded">
I have around 20K objects that all need to be processed by the API. This works fine with a small number of objects, but how can I slow the requests down or send them in blocks?
from threading import Thread, Semaphore

collection_ = []
lock_object = Semaphore(value=1)

def connect_to_api(document):
    try:
        api_label = predictor.make_prediction(document)
        return_instance = ReturnReason(document=document)  # Create ReturnReason object
        lock_object.acquire()  # Lock before touching the shared list
        try:
            collection_.append(return_instance)
        finally:
            lock_object.release()
    except Exception, e:
        print e
def factory():
    """Spawn one thread per document and wait for all of them."""
    list_of_docs = file_reader.get_file_documents(file_contents)
    threads = [Thread(target=connect_to_api, args=(doc,)) for doc in list_of_docs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
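Would something along these lines be the right direction? A rough sketch of what I have in mind, limiting the number of concurrent calls and processing the documents in fixed-size blocks; the names connect_to_api_throttled, factory_in_batches, MAX_CONCURRENT and request_slots are mine, and batch_size, pause_seconds and MAX_CONCURRENT are just guesses at values the quota might tolerate:

import time
from threading import Thread, Semaphore

MAX_CONCURRENT = 10                       # guess: how many calls in flight the quota tolerates
request_slots = Semaphore(value=MAX_CONCURRENT)

def connect_to_api_throttled(document):
    # Allow at most MAX_CONCURRENT API calls at the same time
    request_slots.acquire()
    try:
        connect_to_api(document)
    finally:
        request_slots.release()

def factory_in_batches(batch_size=500, pause_seconds=1.0):
    # Launch the threads in blocks of batch_size instead of 20K at once
    list_of_docs = file_reader.get_file_documents(file_contents)
    for start in range(0, len(list_of_docs), batch_size):
        batch = list_of_docs[start:start + batch_size]
        threads = [Thread(target=connect_to_api_throttled, args=(doc,)) for doc in batch]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        time.sleep(pause_seconds)         # back off between blocks

Is this the sensible way to do it, or is there a better pattern for staying under the rate limit?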