I am trying to read CLOB data from a test DB and insert it into a dev DB. I am able to do that, but the performance is very poor: 100K rows take 8 to 12 hours, running from my local machine. I am wondering whether my approach is correct or whether there is a better way of doing it. Below is my code, after the connections are made:
    x = []                            # build the result list once, outside the loop
    for row in rows.fetchall():
        data = row[0].read()          # row is a tuple; the CLOB is its first column
        json_data = json.loads(data)
        x.append(json_data)
This is how I am doing it. I just wanted to know if there is a better way to do it. Stack: Python, OracleDB, cx_Oracle, json. Thanks
`print` is really expensive. You could try `for x, row in enumerate(rows.fetchall()):` and then hide the `print` under `if x % 10000 == 0: print(row)`, but I also don't think that is enough to explain the slowness of `fetchall()`. Beyond that, I have no familiarity with CLOBs to say whether you could get around using `json.loads()` on each row.
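A minimal sketch of that throttled-print idea (assuming your real loop prints each row, which the code above doesn't show):

    for x, row in enumerate(rows.fetchall()):
        json_data = json.loads(row[0].read())
        if x % 10000 == 0:            # log progress only every 10,000 rows
            print(x)

On the CLOB side, cx_Oracle documents two things that tend to dominate this kind of copy job: fetching CLOBs as plain strings via an output type handler (which avoids an extra server round trip per row to read each LOB locator) and inserting with `executemany()` instead of one INSERT per row. Below is a sketch under those assumptions, written for cx_Oracle 8 (older versions use `cx_Oracle.CLOB` and `cx_Oracle.LONG_STRING` instead of the `DB_TYPE_*` constants); `test_conn`, `dev_conn`, `src_table`, `dst_table`, and `json_col` are placeholders, not your actual names:

    import cx_Oracle

    def clob_as_string(cursor, name, default_type, size, precision, scale):
        # Return CLOB columns as plain strings instead of LOB locators
        if default_type == cx_Oracle.DB_TYPE_CLOB:
            return cursor.var(cx_Oracle.DB_TYPE_LONG, arraysize=cursor.arraysize)

    # test_conn / dev_conn: already-open cx_Oracle connections (placeholders)
    test_conn.outputtypehandler = clob_as_string

    read_cur = test_conn.cursor()
    read_cur.arraysize = 1000                 # fetch rows in larger batches
    read_cur.execute("SELECT json_col FROM src_table")

    write_cur = dev_conn.cursor()
    write_cur.setinputsizes(cx_Oracle.DB_TYPE_CLOB)   # values may exceed 4000 chars

    while True:
        batch = read_cur.fetchmany()          # honours read_cur.arraysize
        if not batch:
            break
        # one round trip per 1000 rows instead of one per row
        write_cur.executemany(
            "INSERT INTO dst_table (json_col) VALUES (:1)", batch)

    dev_conn.commit()

Whether you still need `json.loads()` at all depends on what you do with the data; if you are only copying rows from one database to the other, you can skip parsing entirely and move the raw strings.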