sundance's answer might be correct in terms of usage, but the benchmark is just wrong.
As correctly pointed out by moobie, an index 3 already exists in this example, which makes access way quicker than with a non-existent index. Have a look at this:
%%timeit
test = pd.DataFrame({"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]})
for i in range(1000):
    testrow = pd.DataFrame([[0, 0, 0]], columns=test.columns)
    pd.concat([test[:1], testrow, test[1:]])

2.15 s ± 88 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
test = pd.DataFrame({"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]})
for i in range(1000):
    test2 = pd.DataFrame({'A': 0, 'B': 0, 'C': 0}, index=[i + 0.5])
    test.append(test2, ignore_index=False)
    test.sort_index().reset_index(drop=True)

972 ms ± 14.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
test = pd.DataFrame({"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]})
for i in range(1000):
    test3 = [0, 0, 0]
    test.loc[i + 0.5] = test3
    test.reset_index(drop=True)

1.13 s ± 46 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Of course, this is purely synthetic, and admittedly I wasn't expecting these results, but it seems that with non-existent indices .loc and .append perform quite similarly. Just leaving this here.
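For reference, the fractional-index insert benchmarked above can be wrapped in a small helper. This is a sketch, not part of the original answer: `insert_row` is a hypothetical name, and since `DataFrame.append` was removed in pandas 2.0 the sketch uses `pd.concat` instead:

```python
import pandas as pd

def insert_row(df, pos, row):
    """Insert `row` before integer position `pos` using the
    fractional-index trick: give the new row an index of pos - 0.5,
    then sort and renumber. (Hypothetical helper for illustration.)"""
    new = pd.DataFrame([row], columns=df.columns, index=[pos - 0.5])
    return pd.concat([df, new]).sort_index().reset_index(drop=True)

test = pd.DataFrame({"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]})
result = insert_row(test, 1, [0, 0, 0])  # new row becomes the second row
```

Sorting a float index is what makes this work: 0.5 lands between positions 0 and 1, so the row slots in without shifting any existing labels by hand.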
I use time.sleep(30) until it's time to get the next set of data. My worry is that as the DataFrame becomes larger, the load time will start to expand the time between samples. From this question (link) it seems that at a size of 6000 it takes 2.29 seconds, and I would like to keep that number to a minimum if possible. One way to stop the load time from pushing samples back is to sleep against a fixed deadline instead of a fixed duration: next_time += 30, then time.sleep(next_time - time.time()).
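That fixed-deadline pattern can be sketched as follows. This is an illustrative helper, not code from the thread: `run_every` is a hypothetical name, and `time.monotonic()` is used rather than `time.time()` so system clock adjustments can't disturb the schedule:

```python
import time

def run_every(interval, task, iterations):
    """Call `task` every `interval` seconds without drift: the deadline
    advances by a fixed step, so time spent inside `task` eats into the
    sleep rather than delaying later samples."""
    next_time = time.monotonic() + interval
    for _ in range(iterations):
        task()
        # Sleep only for whatever is left of the interval; if the task
        # overran, skip the sleep and catch up on the next tick.
        delay = next_time - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        next_time += interval
```

With `time.sleep(interval)` alone, each iteration takes interval + task time, so a 2.29 s load would stretch the 30 s cadence to ~32.3 s; with the deadline version the cadence stays at 30 s as long as the task finishes within the interval.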