Let me start by saying I'm fairly new to Python.
OK, so I'm running code to perform physics calculations, draw graphs etc. on data files, and I need to loop over a number of files and sub-files. The problem is that each file contains a different number of sub-files (e.g. file 0 has 711 sub-files, file 1 has 660-odd). It obviously complains when I ask for a sub-file at an index x that doesn't exist in a given file, so I was wondering: is there a way to get it to iterate up to the final limit in each file automatically?
What I've got is a nested loop like:
for i in range(0, 120):
    for j in range(0, 715):
        ...  # stuff
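Ideally I'd want the inner limit to come from the file itself, something like the sketch below (count_subfiles here is made up; I don't know if the library I'm using actually exposes anything like it):

for i in range(0, 120):
    # count_subfiles(i) is hypothetical: some way of asking file i
    # how many sub-files it actually contains
    for j in range(0, count_subfiles(i)):
        ...  # stuff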
Cheers in advance for any help, and sorry if my explanation is bad!
Edit: here's some more of the code. What I'm actually doing is calculating/plotting the angular momentum of gas and dark matter particles. These live in halos (indexed by j), and there are a number of files (indexed by i) containing lots and lots of these halos.
import getfiby
import numpy as np
import pylab as pl

angmom = getfiby.ReadFIBY("B")

for i in range(0, 120):
    for j in range(0, 715):
        pos_DM = angmom.getParticleField(49, i, j, "DM", "Coordinates")
        vel_DM = angmom.getParticleField(49, i, j, "DM", "Velocity")
        mass_DM = angmom.getParticleField(49, i, j, "DM", "Mass")
        ...  # more stuff
getfiby is code I was given that retrieves all the data from the files (whose internals I can't see). It's not really a massive problem, as the calculations and plots still run even when the upper limit I put on my range for j exceeds the number of halos in a particular file (I just get: "Halo index out of bounds. Goodbye."). But I wondered if there was a nicer, tidier way of getting it to run.
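The tidiest idea I've had so far is to wrap the calls in a try/except and break out of the inner loop once a file runs out of halos. This is only a sketch under the assumption that getfiby raises an exception for an out-of-bounds index; since I can't see its internals, I don't know the exact exception type (the "Goodbye." message makes me suspect it might call sys.exit, which raises SystemExit):

import getfiby
import numpy as np
import pylab as pl

angmom = getfiby.ReadFIBY("B")

for i in range(0, 120):
    for j in range(0, 715):  # 715 is just a generous upper bound
        try:
            pos_DM = angmom.getParticleField(49, i, j, "DM", "Coordinates")
            vel_DM = angmom.getParticleField(49, i, j, "DM", "Velocity")
            mass_DM = angmom.getParticleField(49, i, j, "DM", "Mass")
        except (IndexError, SystemExit):
            # Assumption: getfiby signals a missing halo one of these ways;
            # if so, break moves us straight on to the next file.
            break
        ...  # more stuff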
Changing range to xrange will perform better under Python 2. If it's Python 3, leave it as is.
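For context: under Python 2, range(n) builds the whole list of indices in memory up front, while xrange(n) yields them lazily, so the swap would just be:

# Python 2 only -- xrange avoids materialising the full list of indices
for i in xrange(120):
    for j in xrange(715):
        pass  # stuff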