You could try:

import numpy as np

a = np.random.rand(100, 200)
indices = np.random.randint(100, size=20)
b = a[np.setdiff1d(np.arange(100), indices), :]
This avoids creating a mask array the same size as your data, as in https://stackoverflow.com/a/21022753/865169. Note that this example produces a 2D array b, rather than the flattened array in that answer.
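As a quick sanity check on the shape, here is a small sketch with fixed, duplicate-free indices (randint can repeat values, so the number of rows actually dropped would otherwise vary):

```python
import numpy as np

a = np.random.rand(100, 200)
indices = np.array([0, 5, 99])  # fixed indices chosen for illustration

# Keep every row whose index is not in `indices`; the result stays 2D.
b = a[np.setdiff1d(np.arange(100), indices), :]
print(b.shape)  # (97, 200)
```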
A crude comparison of the runtime vs. memory cost of this approach against https://stackoverflow.com/a/30273446/865169 suggests that np.delete is faster, while indexing with setdiff1d is much easier on memory consumption:
In [75]: %timeit b = np.delete(a, indices, axis=0)
The slowest run took 7.47 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 24.7 µs per loop
In [76]: %timeit c = a[np.setdiff1d(np.arange(100),indices),:]
10000 loops, best of 3: 48.4 µs per loop
In [77]: %memit b = np.delete(a, indices, axis=0)
peak memory: 52.27 MiB, increment: 0.85 MiB
In [78]: %memit c = a[np.setdiff1d(np.arange(100),indices),:]
peak memory: 52.39 MiB, increment: 0.12 MiB
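The two approaches should also agree on the result: np.delete keeps the remaining rows in their original order, and setdiff1d returns the kept indices sorted, so both select the same rows. A minimal sketch to confirm this:

```python
import numpy as np

a = np.random.rand(100, 200)
indices = np.random.randint(100, size=20)

b = np.delete(a, indices, axis=0)                # delete-based removal
c = a[np.setdiff1d(np.arange(100), indices), :]  # setdiff1d-based indexing

# Both drop the same rows and preserve row order.
print(np.array_equal(b, c))  # True
```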