If you have 4 GB of memory, don't worry about it (the time it would take you to program a less memory-intensive solution isn't worth it). Read the entire dataset in with pd.read_csv and then subset to just the column you need. If you don't have enough memory and you really need to read the file line by line (i.e., row by row), modify this code to keep only the column of interest in memory.
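For example, a minimal sketch of the read-then-subset approach (assuming the file is called data.csv and the column of interest is named my_column; both names are placeholders):

import pandas as pd

df = pd.read_csv('data.csv')   # read the whole file into memory
col = df['my_column']          # then keep just the column you need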
If you have plenty of memory and your problem is that you have multiple files in this format, then I would recommend using the multiprocessing package to parallelize the task.
from multiprocessing import Pool

pool = Pool(processes=your_processors_n)
# map the read-in function over the input file paths (not over DataFrames);
# each call returns a DataFrame, so the result is a list of DataFrames
dataframes_list = pool.map(your_regular_expression_readin_func, [file1, file2, ... filen])
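A more complete sketch, in case it helps (the file names, the four-process pool size, and the body of read_one are all placeholder assumptions; substitute your own regular-expression parsing logic):

import pandas as pd
from multiprocessing import Pool

def read_one(path):
    # placeholder: put your regular-expression read-in logic here
    return pd.read_csv(path)

if __name__ == '__main__':
    paths = ['file1.csv', 'file2.csv', 'file3.csv']  # hypothetical file names
    with Pool(processes=4) as pool:                  # 4 is an assumed pool size
        frames = pool.map(read_one, paths)
    combined = pd.concat(frames, ignore_index=True)  # one DataFrame from all files

The if __name__ == '__main__' guard matters here: on Windows (and anywhere the spawn start method is used), worker processes re-import the module, and without the guard the pool creation would run again in each worker.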