How do I add entries from 100 files (each file contains two columns) and then write them to a new file (which will also contain two columns)?
What have you tried? This is pretty straightforward. – Has QUIT--Anony-Mousse, Feb 5, 2013 at 12:55
Why do you throw a question here and then not come back to reply to any comment or answer? This is definitely not kind. The first answers appeared a couple of minutes after your question. – Thorsten Kranz, Feb 6, 2013 at 9:38
4 Answers
This is very underspecified. It's not clear what your problem is.
Probably you'd do something like:
entries = []
for f in ["file1.txt", "file2.txt", ..., "file100.txt"]:
    # extend, not append: we want one flat list of lines,
    # not a list of per-file line lists
    entries.extend(open(f).readlines())

o = open("output.txt", "w")
o.writelines(entries)
o.close()
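Since the question is underspecified, "add" may instead mean summing the two columns row by row across all files. A minimal sketch of that reading, assuming whitespace-separated numeric columns and the same number of rows in every file (neither assumption is stated in the question; the file names are placeholders):

filenames = ["file1.txt", "file2.txt"]  # placeholders, not from the question

sums = None
for fn in filenames:
    with open(fn) as fh:
        # parse each line into [col1, col2] as floats
        rows = [[float(x) for x in line.split()] for line in fh]
    if sums is None:
        sums = rows  # the first file initializes the running totals
    else:
        sums = [[a + c, b + d] for (a, b), (c, d) in zip(sums, rows)]

with open("output.txt", "w") as out_fh:
    for col1, col2 in sums:
        out_fh.write("%s %s\n" % (col1, col2))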
Wasn't sure if you needed a solution to find all those 100 files as well? If so, here is one approach that reads them all and writes them to a joined file:
from os import walk
from os.path import abspath

lines = []
for root, folders, files in walk('./path/'):
    for file in files:
        fh = open(abspath(root + '/' + file), 'rb')
        lines.append(fh.read())
        fh.close()
    # break if you only want the first level of your directory tree

o = open('output.txt', 'wb')
o.write(b'\n'.join(lines))
o.close()
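Note that joining on b'\n' inserts a newline between the contents of consecutive files; if each file already ends with a newline, that produces a blank line at every file boundary.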
You could also do a "memory efficient" solution:
from os import walk
from os.path import abspath

o = open('output.txt', 'wb')
for root, folders, files in walk('./path/'):
    for file in files:
        fh = open(abspath(root + '/' + file), 'rb')
        # iterate over the file object itself;
        # fh.readline() would iterate over the characters of one line
        for line in fh:
            o.write(line)
        del line
        fh.close()
        del fh
    # break if you only want the first level of your directory tree
o.close()
Much of this is automated (I think) within Python, but lazy or not, if you can, remove objects from memory after closing the files and before reusing variable names... just in case?
A more scalable way, inspired by Torxed's approach:
from os import walk
from os.path import abspath

with open('output.txt', 'wb') as o:
    for root, folders, files in walk('./path/'):
        for filename in files:
            with open(abspath(root + '/' + filename), 'rb') as i:
                for line in i:
                    o.write(line)
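If you don't actually need line-by-line iteration, shutil.copyfileobj streams each file into the output in buffered chunks instead; a minimal variant of the same walk, keeping the './path/' placeholder from above:

from os import walk
from os.path import join
from shutil import copyfileobj

with open('output.txt', 'wb') as o:
    for root, folders, files in walk('./path/'):
        for filename in files:
            with open(join(root, filename), 'rb') as i:
                copyfileobj(i, o)  # copies in buffered chunks rather than line by line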
Do you want to chain them? I.e., do you want all lines of file 1, then all lines of file 2, ...? Or do you want to merge them: line 1 of file 1, line 1 of file 2, ...?
For the first case:
from itertools import chain

filenames = ...
file_handles = [open(fn) for fn in filenames]

with open("output.txt", "w") as out_fh:
    # chain.from_iterable yields the lines of each handle in turn;
    # chain(file_handles) would yield the handles themselves
    for line in chain.from_iterable(file_handles):
        out_fh.write(line)

for fh in file_handles:
    fh.close()
For the second case:
from itertools import izip_longest  # zip_longest in Python 3

filenames = ...
file_handles = [open(fn) for fn in filenames]

with open("output.txt", "w") as out_fh:
    # pad exhausted files with None so shorter files don't cut the merge short
    for lines in izip_longest(*file_handles, fillvalue=None):
        for line in lines:
            if line is not None:
                out_fh.write(line)

for fh in file_handles:
    fh.close()
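As a concrete illustration of the merge order (hypothetical contents, not from the question): if file 1 holds the lines A1, A2 and file 2 holds B1, B2, B3, the output is A1, B1, A2, B2, B3; the None values padded in for the exhausted shorter file are skipped by the is not None check.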
Important: Never forget to close your files!
As @isedev pointed out, this approach is OK for 100 files, but since I open all the handles immediately, it won't work for thousands.
If you want to overcome this problem, only option 1 (chaining) is reasonable...
filenames = ...
with open("output.txt", "w") as out_fh:
    for fn in filenames:
        with open(fn) as fh:
            for line in fh:
                out_fh.write(line)
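This version holds only one input file open at a time, and the with blocks close both the current input and the output automatically, so it scales to arbitrarily many input files.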