I have managed to import multiple CSV files (contained in one folder) into an SQLite database - thanks to the very useful feedback I received on my previous question on this forum.
A16_B1_T5 and A16_B1_T6 contain data from the same sensor, measuring temperature and humidity. However, they were collected at different times of the year, so they always overlap significantly (e.g. T5 might hold data collected from April to October 2015, while T6 holds July to December 2015).
I am now trying to merge two or more tables (originally corresponding to separate CSV files) into one. For the reference example, A16_B1_T5 and A16_B1_T6 should be merged into A16_B1_T (or A16_B1_TT). This would mean appending the rows from both tables as well as overwriting/deleting the duplicate ones.
Any tips on how to do it? The original working code for batch-importing the CSV files into SQLite is the following:
import csv
import sqlite3
import glob
import os

def do_directory(dirname, db):
    # import every CSV file found in the given folder
    for filename in glob.glob(os.path.join(dirname, '*.csv')):
        do_file(filename, db)

def do_file(filename, db):
    with open(filename) as f:
        with db:
            data = csv.DictReader(f)
            cols = data.fieldnames
            # one table per file, named after the file (minus extension)
            table = os.path.splitext(os.path.basename(filename))[0]
            sql = 'drop table if exists "{}"'.format(table)
            db.execute(sql)
            sql = 'create table "{table}" ( {cols} )'.format(
                table=table,
                cols=','.join('"{}"'.format(col) for col in cols))
            db.execute(sql)
            sql = 'insert into "{table}" values ( {vals} )'.format(
                table=table,
                vals=','.join('?' for col in cols))
            db.executemany(sql, (list(map(row.get, cols)) for row in data))

if __name__ == '__main__':
    connection = sqlite3.connect('C:/ROAST/3_ANALYSIS/03_SQL-PY/primo.db')
    do_directory('C:/ROAST/3_ANALYSIS/03_SQL-PY/A08_csv', connection)
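To clarify what I mean by "append as well as overwrite/delete duplicate data", here is a minimal sketch of the kind of merge I have in mind, using SQL UNION (which keeps only distinct rows). The merge_tables helper, the column names and the sample data below are illustrative, not part of my actual code:

```python
import sqlite3

def merge_tables(db, sources, target):
    """Merge identically-structured tables into one new table,
    dropping exact duplicate rows (UNION keeps distinct rows only)."""
    with db:
        db.execute('drop table if exists "{}"'.format(target))
        union = ' union '.join('select * from "{}"'.format(s) for s in sources)
        db.execute('create table "{}" as {}'.format(target, union))

# demo with an in-memory database and overlapping sample data
db = sqlite3.connect(':memory:')
db.execute('create table "A16_B1_T5" (ts, temp, hum)')
db.execute('create table "A16_B1_T6" (ts, temp, hum)')
db.executemany('insert into "A16_B1_T5" values (?,?,?)',
               [('2015-04-01', 20.1, 55), ('2015-07-01', 25.3, 60)])
db.executemany('insert into "A16_B1_T6" values (?,?,?)',
               [('2015-07-01', 25.3, 60), ('2015-12-01', 5.2, 80)])
merge_tables(db, ['A16_B1_T5', 'A16_B1_T6'], 'A16_B1_TT')
# the overlapping 2015-07-01 row should appear only once in the result
rows = db.execute('select count(*) from "A16_B1_TT"').fetchone()[0]
```

This only handles rows that are exact duplicates across the two tables; I am not sure how to handle rows with the same timestamp but slightly different readings.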