
I have a table that looks like this:

part    min    max    unitPrice
A       1      9      10
A       10     99     5
B       1      9      11
B       10     99     6
...

I also have a production table, and I need to insert the data above into it. When I do a select statement on one table and fetch the records, I have a hard time inserting them into the other table.

Say

cursor_table1.execute('select part, min, max, unitPrice, now() from table1')
for row in cursor_table1.fetchall():
    part, min, max, unitPrice, now = row
    print part, min, max, unitPrice, now

The result turns out to be

'416570S39677N1043', 1L, 24L, 48.5, datetime.datetime(2018, 10, 8, 16, 33, 42)

I know Python smartly figured out the type of every column, but I actually just want the raw content, so that I can do something like this:

cursor_table1.execute('select part, min, max, unitPrice, now() from table1')
for row in cursor_table1.fetchall():
    cursor_table2.execute('insert into table2 values ' + str(tuple(row)))

The question is: how can I simply select rows from one table and insert them into another?

Let me know if I did not describe my question clearly; I can add extra info if you want.

4 Answers


It might be a bit late to answer this question, but I had the same problem and landed on this page. I happened to find a different answer and figured it might be helpful to share it with others who have the same problem.

I have two MySQL servers, one on a Raspberry Pi and another on a VPS, and I had to sync data between the two by reading data on the RPi and inserting it into the VPS. I did it the usual way, writing a loop that fetched the records one by one and inserted them, and it was really slow: it took about 2 minutes for 2000 rows.

I solved this problem by using the executemany function, passing it all the tuples returned by the select, obtained with fetchall:

rows = x.fetchall()  # x: cursor on the source connection
y.executemany("insert into table2 (f1, f2, f3) values (%s, %s, %s)", rows)  # y: cursor on the destination

And it was super fast 😀: it took about 2 seconds for 5000 records.
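The fetchall/executemany pattern above can be shown end to end. Since two real MySQL servers can't be demonstrated here, this sketch uses Python's built-in sqlite3 module as a stand-in for the two connections (note that sqlite uses `?` placeholders where MySQL drivers use `%s`); the table and column names are illustrative:

```python
import sqlite3

# Two in-memory databases stand in for the two MySQL servers.
src = sqlite3.connect(':memory:')
dst = sqlite3.connect(':memory:')

src.execute('create table table1 (f1 text, f2 integer, f3 real)')
src.executemany('insert into table1 values (?, ?, ?)',
                [('A', 1, 10.0), ('B', 10, 5.5)])
dst.execute('create table table2 (f1 text, f2 integer, f3 real)')

# Fetch everything from the source, then insert it in one batch.
rows = src.execute('select f1, f2, f3 from table1').fetchall()
dst.executemany('insert into table2 (f1, f2, f3) values (?, ?, ?)', rows)
dst.commit()

print(dst.execute('select count(*) from table2').fetchone()[0])  # 2
```

The speedup comes from the driver sending the rows as one batch instead of one round trip per row.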


1 Comment

close to perfect, but the insert statement should not be terminated with a semicolon

If you wanted all of the data to pass through Python, you could do the following:

cursor_table1.execute('SELECT part, min, max, unitPrice, NOW() FROM table1')
for row in cursor_table1.fetchall():
    # Let the driver bind the parameters: it handles quoting of the
    # datetime (and any special characters) and avoids SQL injection.
    cursor_table2.execute('INSERT INTO table2 VALUES (%s, %s, %s, %s, %s)', row)

Comments


If you don't need to do any calculations on the data selected from table1 and you are only inserting it into the other table, then you can rely on MySQL and run an insert ... select statement. The query code would look like this:

cursor_table1.execute('insert into table2 (part, min, max, unitPrice, date) select part, min, max, unitPrice, now() from table1')

EDIT: After learning that the tables are on different servers, I would suggest using the executemany method to insert the data, as it runs faster.

First build a list of tuples containing all the data to be inserted, then run the executemany query.
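The two steps above can be sketched as follows. Rather than materializing every row at once, this variant fetches in fixed-size batches with fetchmany, so memory stays bounded on large tables; Python's built-in sqlite3 stands in for the two MySQL connections (sqlite uses `?` placeholders where MySQL drivers use `%s`), and the batch size of 1000 is an arbitrary choice:

```python
import sqlite3

# In-memory stand-ins for the source and destination servers.
src = sqlite3.connect(':memory:')
dst = sqlite3.connect(':memory:')
src.execute('create table table1 (part text, "min" integer, "max" integer, unitPrice real)')
src.executemany('insert into table1 values (?, ?, ?, ?)',
                [('A', 1, 9, 10.0), ('A', 10, 99, 5.0),
                 ('B', 1, 9, 11.0), ('B', 10, 99, 6.0)])
dst.execute('create table table2 (part text, "min" integer, "max" integer, unitPrice real)')

cur = src.execute('select part, "min", "max", unitPrice from table1')
while True:
    batch = cur.fetchmany(1000)  # a list of tuples, at most 1000 at a time
    if not batch:
        break
    dst.executemany('insert into table2 (part, "min", "max", unitPrice) values (?, ?, ?, ?)',
                    batch)
dst.commit()
```

The column names min and max are quoted because they collide with SQL function names; with friendlier column names the quotes can be dropped.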

1 Comment

the problem is that they are not on the same server... table1 and table2 cannot see each other.

I expect that several answers here will give you trouble if you have more data than you do memory.

Maybe this doesn't count as solving the problem in Python, but I do this:

from sh import bash

# ... argparse setup and table listing omitted; they provide args, begin, end and tables_to_sync ...

for table_name in tables_to_sync:

    dump = 'mysqldump -h{host} -u{user} -p{password} {db} {table} '.format(
            host=args.remote_host,
            user=args.remote_user,
            password=args.remote_password,
            db=args.remote_database,
            table=table_name,
            )

    flags = '--no-create-info --lock-tables=false --set-gtid-purged=OFF '

    condition = '--where=\'id > {begin} and id <= {end}\' > {table}.sql '.format(
            begin=begin,
            end=end,
            table=table_name
            )

    bash(['-c', dump + flags + condition])


    load = 'mysql -u{user} -p{password} {db} < {table}.sql'.format(
            user=args.local_user,
            password=args.local_password,
            db=args.local_database,
            table=table_name
            )

    bash(['-c', load])

If you're worried about performance, you might consider cutting out the middleman entirely and using the FEDERATED storage engine, but that too would be a non-Python approach.
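For reference, a FEDERATED setup would look roughly like the sketch below: the local server gets a table definition that proxies the remote one, after which a plain INSERT ... SELECT runs entirely inside MySQL. The host, credentials, and column types here are placeholder assumptions, and the FEDERATED engine must be enabled on the local server:

```sql
-- Local proxy for table1 on the remote host (types and connection
-- string are illustrative; adjust to match the real schema).
CREATE TABLE table1_remote (
  part      VARCHAR(32),
  min       INT,
  max       INT,
  unitPrice DECIMAL(10,2)
) ENGINE=FEDERATED
  CONNECTION='mysql://user:password@remote_host:3306/remote_db/table1';

-- The cross-server copy then needs no Python loop at all:
INSERT INTO table2 (part, min, max, unitPrice, date)
SELECT part, min, max, unitPrice, NOW() FROM table1_remote;
```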

Comments
