I currently have Perl code that looks like this:
# keep only the header columns that exist in %column_mapping
@valid = grep { defined($column_mapping{ $headers[$_] }) } 0 .. $#headers;
...
# build the INSERT once, with one placeholder per mapped column
my $sql = sprintf 'INSERT INTO tablename ( %s ) VALUES ( %s )',
    join( ',', map { $column_mapping{$_} } @headers[@valid] ),
    join( ',', ('?') x scalar @valid );
my $sth = $dbh->prepare($sql);
...
# for each input line, bind only the values in the mapped columns
my @row = split /,/, <INPUT>;
$sth->execute( @row[@valid] );
(Taken from mob's answer to a previous question.)
That is basically building a SQL INSERT statement dynamically from CSV data, picking only the CSV columns whose headers appear in my column mapping.
I have been looking for examples of how to do an INSERT statement with multiple rows of data at once.
My Perl script needs to run a few hundred million INSERT statements, and doing them one at a time seems really slow, especially since the server I am running it on only has 6 GB of RAM and a slowish internet connection.
Is there a way I can upload more than one row of data at a time, so that a single INSERT statement uploads maybe 50 or 100 rows at once? I can't figure out how to do this with Perl DBI.
You can build a single INSERT statement that covers multiple rows by repeating the placeholder group "(?, ?, ?)" a number of times based on the size of the array, then passing all of the rows' values to one execute call.

Also, avoid calling prepare inside a loop. You haven't shown all of your code, so I don't know if this applies to you, but you should make sure you're only calling prepare once for a given query.

Finally, LOAD DATA INFILE is likely to be faster than a series of compound inserts. To do this, parse your raw data as you're doing now and write it out to a CSV file, then load it with LOAD DATA INFILE. At which point this approach becomes faster than compound inserts depends on your application and your database setup, but you can get a significant performance boost this way.
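Here is a minimal sketch of the compound-insert approach, continuing from the code in the question. The batch size of 100, the %sth_for cache, and the sth_for_rows helper are illustrative names and choices of mine, not part of your original code or of DBI itself:

# Continues from the setup above: $dbh, %column_mapping, @headers and @valid
# are assumed to already exist.
my $batch_size = 100;
my @cols       = map { $column_mapping{$_} } @headers[@valid];
my $row_slots  = '(' . join( ',', ('?') x @cols ) . ')';   # e.g. "(?,?,?)"

# Cache one prepared statement per distinct row count, so prepare is never
# called inside the per-row loop.
my %sth_for;
sub sth_for_rows {
    my ($n) = @_;
    $sth_for{$n} ||= $dbh->prepare(
        sprintf 'INSERT INTO tablename ( %s ) VALUES %s',
            join( ',', @cols ),
            join( ',', ($row_slots) x $n )
    );
    return $sth_for{$n};
}

my @pending;                       # flattened values for the rows collected so far
while ( my $line = <INPUT> ) {
    chomp $line;
    my @row = split /,/, $line;
    push @pending, @row[@valid];
    if ( @pending / @cols >= $batch_size ) {
        sth_for_rows($batch_size)->execute(@pending);
        @pending = ();
    }
}
# Flush the leftover rows (fewer than $batch_size of them).
sth_for_rows( @pending / @cols )->execute(@pending) if @pending;

Caching one statement handle per row count means the final, smaller batch gets its own prepared statement instead of forcing a re-prepare of the main one.

For the LOAD DATA INFILE route, the idea would be to write the filtered rows out to a file first and then load that file in one statement. The file name below is a placeholder, and with DBD::mysql you may need mysql_local_infile enabled in the DSN for the LOCAL variant to work:

# Hypothetical example: /tmp/filtered_rows.csv has already been written out
# containing only the mapped columns, in the same order as @cols.
$dbh->do(q{
    LOAD DATA LOCAL INFILE '/tmp/filtered_rows.csv'
    INTO TABLE tablename
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
});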