
I have a directory with a bunch of .sql files that are mysqldump exports of each database on my server.

e.g.

database1-2011-01-15.sql
database2-2011-01-15.sql
...

There are quite a lot of them actually.

I need a shell script (or probably just a one-liner) that will import each database.

I'm running on a Linux Debian machine.

I'm thinking there is some way to pipe the output of an ls into a find command or something...

Any help and education is much appreciated.

EDIT

So ultimately I want to automatically import one file at a time into the database.

E.g. if I did it manually on one it would be:

mysql -u root -ppassword < database1-2011-01-15.sql

5 Answers


cat *.sql | mysql? Do you need them in any specific order?

If you have too many to handle this way, then try something like:

find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch

This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file itself instead of having everything piped through stdin.
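To see what the pipeline actually feeds to mysql, you can preview the awk stage on its own (the filenames below are stand-ins): each path found becomes a source statement that mysql then executes.

```shell
# Illustrative preview with made-up filenames: the awk stage turns each
# path into a "source" statement, one per line, for mysql to execute.
printf '%s\n' ./database1-2011-01-15.sql ./database2-2011-01-15.sql |
  awk '{ print "source",$0 }'
# prints:
# source ./database1-2011-01-15.sql
# source ./database2-2011-01-15.sql
```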


7 Comments

In the end I used the cat *.sql approach, but broke it down by letter so there wasn't too much data at once, e.g. cat a*.sql | mysql -u root -ppass
I used ls -1 *.sql | awk '{ print "source",$0 }' | mysql --batch -u {username} -p{password} {dbname}, as I named my sql files sequentially and wanted to execute them in that order
@Luracast I used ls -1 *.sql | awk '{ print "source",$0 }' | mysql --batch -u {username} -p {dbname} to get it working. The password needs to be entered in the console when mysql prompts for it, not in the command itself; MySQL wasn't accepting the password in the command for me.
@satya I believe that you can enter the password on the command line if you use --password=PA55w0rd instead of -p. I haven't tinkered with MySQL in quite some time, but I'm pretty sure that would work.
@D.Shawley and @satya - you can enter the password on the command line with -p you just omit the space. e.g. mysql -u me -pPA55w0rd

A one-liner that reads in all the .sql files and imports them:

for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done

The only trick is the bash substring replacement to strip out the .sql to get the database name.
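To see that substitution in isolation (a sketch with a made-up filename), here is the pattern-replacement form from the one-liner alongside the suffix-removal form, which only ever strips a trailing .sql:

```shell
# Made-up filename for illustration; both expansions yield the database name.
SQL="database1-2011-01-15.sql"
echo "${SQL/.sql/}"   # bash pattern replacement: database1-2011-01-15
echo "${SQL%.sql}"    # suffix removal, anchored at the end: database1-2011-01-15
```

The ${SQL%.sql} form is slightly safer, since it can only strip .sql from the end of the name rather than the first match anywhere in it.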

1 Comment

Like a charm, thanks for this! Worked better than the accepted answer for me.

There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:

for i in *.sql
do
  echo "file=$i"
  mysql -u admin_privileged_user --password=whatever your_database_here < $i
done

mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.

I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.

Comments


I don't remember the exact mysql syntax, but it will be something like this:

find . -name '*.sql' | xargs mysql ...

3 Comments

From doing a bit of research, that is what I'm coming up with too, e.g. find . -name '*.sql' | xargs mysql -u root -ppassword. Would that work?
And -h host if you are on a remote server; you probably also need to specify the database name
The database is referenced in each backup file, so the single line above already works, and it is localhost

I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.

Here it is on GitHub

It's simple and straightforward; it allows you to specify mysql connection parameters, and will decompress gzipped sql files on the fly. It assumes you have a file per database, and that the base of the filename is the desired database name.

So:

myload foo.sql bar.sql.gz

will create the databases "foo" and "bar" (if they don't exist) and import the sql file into each.

For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).
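For anyone who can't pull the script from GitHub, a minimal sketch of the same idea might look like the following. This is not the actual myload script, just an assumed shape: derive the database name from the filename, create the database, and pick gunzip or a plain redirect based on the extension. The mysql calls are left commented so the sketch runs without a server.

```shell
#!/bin/sh
# Hypothetical sketch of a loader like "myload" (not the actual script):
# the base of each filename, minus .sql or .sql.gz, is the database name.
dbname_for() {
  base=$(basename "$1")
  base=${base%.gz}
  printf '%s\n' "${base%.sql}"
}

for f in "$@"; do
  db=$(dbname_for "$f")
  # mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"   # uncomment to run for real
  case $f in
    *.gz) echo "would import (gunzip) $f into $db" ;;  # gunzip -c "$f" | mysql "$db"
    *)    echo "would import $f into $db" ;;           # mysql "$db" < "$f"
  esac
done
```

Invoked as sketched, ./myload foo.sql bar.sql.gz would report an import of foo.sql into "foo" and a decompressed import of bar.sql.gz into "bar".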

Comments
