
When using Python, can SQLite be used as a production database to manage, say, 10,000 database files (each in a separate file of size 500 MB)?

Only one thread will be used to write data to the database files (no concurrency).

Are there alternative libraries that would work better / faster / more reliably?

  • 1
    I don't think anything can deal with that sanely. Even 10k tables in one database is in the "big hairy ERP" region - 10k different databases sounds like a nightmare. How do you do a schema upgrade properly? Commented Jun 5, 2011 at 18:51
  • @Mat: there will not be any schema upgrade; these will basically be individual files that each will be accessed / handled separately. No joins / no schema changes; just individual files. Commented Jun 5, 2011 at 19:11
  • @Mat, he is not talking about 10k tables, just 10k different SQLite files in the filesystem, something that doesn't affect SQLite at all. Since he is only working with one of the files at a time, we are basically talking "will SQLite work with a 500 MB database in a reliable and safe manner?" here Commented Jun 5, 2011 at 21:12

2 Answers


Maybe you should take a look at the SQLite page titled "Appropriate Uses For SQLite" (sqlite.org/whentouse.html). To quote:

The basic rule of thumb for when it is appropriate to use SQLite is this: Use SQLite in situations where simplicity of administration, implementation, and maintenance are more important than the countless complex features that enterprise database engines provide. As it turns out, situations where simplicity is the better choice are more common than many people realize.

Another way to look at SQLite is this: SQLite is not designed to replace Oracle. It is designed to replace fopen().


1 Comment

Thanks. It appears to be a good fit. Do you know if it can handle 10,000 database files of size ~500 MB each? I will only be working with one file at a time, yet would like to get decent performance. Most important of all, are there any integrity issues (files getting corrupted, etc.)?

If you are dealing with one SQLite database at a time, there is no limit on the number of database files you can handle. Just make sure you clean up properly (close each database connection) before you open the next database, and you should see no problems whatsoever.
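
For illustration, here is a minimal sketch of that one-database-at-a-time pattern using Python's built-in sqlite3 module. The db_paths list and the meta table are made-up placeholders; the point is that each connection is fully closed before the next file is opened:

    import sqlite3
    from contextlib import closing

    def process_all(db_paths):
        # Open each database file in turn; never hold two connections at once.
        for path in db_paths:
            with closing(sqlite3.connect(path)) as conn:
                with conn:  # one transaction: commits on success, rolls back on error
                    conn.execute("UPDATE meta SET processed = 1")
            # the connection is closed here, before the next file is opened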

Opening one database at a time makes this no different from using just one database file. SQLite is a great format; I have yet to see any integrity issues, and I've abused the format quite extensively, including rsyncing an updated database in place before re-opening it (the overwritten database was only ever read from), or performing a complete clear and rebuild from a second process (wrapped in one big transaction; again, the first process only ever read from it). The one thing you shouldn't do with it is store it on a network share.
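
A hedged sketch of that clear-and-rebuild pattern, with the whole rebuild wrapped in one transaction so a reading process never sees a half-empty database. The table and column names here are invented for the example:

    import sqlite3

    def rebuild(path, fresh_rows):
        conn = sqlite3.connect(path)
        try:
            with conn:  # BEGIN ... COMMIT around the entire clear and rebuild
                conn.execute("DELETE FROM records")
                conn.executemany(
                    "INSERT INTO records (key, value) VALUES (?, ?)", fresh_rows
                )
        finally:
            conn.close()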

As for size limits, take a look at http://www.sqlite.org/limits.html; the SQLite project takes its testing seriously, and the database limits are included. With a maximum BLOB size of 2 GB, they test databases that are at least that large in their test suite, so databases of up to 500 MB should be a breeze to deal with.
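
If you want to reassure yourself about file integrity on each open, SQLite ships a built-in check you can run from Python. A small sketch (the function name is mine; PRAGMA integrity_check is the real SQLite pragma):

    import sqlite3
    from contextlib import closing

    def is_intact(path):
        # PRAGMA integrity_check scans the whole file and returns the
        # single row 'ok' when no corruption is found.
        with closing(sqlite3.connect(path)) as conn:
            (result,) = conn.execute("PRAGMA integrity_check").fetchone()
            return result == "ok"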

2 Comments

Thank you. Can SQLite easily handle a database file of size 500 MB?
See sqlite.org/limits.html; the maximum BLOB size is 2 GB. That limit is well tested, and it's also four times your upper database size. I'd say that's a resounding "yes!".
