
I am creating an application that uses MySQL and PHP. My current web hosting provider limits each MySQL database to 1 GB, but I am allowed to create many 1 GB databases. Even if I were able to find another web hosting provider that allowed larger databases, I wonder: how are data integrity and speed affected by larger databases? Is it better to keep databases small in terms of disk size? In other words, what is the best-practice method of storing the same data (all text) from thousands of users? I am new to database design and planning. Eventually, I would imagine that a single database holding data from thousands of users would grow inefficient, and that the data should optimally be distributed among smaller databases. Do I have this correct?

On a related note, how would my application know when to create another table (or switch to another table that was created manually)? For example, if one database filled up with 1 GB of data, I would want my application to keep working without any service delays. How would I redirect the input of data from a table in the first database into a second, newly created database?

Similarly, suppose a user joins the website in 2011 and creates 100 records of information, thousands of other users do the same, and the 1 GB database becomes filled. Later on, that original user adds an additional 100 records, which are created in another 1 GB database. How would my PHP code know which database to query for the two sets of 100 records? Would this be managed automatically in some way on the MySQL end? Would it need to be managed in the PHP code with IF/THEN/ELSE statements? Is this a service that some web hosting providers offer?

4 Comments

  • This is an extraordinarily broad question. Books have been written about your first paragraph ... Commented Dec 9, 2011 at 1:04
  • 1GB is a boatload of space for textual storage; start putting in binary data and all bets are off. Commented Dec 9, 2011 at 1:04
  • You should find another hosting provider that allows you to keep larger databases. 1GB isn't even a large database. As you note in your third paragraph, splitting into 1GB chunks leads only to insanity. Commented Dec 9, 2011 at 1:05
  • Also, I note "…storing the same data…". That might be a problem. If the data is all the same, it should only be stored once. You should read up on database design, including normalization. Your question is far too broad to reasonably be answered on SO; answering it would take at least one entire book. Commented Dec 9, 2011 at 1:07

2 Answers

Answer 1

This is a very abstract question, and I'm not sure generic Stack Overflow is the right place to ask it.

In any case: what is the best-practice method of storing? How about: in a file on disk. Keep in mind that a database is just a glorified file with fancy 'read' and 'write' commands.

Optimization is hard; you can only ever trade one thing for another: CPU for memory usage, read speed for write speed, bulk data storage for speed. (Or get a better hosting provider and make your databases as large as you want ;) )

To answer your second question: if you do go with the multiple-database approach, you will need to set up some system to 'migrate' users from one database to another when one gets full. If you reach, say, 80% of the 1 GB limit, start migrating users.
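One common way to sketch this is a small "shard directory": a lookup of which database holds each user's records, which the application consults before every query. A minimal sketch in PHP, assuming the map lives in code for illustration (in practice it would live in a small master database, and all names here are made up):

```php
<?php
// Hypothetical shard directory: user id => name of the database that
// holds that user's records. In a real application this map would be
// stored in a small "master" database, not hard-coded.
$shardMap = [
    101 => 'app_shard_1',
    102 => 'app_shard_1',
    103 => 'app_shard_2', // users created after shard 1 filled up
];

// Return the database the application should query for a given user.
function databaseForUser(array $shardMap, $userId)
{
    if (!isset($shardMap[$userId])) {
        throw new InvalidArgumentException("Unknown user: $userId");
    }
    return $shardMap[$userId];
}

echo databaseForUser($shardMap, 103), "\n"; // app_shard_2
```

Migrating a user then just means copying their rows to the new database and updating the directory entry; the lookup keeps the routing out of IF/THEN/ELSE chains in your query code.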

Detecting the size of a database is a tricky problem. You could, I suppose, look at the raw files on disk to see how big they are, but there are more clever ways.
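In MySQL there is in fact a cleaner way than inspecting files: the `information_schema.TABLES` view exposes the data and index sizes of every schema, so the application (or a cron job) can poll it and trigger a migration when a database nears the threshold. Roughly:

```sql
-- Approximate on-disk size of each database, in megabytes.
SELECT table_schema AS db_name,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.TABLES
GROUP BY table_schema;
```

Note the figures are estimates (especially for InnoDB), so treat the 80% threshold as a soft trigger rather than an exact measurement.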


3 Comments

Thanks all for the feedback. I guess I was looking for a starting point. It sounds like I should start reading a book on the topic. If 1 GB is considered a small database, what would be considered a large database? I will have to research web hosting providers to see what they offer.
A large database would be when it's no longer feasible to have it on one machine?
I strongly recommend O'Reilly's "High Performance MySQL" as a starter on this one. Frits put you on a good lead, but you'll need a lot more info than SO can provide. There are entire bookshelves dedicated to this subject.
Answer 2

I would suggest that SQLite would be the best option in your case. It supports databases of up to 2 terabytes (2^41 bytes), and the best part is that it requires no server-side installation, so it is compatible everywhere. All you need is a library to work with the SQLite database file.

You can also choose your host without worrying about which databases and sizes they support.
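For completeness, a minimal sketch of using SQLite from PHP through the PDO driver (this assumes the `pdo_sqlite` extension is enabled; the table and column names are illustrative):

```php
<?php
// An in-memory database keeps the sketch self-contained; in production
// you would pass a file path instead, e.g. 'sqlite:/path/to/app.db'.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Illustrative schema for per-user text records.
$db->exec('CREATE TABLE records (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL,
    body    TEXT    NOT NULL
)');

$stmt = $db->prepare('INSERT INTO records (user_id, body) VALUES (?, ?)');
$stmt->execute([42, 'first note']);
$stmt->execute([42, 'second note']);

$count = $db->query('SELECT COUNT(*) FROM records')->fetchColumn();
echo $count, "\n"; // 2
```

Because the database is a single file, "backing up" or moving hosts is just copying that file; the trade-off is that SQLite has no network server, so concurrent writes from many PHP processes are serialized through file locks.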

