
This is a real PHP interview question. I know the answer isn't just about which one is faster; it can be answered from many angles. Can anyone give me some suggestions, please?


3 Answers


Files:

  • reading a file: fast
  • detecting the format/codepage: slow, painstaking, error-prone
  • file permissions have to be managed by hand
  • concurrent write access is not possible
  • a locking strategy is required
  • parsing the file: relatively fast, depending on data complexity
  • looking up a file in a directory containing many other files (1000+): slow, because the OS has to search the directory's entry list; that's a linear scan on simple filesystems, an indexed (hashed or B-tree) lookup if you're lucky
  • reading is not possible while another process is writing
  • concurrency issues with threads and forked processes
  • large file sizes when data is stored as text
  • In short: only use files for static data such as configuration files. Never for dynamic data
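A minimal sketch of the locking point above: PHP's flock() is advisory, so it only protects the file if every reader and writer cooperates and takes the lock, and it only works within one machine and filesystem. The counter file name here is invented for the demo.

```php
<?php
// Sketch: the locking strategy a plain-file store needs before concurrent
// writers are safe. flock() is advisory: it only helps if every process
// that touches the file also takes the lock. File name is invented.
$path = sys_get_temp_dir() . '/counter_' . getmypid() . '.txt';

$fp = fopen($path, 'c+');            // create if missing, don't truncate
if (flock($fp, LOCK_EX)) {           // exclusive lock: writers block each other
    $value = (int) stream_get_contents($fp);
    ftruncate($fp, 0);               // rewrite the whole value in place
    rewind($fp);
    fwrite($fp, (string) ($value + 1));
    fflush($fp);                     // flush before releasing the lock
    flock($fp, LOCK_UN);
}
fclose($fp);

echo file_get_contents($path);       // 1 on the first run
```

Every one of these steps is something a database engine already does for you, with far stronger guarantees.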

Database:

  • better management of all of the above
  • compact storage
  • fast lookup engine
  • easy combining of related facts (joins)
  • easy to share access with other machines/programs
  • rollback mechanisms built in
  • Don't use it for configuration that remains static.
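To illustrate "fast lookup engine" and "easy combining of related facts", here is a sketch using SQLite through PDO (the pdo_sqlite extension ships with most PHP builds); the table and column names are invented for the example:

```php
<?php
// Sketch: the same data in a database gets indexed lookup and joins for
// free, versus hand-parsing several files. Schema is invented for the demo.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$db->exec('CREATE TABLE orders (id INTEGER PRIMARY KEY,
                                user_id INTEGER REFERENCES users(id),
                                total REAL)');
$db->exec("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')");
$db->exec('INSERT INTO orders VALUES (1, 1, 9.50), (2, 1, 3.25), (3, 2, 7.00)');

// "easy combining of related facts": one join instead of parsing two files
$stmt = $db->query(
    'SELECT u.name, SUM(o.total) AS spent
       FROM users u JOIN orders o ON o.user_id = u.id
      GROUP BY u.id ORDER BY u.name'
);
foreach ($stmt as $row) {
    printf("%s spent %.2f\n", $row['name'], $row['spent']);
}
// prints:
// alice spent 12.75
// bob spent 7.00
```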

6 Comments

Technically a database uses a file for "dynamic data" and it works out just fine if you're careful about it. This is where having well-defined formats and a lot of test code comes into play.
Uh-huh, and how are you going to deal with synchronization? And data caching? Are you going to build a file-access server that your application talks to over a port? Or are you going to code a lot of lock files to prevent writes while another application is reading or writing? How do you handle one long-running application that needs a lot of files for reading while another application needs to write? Why reinvent the database... there are literally thousands of formats out there that would be better and more fleshed out than a custom implementation.
The SQLite team explains how to do a lot of this in their extensive documentation on the subject if you're curious how this works. I could explain but they've already done a fantastic job of laying out all the details. Now keep in mind I'm not saying you should do this in PHP, but that you can.
lol, I have read those docs before, and they gave me a headache just thinking about implementing that from scratch, with all the work that goes into it. One can do that... if one likes to reinvent the wheel and delights in bug-fixing.
This is an interview question about hypotheticals, not a recommendation on rolling your own database in PHP.

This is an example of an interview question with no correct answer. A case could be made for either of these things.

For files you might say they're quick to load, that there's a lot of kernel optimization around fetching them from disk and providing them to a user process, and even more around sending them directly from disk to a socket via something like sendfile. That would be true.
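The "quick to load" claim shows up in PHP itself: readfile() streams a file from disk to the output buffer without ever building the whole contents as a PHP string. A small sketch (output buffering is used below only to capture the result for the demo; in a web SAPI you would send Content-Type/Content-Length headers instead):

```php
<?php
// Sketch: serving a static file with readfile(), which streams disk ->
// output rather than holding the file in a PHP string. Path is temporary.
$path = tempnam(sys_get_temp_dir(), 'demo');
file_put_contents($path, "hello static world\n");

ob_start();                       // only to capture the output for the demo
readfile($path);
$body = ob_get_clean();

echo $body;                       // hello static world
unlink($path);
```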

Then for databases you could say that frequently accessed data is cached in memory, so there's no I/O round-trip to disk; that can be faster, especially if you're comparing reads from a poorly structured file against indexed records in a database. This is an algorithmic concern as well.
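The algorithmic point can be sketched without any database at all: finding one record in a flat file is a linear scan over the whole file, while keyed storage (which is what a database index gives you) answers the same lookup directly. The data and helper name below are invented for illustration.

```php
<?php
// Sketch: linear scan of a flat file vs. a keyed lookup. findInFile() is a
// made-up helper; the CSV data is invented for the demo.
$path = tempnam(sys_get_temp_dir(), 'csv');
file_put_contents($path, "1,alice\n2,bob\n3,carol\n");

// O(n): parse line by line until the wanted id turns up
function findInFile(string $path, int $id): ?string {
    foreach (file($path, FILE_IGNORE_NEW_LINES) as $line) {
        [$rowId, $name] = explode(',', $line, 2);
        if ((int) $rowId === $id) {
            return $name;
        }
    }
    return null;
}

// O(1) average: the same rows loaded once into a keyed array
$index = [];
foreach (file($path, FILE_IGNORE_NEW_LINES) as $line) {
    [$rowId, $name] = explode(',', $line, 2);
    $index[(int) $rowId] = $name;
}

$viaScan  = findInFile($path, 3);      // linear scan
$viaIndex = $index[3];                 // keyed lookup
echo $viaScan, ' ', $viaIndex, "\n";   // carol carol
unlink($path);
```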

So it really depends on what kinds of files and what sort of read/write access patterns are involved. To say either of these things is faster is to miss the point of the question.



Reading one file = fast.

Reading many / big files = slow.

Reading single small entries from a database = a waste of I/O.

Combining many entries within the database = faster than file accesses.
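A sketch of the last trade-off, using an in-memory SQLite table through PDO as the database and a directory of tiny files as the file-based equivalent (all names and data invented for the demo):

```php
<?php
// Sketch: one aggregate query vs. an open/read/parse per entry. Table,
// directory, and file names are all invented for the demo.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE hits (page TEXT, n INTEGER)');
$db->exec("INSERT INTO hits VALUES ('home', 3), ('about', 1), ('home', 2)");

// single round-trip: the entries are combined inside the engine
$total = $db->query("SELECT SUM(n) FROM hits WHERE page = 'home'")
            ->fetchColumn();
echo $total, "\n";   // 5

// file equivalent: one I/O call per stored value
$dir = sys_get_temp_dir() . '/hits_' . getmypid();
mkdir($dir);
file_put_contents("$dir/home-1.txt", '3');
file_put_contents("$dir/home-2.txt", '2');
$sum = 0;
foreach (glob("$dir/home-*.txt") as $f) {
    $sum += (int) file_get_contents($f);
}
echo $sum, "\n";     // 5
array_map('unlink', glob("$dir/*"));
rmdir($dir);
```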
