As you mention writing to a normal HDD in a comment on your original post, the likely factor is the way you write and remap the data.
Ordinary hard drives store and read data on rotating magnetic platters. The data itself is laid out in concentric "rings" (tracks) on the platters.
Because of this mechanical design, hard drives are most efficient when data can be read or written sequentially. When reading or writing at random places in a file or file system, performance drops drastically, as the drive has to wait (some 6-10+ milliseconds per request) until the next piece of data spins past the read/write heads.
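To get a feel for the difference, here is a rough Python sketch (the file name and sizes are made up; use a test file of a few hundred MB) that times a sequential pass against a random pass over the same file. On an HDD the random pass is usually much slower, though OS caching can blur the result on small files:

```python
# Compare sequential vs. random reads on the same file.
# On an HDD every random read may cost a full seek + rotational delay.
import os
import random
import time

PATH = "testfile.bin"   # hypothetical test file
BLOCK = 4096            # bytes read per request
COUNT = 2000            # number of reads per pass

size = os.path.getsize(PATH)
random_offsets = [random.randrange(0, size - BLOCK) for _ in range(COUNT)]

def timed_reads(offsets):
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = timed_reads(range(0, COUNT * BLOCK, BLOCK))  # back-to-back blocks
scattered = timed_reads(random_offsets)                   # blocks all over the file
print(f"sequential: {sequential:.3f}s   random: {scattered:.3f}s")
```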
There are many likely culprits, but a few possibilities are:
- The file you are working on is itself heavily fragmented. Try defragmenting the hard drive.
- Your method of searching for free chunks may be inefficient (and may also generate lots of random reads on the HDD, especially if you use a parallelized/multithreaded search approach); see the free-list sketch after this list.
- Your chunk/page/block size and/or your method of reading/writing these chunks generates a lot of overhead or random reads/writes, perhaps because the chunk/page size is too small (and thus there are more pages/chunks to handle).
- Your method of updating your database of pages/chunks may be dragging you down.
- You may be processing a lot of (meta)data that does not actually need to be touched.
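For the free-chunk search in particular, one common approach is to keep an explicit, sorted free list in memory so that "find me a free chunk" never has to scan the disk at all. A minimal Python sketch (the names and structure are mine, not your code):

```python
# Keep free chunk offsets in a sorted list: releasing and acquiring a chunk
# are pure in-memory operations, and acquiring "near" the last write position
# helps keep successive writes roughly sequential on the platter.
import bisect

class FreeList:
    def __init__(self):
        self.offsets = []                     # sorted offsets of free chunks

    def release(self, offset):
        bisect.insort(self.offsets, offset)   # mark chunk as free

    def acquire_near(self, hint):
        # Prefer the first free chunk at or after `hint` (e.g. the last
        # write position); fall back to the lowest offset otherwise.
        i = bisect.bisect_left(self.offsets, hint)
        if i < len(self.offsets):
            return self.offsets.pop(i)
        return self.offsets.pop(0) if self.offsets else None
```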
One suggestion for optimizing performance and efficiency on HDDs would be to allow fragmentation of your data blocks.
The drawback is that you will have to write a "defragmenter" program (for future maintenance) and implement a block "jump" scheme (sketched after the list below).
The advantages you get are:
- Considerably more efficient use and allocation of disk space, as large blocks can be spread over numerous smaller areas of free chunks.
- Increased I/O speed, as the read/write load is adapted to the hardware in use and kept as sequential as possible.
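A block "jump" scheme can be as simple as a small header in each physical chunk holding the offset of the next chunk in the logical block. Purely as an illustration (this is not your on-disk format; the chunk size, header layout, and the use of offset 0 as an end marker are all assumptions):

```python
# A logical block is stored as a chain of fixed-size physical chunks.
# Each chunk begins with an 8-byte pointer to the next chunk's offset
# (0 marks the last chunk, so offset 0 is assumed to be reserved).
import struct

CHUNK_SIZE = 4096
HEADER = struct.Struct("<Q")              # 8-byte "next chunk" pointer
PAYLOAD = CHUNK_SIZE - HEADER.size

def write_block(f, data, free_offsets):
    """Write `data` as a chain of chunks at the given free offsets.
    Assumes free_offsets contains enough offsets for the whole block."""
    pieces = [data[i:i + PAYLOAD] for i in range(0, len(data), PAYLOAD)]
    used = free_offsets[:len(pieces)]
    nexts = used[1:] + [0]                # 0 terminates the chain
    for piece, offset, nxt in zip(pieces, used, nexts):
        f.seek(offset)
        f.write(HEADER.pack(nxt) + piece.ljust(PAYLOAD, b"\x00"))
    return used[0]                        # offset of the first chunk

def read_block(f, first_offset, length):
    """Follow the chain of "next" pointers and reassemble the block."""
    data, offset = b"", first_offset
    while offset and len(data) < length:
        f.seek(offset)
        raw = f.read(CHUNK_SIZE)
        offset = HEADER.unpack_from(raw)[0]
        data += raw[HEADER.size:]
    return data[:length]
```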
If you have major speed problems in your deletion step, don't actually delete the data (unless you really need to): just clear the chunks' "in use"/write-protection flag (your "deletion" step) and overwrite them with new data on the next write.
This is the way that most filesystems and hard drives handle file "deletion" (and I ain't talking about trashcans :o) )
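In code, such "lazy" deletion can be as little as a set of free chunk offsets that deletion adds to and the next write pops from; a trivially small, hypothetical sketch:

```python
# Deleting a block is just a metadata update: its chunks are returned to a
# free set, and the bytes stay on disk until a later write reuses them.
free_chunks = set()

def delete_block(chunk_offsets):
    """'Delete' a block by releasing its chunks; no disk I/O happens here."""
    free_chunks.update(chunk_offsets)

def allocate_chunk():
    """A later write claims a released chunk and simply overwrites it."""
    return free_chunks.pop() if free_chunks else None
```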