A little late to the game; I hope this is on point.
I'm assuming you're using the Firebird .NET provider, which is a pure C# implementation that does not ride on top of the native fbclient.dll. Unfortunately, it does not provide a streaming interface to BLOBs, which would allow reading potentially huge data in chunks without blowing out memory.
Instead, you use the FbDataReader.GetBytes() method to read the data, and it all has to fit in memory. GetBytes takes a user-provided buffer, copies the BLOB data into it starting at the offset you specify, and returns the number of bytes it actually copied (which could be less than the full size).
Passing a null buffer to GetBytes returns the full size of the BLOB (but no data!), so you can reallocate your buffer as needed before the real read.
Here we assume field #0 is an INT (not interesting) and field #1 is the BLOB; this naive implementation should take care of it:
// temp buffer for all BLOBs, reallocated as needed
byte[] blobbuffer = new byte[512];

while (reader.Read())
{
    int id = reader.GetInt32(0);   // read first field

    // get bytes required for this BLOB: a null buffer means "size check only"
    long n = reader.GetBytes(1, 0, null, 0, 0);

    // extend buffer if needed
    if (n > blobbuffer.Length)
        blobbuffer = new byte[n];

    // read again into the now "big enough" buffer
    n = reader.GetBytes(1, 0, blobbuffer, 0, blobbuffer.Length);

    // Now: the first <n> bytes of <blobbuffer> hold your data. Go at it.
}
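For completeness, here's roughly how the reader above might be obtained. This is only a sketch: the connection string and the files(id, data) table are made up for illustration, so substitute your own:

using FirebirdSql.Data.FirebirdClient;

// hypothetical connection string and table, for illustration only
using (var conn = new FbConnection(
    "Database=localhost:/data/test.fdb;User=SYSDBA;Password=masterkey"))
{
    conn.Open();
    using (var cmd = new FbCommand("SELECT id, data FROM files", conn))
    using (var reader = cmd.ExecuteReader())
    {
        // ... the while (reader.Read()) loop above goes here ...
    }
}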
It's possible to optimize this somewhat, but the Firebird .NET provider really needs a streaming BLOB interface like the native fbclient.dll offers.
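In the meantime, one common workaround is to read the BLOB in fixed-size chunks by advancing the data offset passed to GetBytes, ideally after opening the reader with CommandBehavior.SequentialAccess (from System.Data). Treat this as a sketch of the access pattern, not a guarantee: whether it actually avoids materializing the whole BLOB depends on the provider's internals, and the output stream here is hypothetical:

// read field #1 in 8 KB chunks, advancing the offset each pass
const int ChunkSize = 8192;
byte[] chunk = new byte[ChunkSize];
long offset = 0;
long got;
while ((got = reader.GetBytes(1, offset, chunk, 0, ChunkSize)) > 0)
{
    output.Write(chunk, 0, (int)got);  // 'output' is any hypothetical Stream
    offset += got;
}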