The documentation for System.IO.BinaryReader.Read(byte[], int, int) says it can throw an ArgumentException if:

"The number of decoded characters to read is greater than count. This can happen if a Unicode decoder returns fallback characters or a surrogate pair."
I don't understand how encoding comes into play at all when I'm asking for raw bytes. Would it interpret the underlying stream's bytes as Unicode and skip things like the byte order mark?
Even if it does something like surrogate-pair resolution, how would that produce more bytes than I asked for, rather than fewer?
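For contrast, the only overload where I can see a decoder mattering is Read(char[], int, int). A quick sketch of what I mean (the stream contents, encoding, and variable names here are just made up for illustration):

using System.IO;
using System.Text;

// The char[] overload does run the raw bytes through the reader's encoding,
// so the quoted exception text would at least make sense there.
byte[] utf8Bytes = Encoding.UTF8.GetBytes("héllo \uD83D\uDE00"); // multi-byte chars plus a surrogate pair
var reader = new BinaryReader(new MemoryStream(utf8Bytes), Encoding.UTF8);

var chars = new char[16];
int charsRead = reader.Read(chars, 0, chars.Length); // decodes bytes into UTF-16 chars

// The byte[] overload, as far as I can tell, should just hand back raw bytes:
reader.BaseStream.Position = 0;
var bytes = new byte[16];
int bytesRead = reader.Read(bytes, 0, bytes.Length);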
Assuming the BinaryReader's encoding does not affect the underlying Stream, does that mean that binaryReader.Read(..) and binaryReader.BaseStream.Read(..) are fundamentally not the same operation? In Mono's implementation of BinaryReader they appear to be exactly the same, and the decoder is not involved in that method at all.
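To make the comparison concrete, this is the kind of equivalence I have in mind (a small sketch of my own, not taken from any real implementation):

using System.IO;

var data = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 };
var reader = new BinaryReader(new MemoryStream(data));

var a = new byte[4];
reader.Read(a, 0, a.Length);            // via BinaryReader.Read(byte[], int, int)

var b = new byte[4];
reader.BaseStream.Read(b, 0, b.Length); // via the underlying Stream directly

// If no decoding is involved, both calls just advance the same stream and fill
// the buffers with consecutive raw bytes, i.e. a = {1,2,3,4} and b = {5,6,7,8}.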
Is this simply a copy/paste error in the MSDN documentation?
The reason I'm asking all of this is that I just ran into the ArgumentException with the block of code below, and of the two documented cases that can throw an ArgumentException, it can't be the trivial one (see the snippet after the block for what I mean by "trivial"):
public void Foo(BinaryReader reader)
{
    int bar = reader.ReadInt32();
    int baz = reader.ReadInt32();

    int bufferSize = 8192;
    var buffer = new byte[bufferSize];
    int bytesRead = 0;

    while ((bytesRead = reader.Read(buffer, 0, bufferSize)) != 0)
    {
        // do something with the read bytes here
        ...
    }
}
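For reference, the trivial documented case I'm ruling out is asking for more bytes than fit after the index. The values below are hypothetical, but something like this would be the obvious way to hit it:

var buffer = new byte[8192];
// buffer.Length - index (8192 - 4096) is less than count (8192), which is the
// documented "trivial" ArgumentException case; in my code index is 0 and count
// equals buffer.Length, so it can't be that.
reader.Read(buffer, 4096, 8192);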
Read functions just fine. Can you reproduce it and show us the stack trace and message?