A customer wanted to understand the conditions under which the ReadFile and WriteFile functions would fail to transfer all of the bytes, and how to detect that this has occurred.
The obvious reason why the ReadFile function would fail to transfer all the bytes is if there aren't that many bytes to read. For a disk file, this typically happens because you are reading past the end of the file. You can also get this for other types of file handles: for a pipe in nonblocking mode, there may not be enough bytes in the pipe. Or you might have a message pipe, and the message is smaller than the size of your buffer. Or you might be accessing a device, and the device doesn't have all the bytes available.
Similarly, the obvious reason why the WriteFile function would fail to transfer all the bytes is if there isn't enough room for all the bytes. For a disk file, the disk might be full, or you have reached your disk quota. For a pipe in nonblocking mode, a write may be short if there is not enough buffer space in the pipe to hold all the requested data. In all of these cases, you can detect the short write by checking whether the actual number of bytes written is less than the number of bytes requested.
If the number of bytes actually transferred is nonzero, then the ReadFile and WriteFile functions will return success, but the actual number of bytes transferred will be less than the number of bytes requested.
What about WriteConsole? Can it ever return with nNumberOfCharsWritten < nNumberOfCharsToWrite? It's somewhat like a pipe in blocking mode, so perhaps the answer is "no"?
A very common mistake I've observed is that the number of bytes read is not compared to the number that should have been read, causing less obvious errors down the line from uninitialized or zeroed structure data.
This happens even in .NET (with the `Stream` class especially), which recently gained a `ReadExactly` method to combat that mistake (it throws when it can't read enough bytes, IIRC). You just have to know about it.