A customer wanted to understand the conditions under which the ReadFile and WriteFile functions would fail to transfer all of the bytes, and how to detect that this has occurred.
The obvious reason why the ReadFile function would fail to transfer all the bytes is if there aren’t that many bytes to read. For a disk file, this typically happens because you are reading past the end of the file. You can also get this for other types of file handles: For a pipe in nonblocking mode, there may not be enough bytes in the pipe. Or you might have a message pipe, and the message is smaller than the size of your buffer. Or you might be accessing a device, and the device doesn’t have all the bytes available.
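As an illustration of that detection, here is a minimal sketch, assuming a synchronous handle (the ReadOnce helper name and the error reporting are mine, not the article's):

```c
#include <windows.h>
#include <stdio.h>

// Hypothetical helper: attempt to read 'size' bytes in a single call and
// report a short read. Note that a short read is still a success: ReadFile
// returns TRUE and sets bytesRead to however many bytes were available.
BOOL ReadOnce(HANDLE file, void *buffer, DWORD size)
{
    DWORD bytesRead = 0;
    if (!ReadFile(file, buffer, size, &bytesRead, NULL)) {
        printf("ReadFile failed: %lu\n", GetLastError());
        return FALSE;
    }
    if (bytesRead == 0) {
        printf("End of file.\n");          // success with zero bytes = EOF
    } else if (bytesRead < size) {
        printf("Short read: %lu of %lu bytes.\n", bytesRead, size);
    }
    return TRUE;
}
```

Note that end of file is reported as success with zero bytes transferred, which is why the count, and not the return value, is what you have to check.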
Similarly, the obvious reason why the WriteFile function would fail to transfer all the bytes is if there isn’t enough room for all the bytes. For a disk file, the disk might be full, or you have reached your disk quota. For a pipe in nonblocking mode, a write may be short if there is not enough buffer space in the pipe to hold all the requested data. In all of these cases, you can detect the short write by checking whether the actual number of bytes written is less than the number of bytes requested.
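As a sketch of that check (assuming a synchronous, non-OVERLAPPED handle; the WriteAll helper is hypothetical), a loop that retries the remainder after a short write:

```c
#include <windows.h>

// Hypothetical helper: write all 'size' bytes, retrying after short writes.
BOOL WriteAll(HANDLE file, const void *buffer, DWORD size)
{
    const BYTE *p = (const BYTE *)buffer;
    while (size > 0) {
        DWORD bytesWritten = 0;
        if (!WriteFile(file, p, size, &bytesWritten, NULL)) {
            return FALSE;                    // hard failure; GetLastError() has details
        }
        if (bytesWritten == 0) {
            SetLastError(ERROR_WRITE_FAULT); // no progress; avoid spinning forever
            return FALSE;
        }
        p += bytesWritten;                   // short write: advance and retry the rest
        size -= bytesWritten;
    }
    return TRUE;
}
```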
If the number of bytes actually transferred is nonzero, then the ReadFile and WriteFile functions will return success, but the actual number of bytes transferred will be less than the number of bytes requested.
> the obvious reason why the WriteFile function would fail to transfer all the bytes is if there isn’t enough room for all the bytes. For a disk file, the disk might be full, or you have reached your disk quota.
Experiments show that for a disk file, the (synchronous) WriteFile writes nothing and returns FALSE when there isn’t enough room for all the bytes.
Probably it’s up to the driver handling IRP_MJ_WRITE. However, drivers should report the number of bytes consumed, not the number of bytes produced.
Is it also possible for WriteFile to return a value higher than the one passed?
I happened on something like this on a Tandem mainframe, where a special file type insists on always having an even size. To my (then) surprise, writing one byte would report that two bytes had been written.
I think a large part of the problem is the way ReadFile() and WriteFile() work. For pretty much every other read/write function I'm aware of, the pattern is `writtenCount = write(file, data, writeCount)`, so you can say `if (write(file, data, writeCount) != writeCount) -> error`, or, if you're more careful, `if ((result = write(file, data, writeCount)) != writeCount) -> error`. With WriteFile() you just get a boolean OK or !OK, and OK can actually be a silent !OK, so you need to use a nonstandard pattern to check that everything was written. I can see how some implementers would get that wrong.
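To illustrate the point, a thin wrapper (the my_write name is hypothetical) can restore the familiar returned-count pattern:

```c
#include <windows.h>

// POSIX-style wrapper: returns the number of bytes written, or -1 on failure,
// so the usual 'if (my_write(...) != count) -> error' check works again.
// Assumes count fits in a LONG (i.e., is below 2 GB).
LONG my_write(HANDLE file, const void *data, DWORD count)
{
    DWORD bytesWritten = 0;
    if (!WriteFile(file, data, count, &bytesWritten, NULL)) {
        return -1;
    }
    return (LONG)bytesWritten;  // may be less than count: the "silent !OK" case
}
```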
What about WriteConsole? Can it ever return with nNumberOfCharsWritten < nNumberOfCharsToWrite? It's somewhat like a pipe in blocking mode, so perhaps the answer is "no"?
A very common mistake, from what I’ve observed, is that the number of bytes read is not compared to the number that should’ve been read, causing less obvious errors further down the path from uninitialized or zeroed structure data.
This happens even in .NET (the `Stream` class especially), which recently got a `ReadExactly` method to combat that mistake (throwing when it is unable to read enough bytes, IIRC). You just need to know about it.
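A rough Win32 analogue of that ReadExactly idea might look like the following sketch (the ReadExact name and the choice of error code are assumptions), failing outright if the stream ends before the buffer is filled:

```c
#include <windows.h>

// Hypothetical helper: read exactly 'size' bytes, looping over short reads.
// Fails (where .NET's ReadExactly would throw) if EOF arrives first.
BOOL ReadExact(HANDLE file, void *buffer, DWORD size)
{
    BYTE *p = (BYTE *)buffer;
    while (size > 0) {
        DWORD bytesRead = 0;
        if (!ReadFile(file, p, size, &bytesRead, NULL)) {
            return FALSE;                    // I/O error
        }
        if (bytesRead == 0) {
            SetLastError(ERROR_HANDLE_EOF);  // premature end of stream
            return FALSE;
        }
        p += bytesRead;
        size -= bytesRead;
    }
    return TRUE;
}
```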