Why are many Windows user interface elements positioned at multiples of 4 or 8 pixels?
Some time ago, we learned that Windows 95 positioned windows at multiples of 8 pixels in order to make bit block transfers more efficient. Is that why many Windows user interface elements are still positioned at multiples of 4 and 8 pixels? (And why four as well as eight?)
No, it’s not about bit block transfer efficiency.
Oh, do designers just have this compulsion about multiples of four and eight, to the point where they can glance at a screen, and if something is not at a multiple of four pixels, they get this odd feeling that the screen is somehow wrong and needs to be fixed?
No, that’s not it either. Or at least I don’t think that’s true.
The reason for positioning elements at multiples of four or eight comes down to screen scaling.
As the screen pixel density increases, Windows applies a scaling factor so that objects on the screen appear at roughly the same visual size. If you have a 192 pixels-per-inch display, it will show a rectangle at the same physical size as the same rectangle on a 96 pixels-per-inch display, but with four times as many pixels (twice horizontally and twice vertically).
I say approximately, because Windows doesn’t go down to the last pixel to match sizes. If your screen pixel density is 193 pixels per inch, Windows will treat it as 192 pixels per inch for scaling purposes. If it had used the exact value of 193 pixels per inch, then all of your screen elements would be scaled by 201%, and that extra 1% is going to make things blurry: A box that started out as 100 pixels wide will end up scaled to 201 pixels wide, and that means that almost none of the source pixels land on an exact pixel boundary when scaled up.
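To make that concrete, here is a small sketch (function names are my own, for illustration) that counts how many source-pixel boundaries land exactly on a destination pixel after scaling. Using exact fractions avoids floating-point noise:

```python
from fractions import Fraction

def aligned_boundaries(width_px: int, scale: Fraction) -> int:
    """Count source pixel boundaries that land exactly on a
    destination pixel boundary after scaling."""
    return sum(1 for x in range(width_px + 1)
               if (x * scale).denominator == 1)

# At exactly 200%, every boundary of a 100-pixel box stays on the grid.
print(aligned_boundaries(100, Fraction(200, 100)))  # 101 (all of them)

# At 201%, only the two endpoints do; everything else must be
# interpolated, which is where the blurriness comes from.
print(aligned_boundaries(100, Fraction(201, 100)))  # 2
```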
The missing piece of the puzzle is the fact that Windows recommends scaling factors in multiples of 25%: 100%, 125%, 150%, 175%, 200%, 225%, 250%, 300%, 350%, 400%, 450%, 500%.¹
Now it all comes together.
If you arrange for all of your unscaled pixel coordinates to be multiples of 4 (or, for added safety, multiples of 8), then after scaling, they will still be exact integers.
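You can check this arithmetic directly. Since every recommended scale factor is a multiple of 25%, its denominator as a reduced fraction is at most 4 (for example, 175% = 7/4), so any coordinate divisible by 4 scales to an exact integer. A quick sketch (function name is illustrative, not from the article):

```python
from fractions import Fraction

# The recommended scale factors, as exact fractions (125% = 5/4, etc.)
SCALES = [Fraction(pct, 100) for pct in
          (100, 125, 150, 175, 200, 225, 250, 300, 350, 400, 450, 500)]

def survives_scaling(coord: int) -> bool:
    """True if the coordinate remains an exact integer under
    every recommended scale factor."""
    return all((coord * s).denominator == 1 for s in SCALES)

print(survives_scaling(8))   # True: a multiple of 4 stays integral
print(survives_scaling(10))  # False: 10 x 125% = 12.5
```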
¹ If you look at the DEVICE_SCALE_FACTOR enumeration, you’ll see values that aren’t multiples of 25%. Those are old values left over from Windows 8 and Windows Phone. They’re not used any more.
So I have a question closely related to this. How does Windows decide what to recommend? On my 28″ 4K display it recommends 150% (reasonable), but on my 43″ 4K display it recommends 300% (not reasonable). A 43″ 4K display is roughly 96 dpi, so 100% is perfect.
A 43″ TV is meant to be viewed from farther away, so the effective resolution should be lower in order to get the bigger elements and text you need for it to be usable at that distance.
I will never understand it – what’s the point of having a large high-resolution screen if Windows scales everything up so that a 32″ 4K screen displays the same amount of information as a 21″ HD screen? Yes, I know it can be overridden manually, but the whole idea is fundamentally flawed, plus it gives software developers a serious headache.
The best scale depends on how the screen identifies itself, as well as the actual size of the screen.
TVs are intended to be viewed from farther away, so the scale gets bumped up so you can still read the text. If you’re viewing a TV from the same distance that you’d view a monitor, then yes, you need to change your setting. But if your device identifies itself as a TV with an intended DPI, then AFAIK Windows will honor that.
The opposite is true, too. If I have a 4K screen on a laptop, I need that scale bumped up to read it.
There is no point in a 32″ 4K. The reasonable size for 4K is 24″, and 32″ should be 5K. (Assuming the screen distance of one standard arm’s length.)
The intention is that UI elements remain the same physical size on all screens (subtending the same angle), based on the physical size of the screen and the expected distance to the viewer.
The primary purpose of higher DPI is for fonts and other elements to be ‘sharper’ – less pixellated, less use of antialiasing – and thus easier to read.
Not to display more information.
If you don’t want to deal with any of that, the operating system just upscales the whole application so that it remains the original size the developer intended.
You only get into trouble when an application deliberately tells the OS it understands Hi-DPI and needs the native resolution, when it does not in fact understand it.
As resolution goes up, things are supposed to get sharper, not smaller.
Rant: The last desktop monitor with a reasonable pixel density was released in 2015. (DELL 2415U, 4K, 23.8″, which puts it roughly at 200%.) Since then, things have mostly gone downhill. (4K @ 27″ ≈ 175% or 7:4.)
And the point of 200% is that every pixel of a standard 96 DPI bitmap perfectly scales to a 2×2 block without any need for interpolation.
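To make that concrete, a minimal sketch (the function is my own illustration): at exactly 200%, upscaling is pure pixel replication – each source pixel becomes a clean 2×2 block, with no interpolation anywhere.

```python
def upscale_2x(image: list[list[int]]) -> list[list[int]]:
    """Nearest-neighbor 2x upscale: replicate every pixel into a 2x2 block."""
    return [[px for px in row for _ in range(2)]
            for row in image for _ in range(2)]

src = [[1, 2],
       [3, 4]]
print(upscale_2x(src))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```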