Something that catches everybody out the first time they encounter it is that Windows device-independent bitmaps are upside down, in the sense that the first row of pixels in memory corresponds to the bottom scan line of the image, and subsequent rows of pixels continue upward, with the final row of pixels corresponding to the topmost scan line.
Rather than calling these bitmaps upside-down bitmaps, the less judgmental term for them is bottom-up bitmaps.
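In concrete terms, locating a pixel in a bottom-up DIB with the usual DWORD-aligned stride looks something like this. (A small C sketch; the function name is mine, not part of the Windows API.)

```c
#include <stddef.h>

/* Illustrative only -- not a Windows API function.
 * Computes the byte offset of pixel (x, y) in a bottom-up DIB, where
 * y = 0 is the TOP scan line as the viewer sees the image.
 * Works for bit depths of 8 bpp and up (sub-byte formats need bit math). */
static size_t dib_pixel_offset(int width, int height, int bitsPerPixel,
                               int x, int y)
{
    /* Each scan line is padded out to a multiple of 4 bytes (a DWORD). */
    size_t stride = ((size_t)width * bitsPerPixel + 31) / 32 * 4;

    /* Bottom-up storage: the first row in memory is the bottom of the
     * image, so the visually topmost row (y = 0) sits at row height - 1. */
    return (size_t)(height - 1 - y) * stride + (size_t)x * (bitsPerPixel / 8);
}
```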
Okay, so why are device-independent bitmaps bottom-up?
For compatibility with OS/2.
In OS/2 Presentation Manager, the origin of the graphics coordinate space is at the bottom left corner of the screen, with the y-coordinate increasing as you go toward the top of the screen. This is the mathematically correct coordinate system, which removes a lot of confusion when you start doing mathematics with your graphics.
It is my understanding that computer graphics involves a lot of mathematics.
If you align the graphics coordinate space with mathematical convention, then all the mathematical formulas carry over without having to introduce occasional negative signs to account for the reverse handedness. Rotation angles are always counter-clockwise. Transformation matrices operate in the way you learned in linear algebra class. Everything works out great.
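As a quick illustration of where those stray negative signs come from (my example, not part of the original argument): the textbook rotation matrix

$$ R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} $$

rotates points counter-clockwise when the y-axis points up, exactly as linear algebra class promised. In a y-down coordinate system, the same matrix produces what looks like a clockwise rotation, so you end up negating the angle (or flipping a sign in the matrix) to compensate.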
Of course, if you aren’t one of those deeply mathematical people, then having the y-coordinate increase upward means that the order of reading is opposite the order of increasing coordinates.
Which is weird.
Windows 2.0 and OS/2 started out as good friends, and Windows 2.0 adopted OS/2’s bitmap format in order to foster interoperability between them. As we all know, that friendship soured over time, but the file format decision lingers on.
protip: if you create the DIB with a negative height, then it will be right side up!
That’s both brilliant and a little unsettling.
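In case it saves someone a trip to the documentation, here is a minimal sketch of that trick with CreateDIBSection (the wrapper function name is mine; error handling omitted):

```c
#include <windows.h>

/* Create a top-down 32bpp DIB section by passing a negative biHeight.
 * With a negative height, the first row of the pixel data is the TOP
 * scan line of the image instead of the bottom one. */
HBITMAP CreateTopDownDib(int width, int height, void **bits)
{
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   /* negative height = top-down */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    return CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, bits, NULL, 0);
}
```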
Well, this confirms something I have assumed for decades. It was an easy one: OS/2 Presentation Manager was one of the few software packages where the origin of coordinates was "mathematically correct" (another one being PostScript). And Windows DIBs, also upside down (so much for political correctness ;-) ), were created around the same time (the second half of the 80s) by the same company (Microsoft). You just had to put two and two together. I...
The Mac also has a mathematically correct screen coordinate system. I work on cross-platform software, so adding y-axis inversion handling to my code is second nature.
I understand that, way back when CRTs were first combined with computers, saving CPU cycles (or avoiding complicated chip designs) was important. But, to me, the whole goal of technology is to make it more human. Having a programming model which doesn't match how humans think simply because of...
Mathematically correct, but the inverse of what the actual hardware does.
The graphics buffer has always(*) started from the top left of the screen. That has been true since monochrome text-only displays, because that technology was based on CRT televisions. Televisions also start tracing from the top. Why? I am not sure, but I can speculate (human eyes also scan from top to bottom; why that is, I have no real idea).
So APIs are left between being correct...
If I were to guess, it’s because that’s how cathode ray tubes work. It’s a lot more convenient for the graphics chip to send things in the order the monitor is expecting. MDA, CGA, and EGA, while digital, all started from the top left. VGA, while analog, follows that convention for a lot of reasons involving shared hardware. So I find it ironic that OS/2 bucked IBM’s own trend of following the electron beam.
Oh, this isn’t an official statement either. I’m just stating what seems to me to be obvious (just as obvious as it was to you).
Fun fact: The Unity game engine also has Y coordinates increasing upwards. This is done to keep its 2D and 3D coordinate handling consistent (Y in 3D is the vertical axis, increasing upwards).
Of course, that does NOT extend to internal raw texture data, which you end up manipulating if you want to import data from, say, raw bitmap data, or SkiaSharp, or something. The memory layout for that is from the top...
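For anyone hitting the same thing, the fix is usually just a row-order flip during the copy. A rough C sketch (nothing Unity-specific, the names are mine, and it assumes both sides use the same stride):

```c
#include <stddef.h>
#include <string.h>

/* Copy an image while reversing the row order, e.g. to go from a
 * top-down source buffer (typical raw texture/bitmap memory) to a
 * bottom-up destination, or vice versa. */
void copy_rows_flipped(unsigned char *dst, const unsigned char *src,
                       size_t stride, size_t rows)
{
    for (size_t y = 0; y < rows; y++)
        memcpy(dst + y * stride, src + (rows - 1 - y) * stride, stride);
}
```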
Why didn’t they instead make the X axis point downwards and the Y axis point to the right?
I remember learning this about 15 years ago. As an embedded guy who writes PC software only to help myself on the embedded side, I wrote a C console program to take a color Windows bitmap, convert it to monochrome, and then write it out as a C array in a .h file to use on the embedded side on an LCD screen. I can't remember if I flipped it in memory or just iterated over the array...
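For what it's worth, the bottom-up layout means the flip can be done either way: reverse the rows in memory first, or just iterate them in reverse while emitting. A rough sketch of the latter (my reconstruction, not the original program; assumes 24bpp bottom-up input and one output bit per pixel):

```c
#include <stdint.h>
#include <stdio.h>

/* Walk a bottom-up 24bpp DIB in visual top-to-bottom order and emit a
 * 1bpp C array, thresholding on the green channel. Illustrative only;
 * real code would parse the BMP headers first. */
void emit_mono_array(FILE *out, const uint8_t *pixels,
                     int width, int height, size_t stride)
{
    fprintf(out, "const unsigned char image[] = {\n");
    for (int y = 0; y < height; y++) {
        /* Bottom-up storage: visual row y is at memory row height-1-y. */
        const uint8_t *row = pixels + (size_t)(height - 1 - y) * stride;
        for (int x = 0; x < width; x += 8) {
            uint8_t byte = 0;
            for (int b = 0; b < 8 && x + b < width; b++)
                if (row[(x + b) * 3 + 1] > 127)   /* pixels are B,G,R */
                    byte |= 0x80 >> b;
            fprintf(out, "0x%02X,", byte);
        }
        fprintf(out, "\n");
    }
    fprintf(out, "};\n");
}
```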