A look back at memory models in 16-bit MS-DOS

Raymond Chen

In MS-DOS and 16-bit Windows programming, you had to deal with memory models. This term does not refer to processor architecture memory models (how processors interact with memory) but rather to how a program internally organizes itself. The operating system itself doesn’t know anything about application memory models; it’s just a convenient way of talking about how a program deals with different types of code and data.

The terms for the memory models came from the C compiler, since the memory model is what told the compiler what kind of code to generate. The four basic models fit into a nice table:

                           Data pointer size
                           Near      Far
Code pointer size   Near   Small     Compact
                    Far    Medium    Large

The 8086 used segmented memory, which means that a pointer consists of two parts: A segment and an offset. A far pointer consists of both the segment and the offset. A near pointer consists of only the offset, with the segment implied.
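
As a concrete illustration (a small sketch in portable C, not code from the original article), the processor forms a 20-bit physical address by shifting the segment left four bits and adding the offset, which means many different segment:offset pairs can name the same byte:

    #include <stdint.h>
    #include <stdio.h>

    /* The 8086 computes a physical address as (segment << 4) + offset,
       truncated to 20 bits. */
    static uint32_t physical(uint16_t segment, uint16_t offset)
    {
        return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
    }

    int main(void)
    {
        /* Two different far pointers that refer to the same byte. */
        printf("%05lX\n", (unsigned long)physical(0x1234, 0x0010)); /* 12350 */
        printf("%05lX\n", (unsigned long)physical(0x1235, 0x0000)); /* 12350 */
        return 0;
    }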

Once you had more than 64KB of code or more than 64KB of data, you had to switch to far code pointers or far data pointers (respectively) in order to refer to everything you needed.

Most of my programs were Compact, meaning that there wasn’t a lot of code, but there was a lot of data. That’s because the programs I wrote tended to do a lot of data processing.

The Medium model was useful for programs that had a lot of code but not a lot of data. User interface code often fell into this category, because you had to write a lot of code to manage a dialog box, but the result of all that work was just a text string (from an edit box) and some flags (from some check boxes). Many computer games also fell into this category, because you had a lot of game logic, but not a lot of game state.

MS-DOS had an additional memory model known as Tiny, in which the code and data were all combined into a single 64KB segment. This was the memory model required by programs that ended with the .COM extension, and it existed for backward compatibility with CP/M. CP/M ran on the 8080 processor which supported a maximum of 64KB of memory.

Far pointers could access any memory in the 1MB address space of the 8086, but each object was still limited to 64KB because pointer arithmetic was performed only on the offset portion of the pointer. Huge pointers could refer to memory blocks larger than 64KB by adjusting the segment whenever the offset overflowed.¹ Pointer arithmetic with huge pointers was computationally expensive, so you didn’t use them much.²
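
To make the difference concrete, here is a sketch in portable C (simulating the pointers rather than using a 16-bit compiler’s far and huge keywords): adding to a far pointer changes only the 16-bit offset, which silently wraps, while adding to a huge pointer carries into the segment and renormalizes using the MS-DOS scheme described in footnote 1.

    #include <stdint.h>

    /* A far or huge pointer modeled as a segment:offset pair. */
    struct seg_ofs { uint16_t segment, offset; };

    /* Far pointer arithmetic: only the offset participates, so it
       wraps around at 64KB without ever touching the segment. */
    static struct seg_ofs far_add(struct seg_ofs p, uint16_t n)
    {
        p.offset = (uint16_t)(p.offset + n);
        return p;
    }

    /* Huge pointer arithmetic: form the 20-bit linear address, add,
       then renormalize so the offset is always less than 16. */
    static struct seg_ofs huge_add(struct seg_ofs p, uint32_t n)
    {
        uint32_t linear = ((uint32_t)p.segment << 4) + p.offset + n;
        p.segment = (uint16_t)(linear >> 4);
        p.offset  = (uint16_t)(linear & 0xF);
        return p;
    }

The extra shifting and 32-bit arithmetic in huge_add is the computational expense referred to above; far_add is roughly a single 16-bit addition.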

You weren’t limited to the above memory models. You could make up your own, known as mixed model programming. For example, you could say that most of your program is Small memory model, but there’s one place where you need to access memory outside the default data segment, so you declared an explicit far pointer for that purpose. Similarly, you could define an explicitly far function to move it out of the default code segment.
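
A hypothetical sketch of what that looked like (far here is a compiler extension of 16-bit compilers such as Microsoft C and Borland C, not standard C):

    /* A mostly Small model program: near code and near data by default. */

    /* One explicitly far data pointer, for memory that lives outside the
       default data segment; B800:0000 is the color text-mode screen. */
    unsigned char far *video = (unsigned char far *)0xB8000000L;

    /* One explicitly far function, moved out of the default code segment;
       callers reach it through a far call instead of a near call. */
    void far rarely_used_helper(void)
    {
        video[0] = 'A';    /* write directly to screen memory */
    }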

The memory model specified the default behavior, so if you called, say, memcpy, you got a version whose pointer size matched your memory model. If you had a Small or Medium model program and wanted to copy memory that was outside your default data segment, you could call the _fmemcpy function, which was the same as memcpy except that it took far pointers. (If you used the Compact or Large memory model, then memcpy and _fmemcpy were identical.)
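
A sketch of how that might look in a Small model program (again treating the far keyword and _fmemcpy as 16-bit compiler extensions; exact header placement varied by compiler):

    #include <string.h>    /* memcpy; 16-bit run-times also declared _fmemcpy */

    char buffer[16];       /* lives in the default data segment */

    void copy_examples(char far *elsewhere)
    {
        char local[16];

        /* Both pointers are near (default data segment), so plain memcpy works. */
        memcpy(local, buffer, sizeof local);

        /* The source is outside the default data segment, so a Small or
           Medium model program reaches for _fmemcpy and far pointers. */
        _fmemcpy(local, elsewhere, sizeof local);
    }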

One of my former colleagues is back in school and was talking with his (younger) advisor. Somehow, the topic turned to the 8086 processor. My colleague and his friend explained segmented addressing, near and far pointers, how pointer equality and comparison behaved,³ the mysterious A20 gate, and how the real mode boot sector worked. “The look of horror on his face at how segment registers used to work was priceless.”

I quickly corrected my colleague. “‘Used to work’? They still work that way!”

Bonus chatter: You can see the remnants of 16-bit memory models in the macros NEAR and FAR defined by windows.h. They don’t mean anything any more, but they remain for source code backward compatibility.

¹ In MS-DOS, huge pointers operated by putting the upper 16 bits of the 20-bit address in the segment and putting the remaining 4 bits in the offset. The offset of a huge pointer in MS-DOS was always less than 16.

² In 16-bit Windows, there was a system function called hmemcpy, which copied memory blocks that could be larger than 64KB. And now you know what the h prefix stood for.

³ Segmented memory is a great source of counterexamples. For example, in a segmented memory model, you could have a pointer p to a buffer of size N, and another pointer q could satisfy the inequalities

p <= q && q <= p + N

despite not pointing into the buffer.
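
Here is a hypothetical worked example of how that can happen, assuming a compiler that compares far pointers using only their offsets (a common implementation choice in 16-bit compilers): let p be 2000:0100 with N = 0x200, so p + N is 2000:0300, and let q be 5000:0200. Comparing offsets alone gives 0x0100 <= 0x0200 and 0x0200 <= 0x0300, so both inequalities hold, even though q refers to physical address 50200, nowhere near the buffer at 20100 through 20300.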

This is one of the reasons why the C and C++ programming languages do not specify the result of comparisons between pointers that do not point to elements of the same array.

16 comments

  • Alex Martin

    > MS-DOS
    :]

  • John Wiltshire

    Didn’t we all just decide that Tiny model was easiest and got the hardware guys to make it work that way?

    • Alex Martin

      Yep. All of these memory models are still possible in protected mode (and still useful on the 286). But when the first OSes written for the 386 showed up, they basically all decided to make all the segments† 4GB large covering the entire (virtual) address space, which is technically the same as the Tiny model. Then when AMD designed x86-64 long mode, they pretty much deleted support for the other models. It doesn’t matter nowadays, because the segments point into the same virtual address space, so the other models really don’t have much benefit except for a very small number of edge cases.

      † In protected mode, segments are regions of memory described by descriptors in the GDT or LDT and identified by selectors loaded into the *S registers; that’s my position on the terminology as backed up by the Intel IA-32 Architecture Software Developer’s Manual, Raymond.

      • Azarien

        In pmode there are still segments, you just get to define your own segments. Also, the CS, DS, … registers are always called “segment registers”, never “selector registers”.

  • Marek Knápek

    I guess FAR / NEAR pointers are the reason why we have allocators in C++’s STL.

  • Jonathan Wilson

    Too bad IBM didn’t go with the Motorola 68000 like some on the hardware team were pushing for. If they had done that, we would have had a nice flat memory space and not needed to worry about segments and offsets and code/data sizes and such (yes, I know there were very good reasons why IBM didn’t pick the Motorola 🙂).

    • Alex Martin

      I frequently see people calling the 68000 a 16-bit processor, which irritates me to no end. It may have a 16-bit data bus, but it has 32-bit registers and 32-bit addressing (though truncated to 24 bits externally). It’s a 32-bit processor, just a partially crippled one. By the 68020, they were fully 32-bit, and guess what? No backwards-incompatible 32-bit mode (though a lot of software wouldn’t work with full 32-bit addressing for various reasons). It really was a superior chip to the 8086 by a long way, but of course IBM had to get the PC out as fast as possible and couldn’t wait for Motorola to fix their issues…

    • smf

      It’s too bad that Motorola refused to make a version of the 68000 that would work on an 8 bit bus until it was too late (the engineers have admitted they were stubborn about this). IBM wanted an 8 bit data bus.

      It was also too bad that Motorola took so long from announcing the 68000 before it was available in quantity.

      IBM really had no choice.

      • Jason Swain

        Meanwhile (probably a little later than the PC development), Burrell Smith was building Mac prototypes; the earlier ones had a 68000 connected to 8-bit RAM and I/O. He used some external logic to do this, but it wasn’t hard. This was for the 64k version; when they increased the RAM to 128k, there was no reason to stick with an 8-bit data bus.

  • Neil Rashbrook

    The MSVC compiler didn’t really like the memory model I wanted to use for Win16 DLLs, which used 16-bit pointers for both data and stack but to different segments (which segment was which was encoded in the type of the pointer).

  • Mystery Man

    I remember. When programs on the large model didn’t work, I used to tell people to make sure EMM386.EXE was loaded. Trouble is, I don’t remember why. I may have noticed a trend in how they worked.

    • David Streeter

      > EMM386.exe

      *twitches uncontrollably*

      *sits in corner and gibbers incomprehensibly about expanded vs extended memory*

  • Daniel Neely

    This post takes me back. I did mixed mode programming in high school using Turbo Pascal. The code for my never-finished top-scrolling shooter expanded above 64k, so I had multiple code segments. 64,000-byte graphics buffers (320x240x8bit color) not playing nicely with 64k memory segments were what originally forced me to start using pointers for non-school purposes. IIRC, outside of those all my data did fit into a single segment.

  • Alexis Ryan

    Surely the NEAR and FAR macros can be removed by now

  • Scarlet Manuka

    Ooooh, can we talk about overlays now?
