The Applesoft Compiler (TASC): We have the source code, in a sense

Raymond Chen

Back in the early 1980s, the Apple ][ computer was taking the personal computing world by storm, and Microsoft released a compiler for Applesoft BASIC. That compiler went by the name TASC: The Applesoft Compiler.

TASC was written by one person in his dorm room while studying at MIT. Microsoft licensed the product and hired the author, who spent the summer at the Northup building¹ polishing the code.

As noted on page 61 of the manual:

TASC is a “two-pass” compiler, since it compiles in two major steps. PASS0 simply picks up user inputs and sets up compilation parameters, so it is not really part of the actual compilation process. The Applesoft program TASC runs PASS0.

PASS0 and PASS1 chain to PASS1 and PASS2, respectively. All three passes were written largely in Applesoft, and TASC was used to compile itself.

Chaining refers to a program instructing the system to replace the current program in memory with another program while preserving the values of some or all variables. Chaining was a common technique when your program got too large to fit into memory all at once: you broke it into multiple programs that handed off control to one another.
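
The mechanics are easy to model in modern terms. Here is a loose C analogy (illustrative only, not anything TASC actually did; the file and program names are made up): the current program saves the variables worth keeping, then replaces its own process image with the next program in the chain, which reads them back.

    /* chain.c: a loose modern analogy to BASIC chaining (illustrative
       only -- not TASC code). Persist the variables worth keeping,
       then replace the current program image with the next one. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int total = 42;    /* a variable to preserve across the chain */

        /* Hand the preserved variables to the next program via a file. */
        FILE *f = fopen("chainvars.dat", "w");
        if (!f) { perror("fopen"); return 1; }
        fprintf(f, "%d\n", total);
        fclose(f);

        /* Replace this program in memory with the next pass, the way
           PASS0 chains to PASS1. On success, execv never returns. */
        char *args[] = { "./pass1", NULL };
        execv(args[0], args);
        perror("execv");    /* reached only if the chain failed */
        return 1;
    }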

Even if you hadn’t made it that deep into the manual, you could have figured out that TASC was used to compile itself because TASC used its own runtime library.

Chaining was not a feature native to Applesoft BASIC. It was one of a handful of language extensions provided by TASC itself, through the use of magic comments that began with an exclamation point. This meant that once TASC development reached the point where it required chaining, it could be run only in its compiled form. There was no way to bootstrap it from the interpreter.

As the author added features, he kept hitting the Apple ][’s 48KB RAM limit. He was forced to delete all the comments from the code, and when that wasn’t enough, he resorted to shortening all the important variable names to one character.

Such is the desperation of developing on a system with very tight memory constraints.

Everything was working smoothly until the author returned to school for a semester. Upon returning to Microsoft, he found that he no longer understood the code: he had a sprawling compiler with no comments and unhelpful variable names.²

Yet somehow, he finished TASC, and it shipped.

If you dig through the TASC manual, you can find all sorts of wonderful implementation details.

All of the real work happened in Pass 1, which performed code generation and left placeholders for references to not-yet-known locations like branch targets and variables. Pass 2 resolved those references and patched up the code. Even though Pass 1 had all the smarts and Pass 2 was just doing clerical work, it was Pass 2 that took the most time, because it was I/O-bound, and floppy disks are not speed demons when it comes to random access. Pass 2 was I/O-bound not only because of the need to patch the object code, but also because the table of line numbers was itself written to disk, there not being enough RAM to keep it in memory.

Pass 2 made a pass through the object code once for each variable, since the references to a variable were threaded through the object code as a linked list, similar to how 16-bit Windows threaded external references through the code instead of keeping a separate fixup table. Your program has 100 variables? Then that’s 100 passes through the object code.
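
The chain is easier to see in miniature. Below is a C sketch of the general technique (my own reconstruction, not TASC’s actual on-disk format): during code generation, each unresolved reference temporarily stores the offset of the previous reference to the same variable, so the object code itself forms one linked list per variable, and the fixup pass walks each list patching in the final address.

    /* fixups.c: sketch of per-variable fixup chains threaded through the
       object code (a reconstruction of the technique, not TASC's format).
       Each unresolved reference temporarily holds the offset of the
       previous reference to the same variable. */
    #include <stdint.h>
    #include <stdio.h>

    #define END_OF_CHAIN 0xFFFF

    static uint8_t  code[256];  /* the "object code" being generated */
    static uint16_t head[2] = { END_OF_CHAIN, END_OF_CHAIN };
                                /* chain heads, one per variable */

    /* Pass 1: emit a 2-byte placeholder for a reference to variable v,
       linking it into that variable's chain. */
    static void emit_reference(uint16_t *pos, int v)
    {
        code[*pos]     = head[v] & 0xFF;   /* placeholder = previous link */
        code[*pos + 1] = head[v] >> 8;
        head[v] = *pos;                    /* this reference is the new head */
        *pos += 2;
    }

    /* Pass 2: walk variable v's chain, patching each placeholder with the
       variable's final address -- one walk per variable, hence one pass
       over the object file per variable. */
    static void fix_references(int v, uint16_t final_addr)
    {
        uint16_t at = head[v];
        while (at != END_OF_CHAIN) {
            uint16_t next = code[at] | (code[at + 1] << 8);
            code[at]     = final_addr & 0xFF;
            code[at + 1] = final_addr >> 8;
            at = next;
        }
    }

    int main(void)
    {
        uint16_t pos = 0;
        emit_reference(&pos, 0);   /* three references to variable 0... */
        emit_reference(&pos, 0);
        emit_reference(&pos, 0);
        emit_reference(&pos, 1);   /* ...and one to variable 1 */
        fix_references(0, 0x6100); /* pass 2: patch the chains */
        fix_references(1, 0x6105);
        for (uint16_t i = 0; i < pos; i++) printf("%02X ", code[i]);
        printf("\n");              /* prints: 00 61 00 61 00 61 05 61 */
        return 0;
    }

Note that the chain costs no extra memory: the links live in bytes that will be overwritten with real addresses anyway.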

When the code generator needed to access a variable, it didn’t do so directly. The 6502 was an 8-bit processor, so none of the variables fit into a register. You needed to call a function to transfer the variable’s value to or from a common staging area.³ Your traditional code generation went something like this:

    ; BASIC code: X = A + B

    ; traditional code generation
    lda #address_of_a_lo
    ldy #address_of_a_hi
    jsr load_variable_to_accumulator

    lda #address_of_b_lo
    ldy #address_of_b_hi
    jsr load_variable_to_arg

    jsr add_arg_to_accumulator

    lda #address_of_x_lo
    ldy #address_of_x_hi
    jsr store_accumulator_to_variable

To save four bytes at each call site, the address-loading is factored out, and each variable gets a dedicated entry point:

    ; revised code generation
    jsr load_variable_a_to_accumulator
    jsr load_variable_b_to_arg
    jsr add_arg_to_accumulator
    jsr store_accumulator_to_variable_x

    ...
    ; block of variable access functions

load_variable_a_to_accumulator:
    lda #address_of_a_lo
    ldy #address_of_a_hi
    jmp load_variable_to_accumulator

load_variable_b_to_arg:
    lda #address_of_b_lo
    ldy #address_of_b_hi
    jmp load_variable_to_arg

store_accumulator_to_variable_x:
    lda #address_of_x_lo
    ldy #address_of_x_hi
    jmp store_accumulator_to_variable

This is a net win if each variable is accessed several times, which is a pretty fair assumption.
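
To check the arithmetic: each call site shrinks from seven bytes (two two-byte immediate loads plus a three-byte jsr) to a single three-byte jsr, saving four bytes, while each dedicated entry point costs seven bytes. So an entry point pays for itself on the second access to its variable, and every access after that is a four-byte profit.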

To save even more code, the access function was itself parameterized on the type of access:

store_arg_to_variable_x:
    ldx #4
    .byte 0x2c      ; opcode for BIT absolute: swallows the next two bytes (the ldx #3)
load_variable_x_to_arg:
    ldx #3
    .byte 0x2c      ; swallows the ldx #2
store_accumulator_to_variable_x:
    ldx #2
    .byte 0x2c      ; swallows the ldx #1
load_variable_x_to_accumulator:
    ldx #1
    lda #address_of_x_lo
    ldy #address_of_x_hi
    jmp do_something_with_variable ; uses the value in X to decide what to do

This uses the trick of jumping into the middle of an instruction to provide multiple entry points to a common block of code. The stray 0x2C byte is the opcode for a three-byte BIT instruction with an absolute operand, so when execution falls through it from above, the two bytes of the following ldx are harmlessly consumed as that operand; jumping directly to the ldx label skips the 0x2C and executes the ldx normally. The author of TASC was very proud of this optimization.
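
In higher-level terms, the effect is as if the four accessors per variable collapsed into one function taking an operation code. Here is a C rendering of the idea (my own analogy; the names and the five-byte value type are assumptions, not TASC’s actual interfaces):

    /* accessor.c: C analogy (illustrative, not TASC's real interface) of
       the parameterized entry points above. The X register becomes an
       operation code selecting what to do with the variable's address. */
    #include <string.h>

    typedef unsigned char fpval[5];   /* Applesoft reals occupy 5 bytes */

    static fpval accumulator, arg;    /* the runtime's staging areas */

    enum var_op {
        LOAD_TO_ACC    = 1,           /* ldx #1 */
        STORE_FROM_ACC = 2,           /* ldx #2 */
        LOAD_TO_ARG    = 3,           /* ldx #3 */
        STORE_FROM_ARG = 4            /* ldx #4 */
    };

    /* One dispatcher shared by every variable. */
    static void do_something_with_variable(fpval var, enum var_op op)
    {
        switch (op) {
        case LOAD_TO_ACC:    memcpy(accumulator, var, 5); break;
        case STORE_FROM_ACC: memcpy(var, accumulator, 5); break;
        case LOAD_TO_ARG:    memcpy(arg, var, 5);         break;
        case STORE_FROM_ARG: memcpy(var, arg, 5);         break;
        }
    }

    /* Each per-variable "entry point" is then a one-line wrapper. */
    static fpval x_storage;
    void load_variable_x_to_accumulator(void)
    {
        do_something_with_variable(x_storage, LOAD_TO_ACC);
    }

Counting bytes in the assembly version shows why it was worth doing: all four entry points for a variable fit in 18 bytes (four two-byte ldx instructions, three 0x2C bytes, plus the shared seven-byte lda/ldy/jmp tail), versus 28 bytes for four separate seven-byte thunks.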

Related reading: Excuse me, has anybody seen the FOCAL interpreter?

¹ This is the same building that was next door to the restaurant that inspired an important variable name in the 16-bit Windows kernel.

² Also, Applesoft BASIC didn’t have local variables. All variables were global. That certainly didn’t help with understanding the code.

³ It has been said that when you write code for the 6502, you’re not so much writing code as you are writing microcode. The CPU itself has only three 8-bit registers (A, X, and Y), and only A can do arithmetic. Anything of more than ephemeral value must be stored in memory. The real working space was the zero page. For example, you might decide that one region of the zero page was the logical “accumulator”, and most of your time was spent transferring values into or out of that accumulator, interspersed with occasionally performing arithmetic on or testing the value in that accumulator.

Perhaps the most famous example of treating the 6502 as microcode is the SWEET16 interpreter, written by Steve Wozniak, which emulated a 16-register 16-bit virtual processor in roughly 300 bytes of memory.

10 comments


  • Henry Skoglund

    The constraints of the 6502 also affected the design of games. For example, did you know why Apple Invader, the Apple ][ version of Space Invaders, has a “decoration pane” on the right, i.e. why the playing field does not occupy the full width of the screen?
    Because the screen is 280 pixels wide, but handling more than 256 pixels was too much of a stretch for the 6502 (it would have been too slow).

    • Vincent Weaver

      As someone who has done a lot of Apple II hi-res programming recently: while it is a minor pain drawing past 256 pixels, in general it’s one of the least weird issues with that graphics mode. Since the framebuffer is grouped in chunks of 7 pixels (or 3.5, depending on how you look at it), it’s often easier to keep coordinates in a divided-by-7 framework, since any standard putpixel routine would have to do a divide by 7 anyway (not particularly easy to do in a fast or compact way on the 6502).

      In the end, though, if Microsoft was that concerned about address space on the Apple II, they wouldn’t have wasted a number of bytes in the ROM on the backwards/EOR-encrypted “MICROSOFT!” string they hid just after the sine tables.

      • Paradice .

        “Wasted”? They were probably the most valuable 10 bytes in the code from MS’ perspective.

  • Paul Topping

    I worked for a GUI computer-aided design software company in the late ’70s. It was my first job out of college. The system we worked on, like TASC, had a lot more code than would fit into memory on the minicomputers we were using, Data General Eclipses and the like. Until now, I had never heard the term “chaining” applied to the technique of swapping code from disk into a running program. We used the term “core load” to refer to each chunk. (Core as in core memory, of course.) We linked each one (there were hundreds) as if it were a single executable. They all had the same code and data below a certain address and an entry point at that address, or a constant distance above it. The lower, common code chunk obviously contained the code to read the desired core load from disk and jump to its entry point. Interesting times!

    • Paul Topping

      I see from Wikipedia that the technique I mention is called “overlays”; chaining is reserved for the practice of replacing the whole program. From the distance of time, it all amounts to the same thing.

    • Antonio Rodríguez

      In the Apple II, chaining was common for large programs, but it only worked in Wozniak’s Integer Basic. IIRC, that was because Integer Basic stored the variables at the lower end of memory and the program at the higher end, so it was trivial to load a new program without destroying the variables. When Applesoft arrived, it swapped things around and placed the program at the lower end, followed by the variables. The location of the variables varied with the length of the program, so you had to move all the variables and fix the internal pointers (array and string variables were just descriptors which pointed to the actual data – in fact, Applesoft’s implementation of strings was very similar to COM’s BSTR [guess where the “B” comes from?], but I’m getting off-track). It wasn’t until the introduction of ProDOS in 1983 that Applesoft gained a CHAIN command. By the time ProDOS became common, most newer Apple II models were being sold with 128 KB of RAM, which allowed you to store program segments in the /RAM disk, and also to use it for storing and recalling entire variable sets. That allowed you to switch from one program to another seamlessly, in a few seconds and without touching disk. Magic!

  • Georg Rottensteiner

    Small memory leads to really fun stuff. For a C64 game (on the 6510, quite similar to the 6502) I had common routines in lower memory, and a “reserved” block of memory for the stages loaded from disk (which took half a minute or more to load). The zero page was mostly used for a few shared variables, since they can be accessed a bit faster (and with smaller opcodes).
    Thanks to today’s cross-assembling tools I could share symbols with a little rebuild and was able to call the common routines directly.

    I have the utmost respect for the people who did it back then with all those constraints, waiting half an hour per compile, etc.

    • Antonio Rodríguez

      You worked with tight memory and CPU time constraints, but development used to be way more interactive (and rewarding) than today. Most programs were written in machine code or in Basic, and both were interactive: you could type (or edit) a statement or instruction and run it immediately, or inspect and modify memory and variables and continue execution. In the case of assembler, you could patch the program after hitting a breakpoint and resume execution, even typing assembly code that was translated immediately to opcodes, thanks to the MiniAssembler built into the firmware. Basic compilers like TASC gave you the best of both worlds: quick development in the interactive environment of Basic and, once the program was polished, great performance in the binary. Compiling was quite slow, but you only had to compile once per release!

  • Dave Gzorple

    Was it TASC or the Apple Pascal compiler that was nondeterministic, producing different output every time you ran it? You had to keep re-running the compile until you got a compiled program that didn’t crash when run. It’s been way too long, but from memory it was TASC, because I remember having to wait ages for it to run repeatedly.

  • Al Grant

    We used TASC to compile a program that optimized the component placement sequence for surface-mount robots. It was mission-critical for a large factory, and so large that all the comments were kept in a separate file (one benefit of BASIC having numbers on every line!). Somehow TASC fit in alongside it. TASC took half an hour to compile the program, but it was rock solid.

    Good points about compiled code making much use of calls to helper primitives (3 bytes per primitive). It was not unlike a stack machine. FORTH cut out the JMPs and stored just the addresses of the primitives (2 bytes). Performance was about even with compiled BASIC since all the heavy lifting was done in the primitives. I guess the main performance benefit that came from TASC was eliminating syntax analysis, symbol lookup and line number lookup – with effectively no registers, there would have been little benefit from the mid-end and back-end optimizations we spend so much time on these days.
