On Sep 7, 2008, at 12:55 PM, Richard Erlacher wrote:
>>>>   This is actually rather rare.  Yes, it does happen (CP/M's
>>>> assembler comes to mind) but it's not the norm.
>>>
>>> There were several assemblers associated with CP/M, namely ASM,
>>> MASM, M80, MAC, and RMAC.  IIRC, RMAC produced rel files that fed
>>> L80, the linker.
>>
>> Yes, and while I'm a CP/M fan and probably always will be, that's
>> thirty-year-old technology.  Things have evolved considerably since
>> then.
>>
> While some new strategies for database processing have evolved, those
> old assemblers, linkers, etc, all seem to work just as well today as
> they ever did.

   Of course they do, contrary to popular belief.  And in other  
pursuits, I, like you, use and enjoy those tools with some  
regularity.  However, they are no longer used in mainstream  
development, as you're aware.

>   Since MCU programs tend to be quite small, there's no need for such
> things as assemblers, linkers, etc, to evolve.  They already work very
> well, and needn't be "fixed."

   Well, yes and no.  Software development techniques have evolved.   
I've noticed this in my own Z80-related work...My methodologies for  
developing assembly language code for Z80 processors today differ  
significantly from those I used in the mid-1980s.

   In particular, I've become somewhat obsessed with code modularity,  
reusability, and unit testing.  When I write a Z80 assembler routine,  
I try to generalize it, modularize it, and write a small test harness  
for it.  Then I stick it in a "library" (just a subdirectory) of  
other such reusable routines with a small .txt file describing its  
use.  I've found this to be of great benefit.
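   To make the idea concrete, here's a hypothetical sketch of what one
such "library" routine might look like (the file name, routine name,
and calling convention are all made up for illustration).  I've used
M80-style directives since M80/L80 came up earlier in the thread; most
relocating assemblers have equivalents.  The point is that the file
contains no origin statement, exports one entry point, and documents
its own interface:

    ; BINHEX.MAC -- hypothetical library routine, assembled on its
    ; own into a relocatable .rel module (no ORG anywhere in the file)
            .Z80
            CSEG                    ; relocatable code segment
            PUBLIC  BINHEX          ; entry point exported to the linker

    ; BINHEX: store the byte in A as two ASCII hex digits at (HL).
    ; Entry: A = value, HL = destination buffer
    ; Exit:  HL advanced by 2; A and B destroyed
    BINHEX: LD      B,A             ; save the original byte
            RRCA
            RRCA
            RRCA
            RRCA                    ; high nibble down into bits 0-3
            CALL    NIB             ; emit the high digit...
            LD      A,B             ; ...then fall through for the low one
    NIB:    AND     0FH
            ADD     A,90H           ; classic nibble-to-ASCII-hex sequence
            DAA
            ADC     A,40H
            DAA
            LD      (HL),A
            INC     HL
            RET
            END

The matching test harness is just another small module with its own
entry point that calls the routine against a few known values and
checks the results; the linker only pulls the two together when I want
to run the tests.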

   However, this would totally break down if I were to use an
.asm -> .hex toolchain that couldn't relocate code.  The only way (that
I can think of) to get around this is to use a "wrapper" file
containing an origin statement and a bunch of include directives to
pull in the subroutine files that I want to use.  I (personally) find
this to be a very ugly solution, so I prefer the separate
assembler/linker approach, as opposed to the integrated approach which
you describe.
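   For contrast, here's roughly what that "wrapper" workaround looks
like with an absolute assembler (again a made-up sketch; the file names
and the exact spelling of the include directive vary from assembler to
assembler).  Every routine file pulled in this way has to be free of
origin statements (and of the relocation directives shown above), and
every program that wants the library carries its own copy of this
scaffolding:

    ; MAIN.ASM -- wrapper file for a plain .asm -> .hex toolchain
            ORG     0100H           ; the single origin for the image
    START:  LD      A,(VALUE)
            LD      HL,BUFFER
            CALL    BINHEX          ; routine textually included below
            HALT

    VALUE:  DB      3EH             ; byte to convert
    BUFFER: DS      2               ; two ASCII digits land here

            INCLUDE BINHEX.ASM      ; each "library" routine is pasted
            INCLUDE GETLIN.ASM      ; in here rather than linked in
            END

With a relocating assembler and linker, each routine file is instead
assembled once on its own and the linker decides where everything
lands, so no wrapper is needed.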

>> Yes, that part surprises me.  How would one build a project with
>> multiple source files, without needing to explicitly specify the code
>> origin in each one, and without assembling all of the files in a
>> single invocation of the assembler (if it can do that)?
>>
> The key, I guess, is to recall that MCU's have very small code
> memory, so it's not inconceivable that one would write the code as a
> single module.

   I agree.  However, *in my opinion*, that doesn't provide enough
modularity for clarity, ease of debugging, maintainability, and code
reuse.  Even the original circa-1982 8051 implementations had a 64KB
address space.  I've done quite a bit of work with the Intel
8052AH-BASIC interpreter, which is structured as two source files...the
interpreter and the floating-point routines.  Both files are very
large...5700+ lines for the interpreter and 1600+ for the FP routines.
It is (for me) very difficult to work on for this reason.

   It's worth noting that, as can be learned by studying the files,  
that BASIC system was originally written and maintained within Intel  
as several separate source code files and coalesced into two files  
before public release.  I have no idea why.

>> Yes, the assembler suite (if memory serves) originally came from a
>> PDP-11 implementation running under RT-11.  That gave me a big  
>> smile. :)
>>
> It's quite common to see traces of DEC software in assemblers,
> linkers, etc, as so many people had access to the sources, and so
> many people actually understood how they work.

   Makes sense.  It's really good to see that stuff living on in  
places other than the machine rooms of the historic  
preservationists.  My PDP-11s work wonderfully (just got a new one  
last week, an 11/24 in near-mint condition!) but I generally don't  
use them for everyday work.  Hmm...maybe I should, just because I  
can. =)

>   CP/M was said, by some, to be based on OS-8.  I gave away my 8"
> source diskettes for OS-8 some years back, since I didn't have the
> DF32's for which it was apparently designed.

   OS-8 will work with lots of different system disks...DF32s,  
RX01/02s, RK03/05s, and even TU56 DECtape drives.  My main 8/e system  
uses a TU56 as its system device.

>   My understanding was that, when CP/M was being designed, half the
> winos on skid row could "drive" OS8, as it was so widely used.  That
> made its console interface very attractive.  I'm not sure that's true,
> however.

   That seems to make sense.  I'd not say they're that close, but  
there are definitely similarities.

              -Dave

-- 
Dave McGuire
Port Charlotte, FL


