The History of OS Migration

Operating system vendors face this problem once or twice a decade: They need to migrate their user base from their old operating system to a very different new one, or to switch from one CPU architecture to another, while enabling users to run old applications unmodified and helping developers port their applications to the new OS. Let us look at how this has been done over the last three decades, using DOS/Windows, Macintosh, Amiga and Palm as examples.

CP/M to PC-DOS/MS-DOS

CP/M was an 8 bit operating system by Digital Research that ran on all kinds of Intel 8080-based systems. Seattle Computer Products’ “86-DOS”, which later became MS-DOS (called “PC-DOS” on IBM machines), was a clone of CP/M, but for the Intel 8086, much like DR’s own CP/M-86 (which later became DR-DOS).

While not binary compatible, the Intel 8086 was “assembly source compatible” with the 8080, which meant that it was easily possible to convert 8 bit 8080/Z80 assembly source into 8086 assembly, since the two architectures were very similar (backward-compatible memory model; one register set could be mapped onto the other) and only the instruction encoding was different.

Since MS-DOS implemented the same ABI and memory map, it was source-code compatible with CP/M. On a CP/M system, which could access a total of 64 KB of memory, the region from 0x0000 to 0x0100 (Zero Page) was reserved for the operating system and contained, among other things, the command line arguments. The running application was located from 0x0100 up, and the operating system was at the top of memory, with the application stack growing down from just below the start of the OS. The memory model of the 8086 partitions memory into (overlapping) chunks of contiguous 64 KB, so one of these segments is basically a virtual 8080 machine. MS-DOS “.COM” files are executables below 64 KB that are loaded at address 0x0100 of such a segment. 0x0000-0x0100 is called the Program Segment Prefix and is very similar to the CP/M zero page, the stack grows down from the end of the segment, and the operating system resides in a different segment.
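
To illustrate the overlap, here is a rough sketch of the shared layout as a C struct. The field names are invented and the reserved regions are simplified; the offsets follow the published CP/M 2.2 and DOS documentation.

```c
#include <stdint.h>

/* Simplified sketch of the layout shared by the CP/M zero page and the
   DOS Program Segment Prefix; field names are invented. */
struct zero_page_or_psp {
    uint8_t exit_vector[2];   /* 0x00: CP/M: JMP to warm boot; DOS: INT 20h opcode */
    uint8_t reserved1[3];
    uint8_t bdos_entry[3];    /* 0x05: CP/M: JMP to the BDOS; DOS keeps a far-call
                                 entry here so "CALL 5" system calls still work */
    uint8_t reserved2[0x5C - 0x08];
    uint8_t fcb1[16];         /* 0x5C: first default File Control Block */
    uint8_t fcb2[16];         /* 0x6C: second default File Control Block */
    uint8_t reserved3[4];
    uint8_t cmd_tail_len;     /* 0x80: length of the command line arguments */
    uint8_t cmd_tail[127];    /* 0x81: the arguments themselves */
};
/* Program code starts right after this block, at offset 0x100. */
_Static_assert(sizeof(struct zero_page_or_psp) == 0x100, "layout is 256 bytes");
```

A program that only reads the command tail at offset 0x80 and calls the OS through the entry point at offset 5 therefore cannot even tell the two systems apart.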

Because of the high compatibility of the CPUs and the ABIs, a port of a program from CP/M to DOS was pretty painless. Such a direct port could only support up to 64 KB of memory (like WordStar 3.0 from 1982), but it was also possible to maintain a single source base for both CP/M and MS-DOS just by using a few macros and two different assemblers.

DOS 2.0 then introduced more powerful APIs (file handles instead of FCBs, subdirectories, relocatable .EXE files), obsoleting most of the CP/M API – but DOS kept CP/M compatibility until the last version.

CP/M to PC-DOS/MS-DOS
Change                    New CPU, new OS codebase
Running new applications  Native
Running old applications  Not supported
Running old drivers       Not supported
Porting applications      High level of source/ABI compatibility

DOS to Windows

Microsoft Windows was first architected as a graphical shell on top of MS-DOS: All device and filesystem access was done by making DOS API calls, so all MS-DOS drivers ran natively, and Windows could use them. DOS applications could still be used by just exiting Windows.

Windows/386 2.1 changed this model: it had a real operating system kernel that ran a number of “virtual 8086 mode” (V86) virtual machines side by side, one for the MS-DOS operating system and one per DOS application. The root DOS VM was used by Windows to call out to device drivers and the filesystem, so it was basically a driver compatibility environment running inside a VM. The user could start any number of additional DOS VMs to run DOS applications, and each of these contained a copy of DOS. Windows hooked memory accesses to screen RAM as well as some system calls to route them to the Windows graphics driver or through the “root” DOS VM.

Windows 3.x started using Windows-native drivers that replaced calls into the DOS VM, and had the DOS VM call up to Windows for certain device accesses. The standard Windows 95 installation didn’t use the DOS VM for drivers or the filesystem at all, but could do so if necessary.

DOS was not only a compatibility environment for old drivers and applications, but also the command line of Windows, so when Windows 95 introduced long file names, it trapped DOS API calls to provide a new interface for this functionality to command line tools.

MS-DOS to Windows
Change                    New OS
Running new applications  Native
Running old applications  Virtual machine with old OS
Running old drivers       Virtual machine with old OS
Porting applications      No migration path

DOS to Windows NT

Windows NT was never based on DOS, but still allowed running MS-DOS applications since its first version, NT 3.1. Like non-NT Windows, it runs DOS applications in V86 mode. But instead of running a copy of MS-DOS, using its logic and trapping its device accesses, NT just runs the application in V86 mode and traps all system calls and I/O accesses and maps them to NT API calls. It is not a virtual machine: V86 mode is merely used to provide the memory model necessary to support DOS applications.
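
Such trap-and-translate logic looks roughly like the following sketch. All names are invented and the details are heavily simplified; it is only meant to show the idea of catching software interrupts raised from V86 mode and mapping them to native calls.

```c
#include <stdint.h>

#define OP_INT 0xCD                 /* x86 "INT imm8" opcode */

struct v86_regs {
    uint32_t eax, ebx, ecx, edx;
    uint32_t esi, edi, ebp, esp;
    uint32_t eip, eflags;
    uint16_t cs, ds, es, ss;
};

uint8_t *v86_linear(uint16_t seg, uint32_t off);       /* seg*16+off -> host pointer */
void     nt_handle_dos_call(struct v86_regs *r);       /* map INT 21h to native APIs */
void     v86_reflect_interrupt(struct v86_regs *r, uint8_t n);

/* Invoked on the general protection fault that the CPU raises when V86-mode
   code executes a software interrupt (or privileged I/O). */
void v86_gp_fault(struct v86_regs *r)
{
    const uint8_t *ip = v86_linear(r->cs, r->eip);

    if (ip[0] == OP_INT) {
        r->eip += 2;                        /* skip the two-byte INT instruction */
        if (ip[1] == 0x21)
            nt_handle_dos_call(r);          /* DOS system call: emulate natively */
        else
            v86_reflect_interrupt(r, ip[1]);
    }
    /* ... IN/OUT, CLI/STI, PUSHF/POPF would be decoded the same way ... */
}
```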

One common misconception is that the Windows NT command line is a “DOS box”: The command line interpreter and its support tools are native NT applications, and a Virtual DOS Machine (NTVDM) is not started until a real DOS program is launched from the command line.

DOS to Windows NT
Change                    New OS
Running new applications  Native
Running old applications  API reimplementation
Running old drivers       Not supported
Porting applications      No migration path

Windows 3.1 (Win16) to Windows 95 (Win32)

Since the release of Windows NT 3.1 in 1993, it was clear that it would eventually replace classic Windows, but although it had the same look-and-feel, good Win16 compatibility and decent DOS compatibility, each NT version of its day required quite high-end hardware. The migration from Windows to Windows NT was done by slowly making Windows more like Windows NT, and when the two were similar enough, and even low-end computers were powerful enough to run NT well, switching the users to the new codebase.

The big step to make Windows more like Windows NT was supporting NT’s 32 bit Win32 API: The first step was the free “Win32s” update for Windows 3.1, which provided a subset (thus the “s”) of the Win32 API on classic Windows. Win32s extended the Windows kernel to create a single 32 bit address space for all 32 bit applications (NT had a separate address space for each application). It also provided ported versions of some new NT libraries (e.g. RICHED32.DLL), as well as 32 bit DLLs that accepted the low-level Win32 API calls (“GDI” and “USER”) and forwarded them to the Win16 system (“thunking”).
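
In spirit, such a thunk is a parameter-converting forwarder like the sketch below. All names are invented; the real mechanism additionally involved translating flat 32 bit pointers into 16 bit segment:offset far pointers through selectors.

```c
#include <stdint.h>

typedef uint32_t HDC32;   /* handle as the 32 bit caller sees it */
typedef uint16_t HDC16;   /* handle as the 16 bit GDI expects it */

uint32_t flat_to_far16(const void *p);   /* build a 16:16 far pointer for the data */
int call_gdi16_textout(HDC16 dc, int16_t x, int16_t y,
                       uint32_t far_str, int16_t len);   /* enter the 16 bit GDI */

/* 32 bit export as seen by a Win32 application: narrow every parameter to
   its 16 bit form and forward the call to the existing 16 bit code. */
int TextOut_thunk(HDC32 dc, int x, int y, const char *str, int len)
{
    return call_gdi16_textout((HDC16)dc, (int16_t)x, (int16_t)y,
                              flat_to_far16(str), (int16_t)len);
}
```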

Windows 95 included this functionality by default, ran 32 bit applications in separate address spaces, supported more of the Win32 API and included several 32 bit core applications (like Explorer), but a good chunk of the core system was still 16 bit. With Windows 95, most developers switched to writing 32 bit applications, making them instantly available as native applications on Windows NT.

Windows 3.1 (Win16) to Windows 95 (Win32)
Change                    New CPU mode/bitness
Running new applications  Thunking
Running old applications  Native
Running old drivers       Native
Porting applications      High level of source compatibility

Windows 9X to Windows NT

The second step in the migration from 16 bit Windows to Windows NT was the switch from Windows ME to the NT-based Windows XP in 2001. Windows NT (2000/XP/…) was a fully 32 bit operating system with the Win32 API, but it also allowed running 16 bit Windows applications by forwarding their Win16 API calls to the Win32 libraries (thunking).

The driver models of Windows NT 3.1/3.5/4.0 (“Windows NT Driver Model”) and classic Windows (“VxD”) were different, so Windows 98 (the successor of Windows 95) and Windows 2000 (the successor of Windows NT 4.0) both supported the new “Windows Driver Model”. A single driver could now work on both operating systems, but each OS continued to support its original driver model.

When Microsoft switched the home users to the NT codebase, most current applications, games and drivers worked on Windows XP as well. It was only the system tools that had to be rewritten.

Windows 9X to Windows NT
Change                    New OS
Running new applications  Native
Running old applications  Win16: Thunking; Win32: Native
Running old drivers       Providing the same API for the old OS
Porting applications      High level of source compatibility; providing the same API for the old OS

Windows i386 (Win32) to Windows x64/x86_64 (Win64)

The switch from 32 bit Windows to 64 bit Windows is currently in progress: Windows XP was the first version to be available for AMD64/Intel64, and both Windows Vista and Windows 7 are available as both 32 bit and 64 bit editions. On the 64 bit edition, the kernel is 64 bit native, and so are all libraries and most applications. The 32 bit API is still supported using the “WOW64” (Windows-on-Windows 64-bit) subsystem: A 32 bit application links against all 32 bit libraries, but the low-level API calls it wants to make get translated by the WOW64 DLL into their 64 bit counterparts.
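
What the translation has to do is mostly mechanical: pointers and handles are 32 bits wide on one side and 64 bits wide on the other, so every structure crossing the boundary must be repacked. A hypothetical sketch (the names and the structure are invented, not the actual WOW64 code):

```c
#include <stdint.h>

struct io_request32 {       /* layout as the 32 bit caller built it */
    uint32_t handle;
    uint32_t buffer;        /* a 32 bit pointer */
    uint32_t length;
};

struct io_request64 {       /* layout the 64 bit kernel expects */
    uint64_t handle;
    uint64_t buffer;        /* the same pointer, zero-extended to 64 bits */
    uint32_t length;
};

int64_t syscall64_io(const struct io_request64 *req);

/* Entry point reached by a 32 bit system call: widen all pointer-sized
   fields, then invoke the native 64 bit system call. */
int64_t wow_thunk_io(const struct io_request32 *req32)
{
    struct io_request64 req64 = {
        .handle = req32->handle,
        .buffer = (uint64_t)req32->buffer,  /* 32 bit addresses stay valid in 64 bit */
        .length = req32->length,
    };
    return syscall64_io(&req64);
}
```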

Since drivers run in the same address space as the kernel, 32 bit drivers could not be easily supported on 64 bit Windows, and thus are not. Support for DOS and Win16 applications was dropped on 64 bit Windows.

Windows i386 (Win32) to Windows x64/x86_64 (Win64)
Change                    New CPU mode/bitness
Running new applications  Native
Running old applications  Thunking
Running old drivers       Not supported
Porting applications      High level of source compatibility

Macintosh on 68K to Macintosh on PowerPC

Apple switched their computers from using Motorola 68K processors to Motorola/IBM PowerPC processors between 1994 and 1996. Since the Macintosh operating system, called System 7 at that time, was mostly written in 68K assembly, it could not be easily converted into a PowerPC operating system. Instead, most of the system was run in emulation: The new “nanokernel” handled and dispatched interrupts and did some basic memory management to abstract away the PowerPC, and the tightly integrated 68K emulator ran the old operating system code, which was modified to hook into the nanokernel for interrupts and memory management. So System 7.1.2 for PowerPC was basically a paravirtualized operating system running inside emulation on top of a very thin hypervisor.

The first version of Mac OS for PowerPC ran most of the operating system inside 68K emulation, even drivers, but some performance-sensitive code was native. The executable loader detected binaries with PowerPC code in them and could run them natively inside the same context. Most communication to the OS APIs went back through the emulator. Later versions of Mac OS replaced more and more of the 68K code with PowerPC code.
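
Conceptually, the glue between the two instruction sets works like the sketch below: every cross-ISA function pointer points to a small descriptor that tells a dispatcher which world the code lives in. This is a simplification of what the Mixed Mode Manager did with its routine descriptors; the names here are invented.

```c
#include <stdint.h>

enum isa { ISA_68K, ISA_POWERPC };

struct routine_descriptor {
    uint16_t magic;        /* an illegal 68K instruction: "executing" the
                              descriptor traps into the emulator's dispatcher */
    enum isa isa;          /* which instruction set the target code uses */
    void    *entry;        /* the actual routine */
};

void emulator_run_68k(void *entry);     /* interpret 68K code */
void cpu_call_native(void *entry);      /* jump to PowerPC code directly */

/* Called by the emulator when it hits the magic word: dispatch to the
   right world, so callers never need to know where the callee runs. */
void call_universal(struct routine_descriptor *rd)
{
    if (rd->isa == ISA_POWERPC)
        cpu_call_native(rd->entry);
    else
        emulator_run_68k(rd->entry);
}
```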

Macintosh on 68K to Macintosh on PowerPC
Change                    New CPU
Running new applications  Thunking
Running old applications  Paravirtualized old OS in emulator
Running old drivers       Paravirtualized old OS in emulator
Porting applications      High level of source compatibility

Classic Mac OS to Mac OS X

Just like Microsoft switched from Windows to Windows NT, Apple switched from Classic Mac OS to Mac OS X. While Classic Mac OS was a hacky OS with cooperative multitasking and without memory protection that still ran some of the OS code in 68K emulation, Mac OS X was based on NEXTSTEP, a modern UNIX-like operating system with a completely different API.

When Apple decided to migrate towards a new operating system, they ported the system libraries of Classic Mac OS (“Toolbox”) to Mac OS X, omitting the calls that could not be supported on the modern OS (and replacing them with alternatives), and called the new API “Carbon”. They provided the same API for (Classic) Mac OS 8.1 in 1998, so developers could already update their applications for OS X, while maintaining compatibility with Classic Mac OS. When Mac OS X was introduced in 2001, binaries of “carbonized” applications would then run unmodified on both operating systems. This is similar to the “make Windows more like Windows NT” approach by Microsoft.

But since not all applications were expected to exist as carbonized versions with the introduction of OS X, the new operating system also contained a virtual machine called “Classic” or “Blue Box” in which the unmodified Mac OS 9 was run together with any number of legacy applications. Hooks were installed inside the VM to route network and filesystem requests to the host OS, and window manager integration allowed the two desktop environments to blend almost seamlessly together.

Classic Mac OS to Mac OS X
Change                    New OS
Running new applications  Native
Running old applications  Classic: Virtual machine with old OS; Carbon: Intermediate API for both systems
Running old drivers       Virtual machine with old OS
Porting applications      Intermediate API for both systems

Mac OS X on PowerPC to Mac OS X on Intel

In 2005, Apple announced that they would switch CPUs a second time, this time away from the PowerPC towards the Intel i386 architecture. Being a modern operating system mostly written in C and Objective-C, Mac OS X could be easily ported to i386 – in fact, Apple claims to have maintained i386 versions of the whole operating system ever since the first release.

In order to run legacy applications that had not yet been ported to i386, Apple included the emulator “Rosetta” with the operating system; but this time, it was not tightly integrated into the kernel as with the 68K to PowerPC switch. Instead, the kernel merely gained support for launching an external recompiler with the application as a parameter whenever a PowerPC application was started. Rosetta translated all application code as well as the libraries it linked against, and interfaced to the native OS kernel.
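
The kernel-side change amounts to checking the binary’s architecture at load time, roughly like this sketch (all names and the translator path are invented; this is not Apple’s actual code):

```c
#include <stdint.h>

#define CPU_TYPE_NATIVE  7    /* e.g. i386 */
#define CPU_TYPE_FOREIGN 18   /* e.g. PowerPC */

struct binary_header { uint32_t magic; uint32_t cputype; };

int exec_native(const char *path, char *const argv[]);
int read_header(const char *path, struct binary_header *hdr);

/* Exec-time dispatch: native binaries run directly; foreign ones are handed
   to a user mode translator with the binary as its argument. */
int do_exec(const char *path, char *const argv[])
{
    struct binary_header hdr;
    if (read_header(path, &hdr) != 0)
        return -1;

    if (hdr.cputype == CPU_TYPE_NATIVE)
        return exec_native(path, argv);

    /* A real loader would forward the original argv as well. */
    char *targv[] = { "/usr/libexec/translator", (char *)path, 0 };
    return exec_native(targv[0], targv);
}
```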

Mac OS X on PowerPC to Mac OS X on Intel
Change                    New CPU
Running new applications  Native
Running old applications  User mode emulator
Running old drivers       Not supported
Porting applications      High level of source compatibility

Mac OS X (32 bit) to Mac OS X (64 bit)

The next switch for Apple was the migration from 32 bit Intel (i386) to 64 bit (x86_64) with Mac OS X 10.4 in 2006. Although the whole operating system could have been ported to 64 bit, as was done with Windows, Apple decided to take an approach closer to the Windows 95 one: The kernel stayed 32 bit, but gained support for 64 bit user applications. All applications and drivers on the system were still 32 bit, but some system libraries were also available as ported 64 bit versions. A 64 bit application thus linked against 64 bit libraries, and made 64 bit syscalls that were converted to 32 bit calls inside the kernel.

Mac OS X 10.5 then provided all libraries in 64 bit versions, but the kernel remained 32 bit. OS X 10.6 will be the first version with a 64 bit kernel, requiring new 64 bit drivers.

Mac OS X (32 bit) to Mac OS X (64 bit)
Change                    New CPU mode/bitness
Running new applications  Thunking
Running old applications  Native
Running old drivers       Native
Porting applications      Carbon: Not supported; Cocoa: High level of source compatibility

AmigaOS on 68K to AmigaOS on PowerPC

The Amiga platform ran the same OS on the same 68K CPU architecture throughout the Commodore days between 1985 and 1994, but third-party manufacturers offered PowerPC CPU upgrade boards from 1997 on. The closed source operating system could not be ported to PowerPC by these third parties, so AmigaOS continued to run on the 68K CPU, and an extension in the binary loader detected PowerPC code and handed it off to the other CPU. All system calls then went through a thunking library back to the 68K.
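
In effect, every system call from PowerPC code became a cross-CPU remote procedure call. A hypothetical sketch of the idea (names invented; the real implementations were the PowerUP and WarpOS kernels):

```c
#include <stdint.h>

struct mailbox {
    volatile uint32_t pending;   /* set by the PPC, cleared by the 68K */
    uint32_t function;           /* which OS function to invoke */
    uint32_t args[8];            /* arguments, in 68K format */
    uint32_t result;
};

extern struct mailbox *shared_mbox;   /* memory visible to both CPUs */
void interrupt_68k(void);             /* poke the 68K to look at the mailbox */

/* Runs on the PowerPC: package the call, hand it to the 68K, wait for the
   result. The 68K side executes the real (68K) OS function. */
uint32_t call_68k_os(uint32_t function, const uint32_t *args, int nargs)
{
    for (int i = 0; i < nargs; i++)
        shared_mbox->args[i] = args[i];
    shared_mbox->function = function;
    shared_mbox->pending  = 1;
    interrupt_68k();
    while (shared_mbox->pending)
        ;                              /* busy-wait; a real library would sleep */
    return shared_mbox->result;
}
```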

AmigaOS 4 (2006) is a native port of AmigaOS to the PowerPC, which was a major effort, since a lot of operating system code had to be converted from BCPL to C first. 68K application support is done by emulating the binary code and interfacing it to the native API.

AmigaOS on 68K to AmigaOS on PowerPC (3.x)
Change                    New CPU
Running new applications  Thunking (new CPU)
Running old applications  Native (old CPU)
Running old drivers       Native
Porting applications      High level of source compatibility

AmigaOS on 68K to AmigaOS on PowerPC (4.0)
Change                    New CPU
Running new applications  Native
Running old applications  User mode emulator
Running old drivers       Not supported
Porting applications      High level of source compatibility

Palm OS on 68K to Palm OS on ARM

Palm switched from 68K processors to the ARM architecture with Palm OS 5 in 2002. The operating system was ported to ARM, and the “Palm Application Compatibility Environment” (“PACE”), a 68K emulator, was included to run old applications. But Palm discouraged developers from switching to ARM code and did not even provide an environment in the OS to run native ARM applications. They claimed that most applications on Palm OS did most of their work in native operating system code anyway, so they would not see a significant speedup.

But for applications that were heavily CPU bound, for example because they contained compression or crypto code, Palm provided a way to run small chunks of native ARM code inside a 68K application. These “ARMlets” (later called “PNOlets” for “Palm Native Object”) could be called from 68K code and provided a minimal interface with a single integer for input and output, so the developer had to write the code to pass extra parameters in structs and take care of endianness and alignment. ARM code could neither call back into 68K code, nor could it call operating system APIs directly.
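
The resulting programming model looks roughly like the sketch below: one integer-sized value in, typically a pointer to a structure that the 68K side packed by hand, and one integer out. The names and the argument structure are invented; only the single-integer convention and the byte swapping reflect the description above.

```c
#include <stdint.h>

/* Data crosses the boundary as raw memory laid out by the 68K (big-endian)
   side, so the ARM code must swap each field and respect 68K alignment. */
struct crypto_args {          /* packed by the 68K caller */
    uint32_t src;             /* 68K pointer to the input buffer  */
    uint32_t dst;             /* 68K pointer to the output buffer */
    uint32_t len;
};

static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0xFF00u) |
           ((v << 8) & 0xFF0000u) | (v << 24);
}

/* The single entry point of the native ARM code: one integer-sized value in
   (here used as a pointer to the argument block), one integer out. */
uint32_t pno_entry(void *user_data)
{
    struct crypto_args *a = user_data;
    uint8_t *src = (uint8_t *)(uintptr_t)swap32(a->src);
    uint8_t *dst = (uint8_t *)(uintptr_t)swap32(a->dst);
    uint32_t len = swap32(a->len);

    for (uint32_t i = 0; i < len; i++)   /* stand-in for the real CPU-bound work */
        dst[i] = src[i] ^ 0x5A;
    return len;                          /* the single integer result */
}
```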

Sticking with 68K code for most applications practically meant having a virtual architecture for user mode programs, not unlike Java or .NET. The switch to ARM went mostly unnoticed by developers, and this approach could have allowed Palm to switch architectures again in the future with little effort.

Palm OS on 68K to Palm OS on ARM
Change                    New CPU
Running new applications  Not supported (PNOlet for subroutines)
Running old applications  User mode emulation
Running old drivers       Not supported
Porting applications      Not supported (PNOlet for subroutines)

Summary

Let us summarize the OS and CPU switches and how the different vendors approached their respective problems.

Bitness

Switching to a new CPU mode is the easiest change for an operating system, since old application code can run natively and API calls can be translated. An operating system can either stay in the old bitness and convert calls from new applications for the old system, or move up to the new bitness and convert calls from old applications. There are also two places to hook the calls: An operating system could hook high-level API calls like creating a GUI window, but this is hard to do, since the high-level API is typically very wide, and it is very hard to get a converter for so many calls correct and compatible. Alternatively, the OS can convert low-level system calls. With this solution, the interface is quite narrow. But since all old applications link against the old libraries and new applications against the new libraries, equivalent libraries will end up twice in memory if the user runs old and new applications concurrently.

New CPU mode/bitness
OS          Old mode  New mode  Thunking direction  Thunking level
Windows     16 bit    32 bit    new to old          library
Windows NT  32 bit    64 bit    old to new          kernel
Mac OS X    32 bit    64 bit    new to old          kernel

For the 16 to 32 bit switch in Windows, the operating system stayed 16 bit and converted 32 bit calls into 16 bit calls at the API level. When Windows NT switched from 32 bit to 64 bit, the whole OS became 64 bit, and low-level kernel calls were converted for old applications. The same switch was done differently on Mac OS X: The OS stayed 32 bit, and 64 bit calls were translated at the kernel level.

The solutions of Windows NT and Mac OS X are quite similar, as they both run all 32 bit code with 32 bit libraries, and all 64 bit code with 64 bit libraries, and it is just the kernel that is different. For Windows, this has the advantage of having access to more than 4 GB in kernel mode, as well as some speedup from the new registers in x86_64 long mode, and for Mac OS X, it has the advantage of running old 32 bit drivers unmodified. (In a second step, Mac OS X later switched to a 64 bit kernel.)

CPU

It is harder to switch to a new CPU, because the new CPU just cannot run the old application code any more, and some operating systems cannot be easily adapted to a new CPU.

New CPU
OS          Old CPU   New CPU  Running old apps             Thunking level
CP/M, DOS   8080/Z80  8086     Developer has to recompile
Macintosh   68K       PowerPC  Run OS and app in emulation
Mac OS X    PowerPC   i386     User mode emulation          kernel
Amiga       68K       PowerPC  Dual-CPU thunking            library
Palm        68K       ARM      User mode emulation          library

Mac OS X on Intel and Palm OS on ARM were written in a sufficiently platform-independent way that they could be ported to the new architecture, and both included recompilers that ran the old code. This is the easy way. AmigaOS could not be ported by the CPU board vendors, because they had no access to the source code, so systems had both CPUs: the original operating system code ran on the old CPU, and new applications ran on the new CPU, switching back to the old CPU for system calls.

For Classic Macintosh (68K to PowerPC), the OS source code was available, but could not be ported easily, so it was done similarly to the Amiga, although with a single CPU: Most of the old operating system ran inside emulation, and new applications ran natively, calling back into the emulator for system calls.

DOS was a reimplementation of the old OS by a different company and did not support running old binary code. Instead, developers had to recompile their code.

OS

Switching to a new operating system, but keeping your users and developers is the hardest of all switches.

New OS
Old OS      New OS      Running old apps
CP/M        DOS         Compatible API
DOS         Windows     Virtual machine with old OS
DOS         Windows NT  API emulation
Windows 9X  Windows NT  Compatible API
Mac OS      Mac OS X    Classic: Virtual machine with old OS; Carbon: Compatible API

The approach to take depends on the plans for the old operating system’s API. If the API is good enough to be worth supporting in the new OS, the new OS should just have the same API. This has been the case for the CP/M to DOS and the Windows 9X to Windows NT migrations. In a way, this was also true for Classic Mac OS to Mac OS X, but in this case, Carbon was not the main API of the new OS, but one of three APIs (Carbon, Cocoa, Java; everything but Cocoa is pretty much deprecated today).

If the old API is not worth maintaining on the new OS, but it is important that old applications run very well, it makes sense to run the old operating system in a virtual machine, together with its applications. This was done by Windows to run DOS applications as well as Mac OS X to run old Mac OS applications.

If the OS interface of the old operating system is relatively small and easy, or perfect accuracy is not necessary, the best solution might be API emulation, i.e. hooking the system calls of the old application and mapping them into the new operating system. This was done by Windows NT to run DOS applications, and was only moderately compatible.

Conclusion

It is interesting how different the solutions for all these OS migrations were: hardly any two of them followed the same approach. The reason might be that the situations were all subtly different, and a lot of time was spent working out the perfect solution for each specific problem.

But there is a trend that can be seen: As systems are getting more modern, solutions tend to get less hacky, and migrations tend to happen in many small steps instead of a few big ones. Modern operating systems like Windows NT and Mac OS X can be ported to new architectures quite easily, emulators help running old applications, and thunking can be used to interface with the native syscall interface. Because of the abstractions in a system, an operating system can be ported to a new architecture or a new CPU bitness in steps, with some parts in the new system, and others still in the old system. These abstractions also allow developers to swap out complete subsystems or rearchitect parts of the operating system without much user impact. It is getting more and more convenient for OS developers – but unfortunately, it’s also getting less exciting.


16 thoughts on “The History of OS Migration”

  1. I worked at Metrowerks on the CodeWarrior for Palm OS tools at the time of the Palm OS 5 transition. I did a lot of work on supporting PNOs. A PNO could call back into the core OS: when it was called by the OS, it got both a user pointer passed from the application and a callback pointer which could be used to call 68K OS functions through the trap mechanism. It was awkward, as the app had to reformat the arguments in the 68K endian order, but it was doable and we shipped a large macro library with CW to support its use.

  2. @Pradeep: Yes, OS X was ported to ARM for the iPhone, but there never was a migration path from Mac to iPhone to run old applications or port old applications over.

  3. “AmigaOS 4 (2006) is a native port of AmigaOS to the PowerPC, which was a major effort, since a lot of operating system code had to be converted from BCPL to C first.”

    This is incorrect: BCPL program compatibility was retained up to 4.0, but BCPL code was dropped for C and assembler in 2.0 already (1990 IIRC); in fact the ARP project was already doing this in the 1.3 days (1987-1989-ish). Porting was only a major effort in the sense that two guys normally doing game ports from PC to Mac did it over 6 years or so (starting in 2001, not counting developer betas), given the scope of the project (e.g. AmigaOS didn’t have a TCP stack of its own, no USB support, no parent-company-established 3D support etc.) and due to OS design – chief of which were a historical lack of memory protection (no MMU available in the early days, messaging-based microkernel), porting workarounds for chipset dependencies (e.g. Picasso96 to retarget graphics) and no native virtual memory. The latter has been bolted on, but proper memory protection would need to be incorporated in a complete redesign (and it would be a mistake not to do this platform-independently, or at least switch to x86).

    “Amiga OS could not be ported, because the source code was not available.”

    This is also incorrect: Hyperion was commissioned by Amiga Inc. and AFAIK had full access to the source code for the 4.0 port. The developers of MorphOS (who incidentally also did the phase 5 PPC boards) had no official access to the source code. If you mean ported to the PPC boards in the nineties: it made no sense, at least commercially. Not much technically either, due to the minor speed gap between 68k and PPC back then (060/50 vs. 604/233 at most), which would mostly have been negated by issues such as context switch overhead: the main 68k processor was removed and plugged into the PPC daughterboard, which then interfaced to the motherboard. The lowest level control, including access to the chipset and thus mainboard memory/graphics/storage/the Zorro bus controlling all preexisting expansions such as NICs, remained with the 68k. The daughtercard with the PPC could be extended with graphics and memory of its own, alleviating some of the bottlenecks, but a full porting effort didn’t make sense until the PPC clone market started up. Sadly this was killed off by Apple, so a full port had to wait another 10 years (Motorola was in fact ready to support an AmigaOS port to PPC in 1996, but this also went under with Escom).

  4. Sorry, Off topic, but I would like to subscribe to your Atom feed, but it doesn’t work when I try. I use Opera’s built in feedreader. I’ve never had problems with Opera 9.5 or later’s feedreader before. I suspect the XML might be very large due to having the entire contents of your posts in it.

  5. It seems that the problem with the feed is that the & in the isn’t properly encoded.

  6. There was also a possible architecture transition for Windows that happened but was somewhat stillborn: Windows 2000 and later exist in a special 64-bit “Itanium Edition” form. Initially, support for x86 programs was provided by a hardware ‘v86’-style user mode with WOW64 syscall thunking when required. Newer Itanium processors drop support for hardware execution of x86 code, and the work is done instead by the IA32EL, a binary blob recompiler/emulator that sits in user mode. The WOW64 thunking is still required.

    An interesting factor in all of this was page size. When the IA64 port of NT was done, various page sizes were tried (Itanium is flexible with respect to PTE format and address translation), and they settled on 8K as more efficient. It was thought at the time that 4K pages were required for compatibility with 32-bit apps (Itanium was the future for everything then, so they wanted to be as compatible as possible). So the IA64 Windows kernel supports splitting single WOW64 page mappings into two regions with differing protections. There’s a bunch of nasty little ifdefs all around the memory manager for this.

  7. I’d second what aTmosh said on the Amiga case. Additionally I’d add that on OS4 + classic hardware, running old drivers actually *is* supported unless they use the DMA capabilities of some hardware, AFAIK.

    Additionally I’d include MorphOS in the story, which is a fully API/ABI compliant PowerPC replacement for AmigaOS. On some levels it’s even more compatible with the old AmigaOS 3.x (and some de-facto standard enhancements to it) than AmigaOS 4.x, while offering very similar advancements and new features over AOS 3.x, and it even predated and influenced AmigaOS 4.x in most of the 68k – PPC integration features and style. And as aTmosh said, it was created by the people who brought PowerPC to the Amiga in the first place.

  8. Hyperion didn’t have access to the AmigaOS source code. No one has, as most of the sources are lost 🙁

  9. @Bring:

    Where on earth did you get that? As part of the IP, the source code represents value; I’m sure it’s available as long as you wave a big enough cheque and have enough resources to negotiate the legal quagmire of disputed trademark/IP ownership changes. The biggest difficulty is probably drilling down to whoever is authorized to sell/license it. Just because some hobby-project-scale developers can’t get it (officially) doesn’t mean it’s ‘completely lost’ a la Doctor Who.

    Even if it were, the APIs are well documented, Commodore/CATS themselves did an excellent job on that (ROM Kernel Reference Manual etc.), plus you have superb third party books like the Guru book by Ralph Babel (especially if the new 3.1/4.0 edition is ever published). AmigaOS is probably one of the most extensible and hacked OSes around, there are tons of people with intimate knowledge of the innards that could even make source superfluous.

  10. @Bring

    You are wrong. The full OS3.1 sources are handled by Olaf Barthel, who is part of the OS4 development team. It’s some parts of the OS3.5 and OS3.9 sources that are lost or in other ways not available to the OS4 dev team.

    @aTmosh

    Regarding BCPL: AmigaOS developers have said that they had to remove a lot of BCPL code for AmigaOS 4. AmigaOS 3.x was also compiled with (if I remember correctly) seven different compilers and at least four different languages. Hyperion changed that to mainly one language, C, with some smaller parts done in PPC assembler, and one compiler, GCC. It would probably have been faster and easier to go the MorphOS way and start fresh than to port. Commodore probably stopped using BCPL quite early but still left a hell of a lot of this code in parts that were not changed, or just had small changes done to them, over the years.

    I wouldn’t call the virtual memory “bolted on” as you put it. But yes, it was added as a feature in AmigaOS 4.1.

  11. @Samwel:

    I used “bolted on” because it’s not transparent (I consider it a hack). AFAIK you have to explicitly use the new API for allocating memory; old/proprietary software for which source is not available will not benefit, which is unfortunately most of the software catalogue.

    Also,

    “System 7.1.2 for PowerPC was basically a paravirtualized operating system running inside emulation”

    I remember running a stripped down MacOS 8 under ShapeShifter on my Amiga 4000 with 68060, and a full System 7.1; the first PPC MacOS was 8.1, not 7.1.

    Incidentally, an Amiga with a 68060 expansion card could run MacOS faster than any real 68k Macintosh since the fastest 68k Macs had 68040 CPUs 🙂
