Most modern operating systems (excluding some Windows varieties, but including Linux) have a pre-emptive, segmented, page-faulting virtual memory model. Let's break down that jargon, starting at the end and working backwards.
First, there's memory model. The Linux kernel has a clear plan for how memory is used, and it enforces that plan. You can't write a Linux program - except for the special case of modifying the kernel itself - that steps around it. In effect, the kernel is a memory policeman. You cannot, for example, write a TSR (Terminate-and-Stay-Resident) program under Linux, the way you can under DOS or Windows. So a C program is stored in memory exactly the way the kernel says it should be.
Then, there's virtual memory. The kernel's policing strategy includes virtual memory. One aspect of this is that the kernel uses both RAM and disk space to store running programs, so disk space can look like memory. Except for features like mmap(), this aspect of virtual memory is best avoided, because performance drops quickly once the kernel starts spilling to disk to stop RAM from filling up. The second aspect of VM is much more important. Rather than handing out fixed chunks (pages) of real memory to programs, the kernel arranges that every program has access to a contiguous space of pretend memory. When a program needs to use a piece of pretend memory, the kernel slides a page of real memory underneath.
The details of which real memory your program happens to use are different every time it's run. The pretend memory that looks real to the program is all you need to care about. It's a perfect and unbreakable illusion, because the kernel's in control. On Windows, a program can scratch at real memory in a direct way, for all kinds of risky effects. That can't be done so easily on Linux, and that's one reason that Linux is mostly free of viruses.
Next comes page faulting. It's not enough that your program must stick with the memory illusion that the kernel provides. The kernel also demands to be told whenever your program strays into a piece of pretend memory that doesn't yet have real memory underneath it. Page faults are a feature of most modern CPUs: the chip interrupts Linux when you touch a page "over there" that isn't currently backed by real memory. This lets Linux keep track of the behaviour of your program, rather than just letting it run rampant inside the memory space it's given.
Let's look at segmented. The memory the Linux kernel gives your program is grouped into several segments. If you run the "size" program on your compiled C program, or look carefully at the columns of "ps", you can see the sizes of some of them. The main division is between "text" and "data": "text" is where the logic of your program is held, "data" is where its variables live. The two don't mix, so the kernel also prevents you from writing "self-modifying code" (you can do that under DOS and some Windows versions). This is another source of protection against viruses and badly written programs. The segmented approach is also unbreakable. Your C program, when compiled, is usually in a file format called ELF. When your program runs, the kernel puts different parts of the ELF file into different memory segments, keeping them separate.
Then comes pre-emptive. Once your program is running, the kernel interrupts it regularly - a timer tick takes the CPU away, and the scheduler decides who gets it next. This is a final security measure that prevents any one program from "taking over" the CPU, and with it the memory or any other part of the hardware.
This was first published in June 2004