Hi Mike, I've been following this thread with some interest. Mike McCarty wrote:
> Well, I don't especially. And I'm open to suggestions. My point is to keep the variable-sized stuff, which can grow without bound due to software defects or boo-boos by the operator (read: me), from contaminating the rest of the system.
That's an interesting consideration, but unless you've written all the software you're running (and even if you have), there's no guarantee that some berserk program won't fill up /, or /usr, or /var, or /etc, or /home... you get the point.
Other than making a separate partition for every single directory on your machine, there's no way to prepare for that contingency.
It's not uncommon these days for systems to be configured with one huge root filesystem, since whatever process needs the space then has it no matter where it writes. The trade-off is that it makes tracking down where the space went a bit more difficult.
However, if you're really concerned about your home-grown application going crazy and filling up the disk, you should obviously either put it in its own partition or put it in a filesystem that can fill up without too much negative impact on the rest of the system (/home or /usr/local may be good choices in this regard).
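If you do give it its own partition, mounting it is just one more /etc/fstab line. The device name and mount point below are made up for illustration; substitute whatever your second disk actually shows up as:

    # hypothetical /etc/fstab entry: second IDE disk, first partition, dedicated to /usr/local
    /dev/hdb1   /usr/local   ext3   defaults   1 2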
> All significant software has defects. A file system is a significant piece of software. I don't want to push the limits on the FS where the OS is contained. I don't want an unbootable system. I want one partition on a separate disc to contain that stuff. But I see that /home and /tmp (and /var to a lesser extent, though /var/spool is pretty much vulnerable) both need mount points.
True, but remember that short of massive hardware failure or total filesystem corruption, nothing that happens inside a filesystem will prevent you from booting off the CD/DVD and fixing it. And if you do hit one of those, how full the filesystem was is the least of your worries.
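To make that concrete, a cleanup from the rescue environment usually amounts to something like the following; the device name and the offending file are purely hypothetical:

    # from the rescue CD/DVD shell; /dev/hda2 is just an example root partition
    mount /dev/hda2 /mnt
    du -sm /mnt/* | sort -rn | head    # see which top-level directory ballooned
    rm /mnt/var/log/huge.log           # hypothetical offender
    umount /mnt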
> I don't want two, three, five, you count 'em, partitions on the disc, because then I'd have to know in advance how much to allocate to each, and each would always be larger than it would have to be. I'd rather have one partition, and let the various pieces dynamically get resized as needed.
Unless this is some mission-critical system, I wouldn't worry about it. You have to put a stake in the ground at some point. Worst case, if you find your partitioning scheme doesn't work, you can dump the filesystems off to backup media, repartition the disk(s), and restore them.
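Roughly, with dump/restore on ext3, that would look like this; the device name, the dump file location, and the target filesystem are all placeholders:

    dump -0 -f /backup/home.dump /home        # level-0 dump of /home to backup media
    # ... repartition with fdisk, then rebuild and restore:
    mkfs.ext3 /dev/hdb1
    mount /dev/hdb1 /home
    cd /home && restore -rf /backup/home.dump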
Dynamic, on-the-fly filesystem resizing is the domain of LVM, but like you I'd rather spend a couple of hours repartitioning the system than worry about getting that to work.
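For completeness, though, once LVM is in place the resize itself is only a couple of commands. The volume group and logical volume names below are the stock Fedora defaults and may not match yours, and depending on your kernel you may need to unmount the filesystem before growing it:

    lvextend -L +5G /dev/VolGroup00/LogVol00    # grow the logical volume by 5GB
    resize2fs /dev/VolGroup00/LogVol00          # grow the ext3 filesystem to fill it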
If I were in your situation, and /home were big enough to handle the user home directories and your backup application, I'd just make a /home/tmp and use that in my script. There's nothing special about /tmp except that some apps use it; others use /var/tmp or whatever they're coded to do.
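In the script that would be a couple of lines at the top; the variable and file names here are only an example:

    mkdir -p /home/tmp
    TMPDIR=/home/tmp                             # most well-behaved programs honor TMPDIR
    export TMPDIR
    WORKFILE=`mktemp $TMPDIR/backup.XXXXXX`      # scratch file lands in /home/tmp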
I think a sane place to start would be to partition your disk with a small /boot partition at the start, a large / partition, a swap partition, and then either one big /home partition or separate /home and /usr/local for the stuff you want to play with. Or as you're thinking, putting /home and/or /usr/local on partition(s) on a separate disk entirely. Seems entirely reasonable to me. If need be I can show you how I have similar systems laid out.
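Just to make that concrete, a hypothetical fstab for the scheme above might read as follows. The device names are invented (hda for the first disk, hdb for the second), and the sizes I'd use are roughly 100MB for /boot, swap sized to taste, and the rest split between / and /home:

    # small /boot at the front of the first disk, then / and swap
    /dev/hda1   /boot        ext3   defaults   1 2
    /dev/hda2   /            ext3   defaults   1 1
    /dev/hda3   swap         swap   defaults   0 0
    # second disk for the stuff you want to play with
    /dev/hdb1   /home        ext3   defaults   1 2
    /dev/hdb2   /usr/local   ext3   defaults   1 2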
Also, note that on Fedora /tmp is an ordinary on-disk directory: it is not in RAM, and it is not automatically wiped out at boot time. The RAM-backed /tmp that gets cleared on every reboot is a Solaris convention (swap-backed tmpfs), not something Fedora does by default.
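If you ever wanted that Solaris-style behavior on Linux, it's just a tmpfs mount; the size cap below is an arbitrary example, and of course anything in /tmp then vanishes on reboot:

    # hypothetical /etc/fstab entry for a RAM/swap-backed /tmp capped at 512MB
    tmpfs   /tmp   tmpfs   size=512m   0 0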
-Mike