Re: What is the consensus on the best partition scheme and size?/Keeping home separate

On Thu, Oct 26, 2006 at 09:15:50AM +0200, Lars E. Pettersson wrote:
> On 10/25/2006 11:57 PM, Bill Rugolsky Jr. wrote:
> >I like /tmp to be fast, and /var/tmp to be large, so I create a large swap
> >and use tmpfs for /tmp:
> 
> I have never thought about this. Could you define large? What is 
> generally needed in a situation where /tmp is tmpfs?

It's not a great idea on boxes with little memory, but the typical
new box these days has 512M-1G of RAM or more.  I've long been a fan
of loading up Linux boxes with RAM, since Linux makes excellent use of it.

I generally have swap of 1-3 times RAM.  tmpfs defaults to using half of
RAM, but I sometimes reduce that to a quarter of RAM.  So on a 1GB box,
that's a "mere" 256MB.  Which means, for example, that a CD image will
not fit there.
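For concreteness, capping tmpfs at a quarter of RAM is a one-line mount
option.  A sketch of the /etc/fstab entry (the 25% figure is just my
habit, not a recommendation):

```
# Mount /tmp as tmpfs, capped at 25% of RAM, with the sticky
# world-writable mode that /tmp requires
tmpfs  /tmp  tmpfs  size=25%,mode=1777  0  0
```

The size= option takes either a percentage of RAM or an absolute value
such as size=256m.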

I find that this works well on headless servers, but less well on the
desktop, because desktop apps can create large temporary files that are
not necessarily short-lived, and any space used in tmpfs competes with
the page cache for those bloated desktop apps.  It's possible to set
TMPDIR=/var/tmp in your GUI session environment, but that rather defeats
the purpose, and not every app honors it.
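For what it's worth, redirecting the well-behaved apps is just an
environment variable; a minimal sketch, using mktemp only to demonstrate
the effect:

```shell
# Point programs that honor TMPDIR at disk-backed /var/tmp for this session
export TMPDIR=/var/tmp

# mktemp(1) honors TMPDIR when no template is given, so the temporary
# file lands under /var/tmp rather than /tmp
f=$(mktemp)
echo "$f"
rm -f "$f"
```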

Why use tmpfs?

Traditionally, Linux Ext2 /tmp performance was great due to the
asynchronous meta-data writeback of Ext2, so there was no reason to do
anything fancy.  Files could come and go in /tmp, and the typical
effect was to dirty some blocks that got written back every thirty
seconds on average.

Journaling changed this, because Ext3 (and Reiserfs) in the default
data=ordered mode rather eagerly flushes data to disk ahead of meta-data
updates (such as extending the file length of a freshly created file).
The default Ext3 commit interval is five seconds, so if temporary data
persists for a few seconds, a bunch of blocks are flushed to disk:
the allocation bitmap, file data, the file inode, the directory, and
the directory inode.  Deleting the file is going to again require
flushing the allocation bitmap, the file inode, and the directory.
This is exacerbated by having atime updates enabled on /tmp.
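If /tmp stays on Ext3, the atime and commit-interval behavior described
above can at least be tuned in /etc/fstab; the device name below is
hypothetical:

```
# Skip atime updates on /tmp, and stretch the Ext3 commit interval from
# the default 5 seconds to 30 (i.e. up to 30 seconds of data at risk
# on a crash -- usually acceptable for /tmp)
/dev/hda7  /tmp  ext3  noatime,commit=30  1  2
```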

I first took note of and understood this while using an early release of
Ext3 for Linux 2.2 (which at first only operated in data=journal mode) on
a laptop with a pathetically slow hard disk, but plenty of RAM.

At the time I toyed with the idea of making /tmp an Ext2 filesystem,
and just running a fresh mkfs on each boot.
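That idea amounts to a couple of lines in a boot-time script; a sketch
of an rc.local-style fragment, with a hypothetical device name (this
destroys whatever was on the partition, which is the point):

```
# Recreate /tmp as a pristine Ext2 filesystem on every boot
mkfs -t ext2 -q /dev/hda7
mount -t ext2 /dev/hda7 /tmp
chmod 1777 /tmp
```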

When Ext3 was ported to Linux 2.4, using tmpfs seemed the natural choice.

As I mentioned, whether putting /tmp in tmpfs actually performs better
depends on hardware and workload; the traditional Unix tools that
dominate jobs on our servers generally have a small memory footprint
(hundreds of kilobytes) and operate on streams of data that are quickly
consumed, e.g., sorting two files before comm/join/diff.  In that case,
there is little point in flushing the data to disk.

On the other hand, Ext3 does block-based journaling (not logical journaling),
which has the virtue of only writing a block once, regardless of how
many times it was modified/reused during the commit interval.  So it
is possible that if files in /tmp are very short-lived, the overhead
is not that great, due to reuse.

These days one can use Jens Axboe's "blktrace" and "fio" tools to get
a good handle on real-world performance.
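As a rough illustration, a small fio job file can approximate /tmp-style
churn; all parameters below are illustrative, not a tuned benchmark:

```
; fio job file: small random writes under /tmp, to compare tmpfs
; against an Ext3-backed /tmp
[tmp-churn]
directory=/tmp
rw=randwrite
size=8m
bs=4k
runtime=10s
```

Running the same job against /tmp on tmpfs and again on an Ext3 mount,
with blktrace watching the disk in the latter case, shows the journaling
traffic directly.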

Regards,

	Bill Rugolsky

