On Sun, 2011-03-13 at 03:57 -0700, Suvayu Ali wrote:
> Lately I have been working with a lot of PDFs, and most of these PDFs
> are documents about a thousand pages long (but only ~20-30 MB in file
> size).

For what it's worth, the file size of formats like PDF and JPEG isn't a
very good indication of data size. They can, and usually do, contain
compressed data which must be expanded before it can actually be used,
and the expanded size is *much* bigger than the compressed file size.

It's the expanded size that has to fit into memory. A program can either
expand the whole thing when it opens the file, or expand just the
portion of it that you're currently viewing. Expanding the lot uses a
lot of memory (whether that's RAM or swap); expanding only the current
bit means you have to wait for the next bit to decode as you scroll,
which soon gets annoying when you're trying to find something in a big
document.

If you're continually dealing with large documents, maybe you want even
more RAM.

-- 
[tim@localhost ~]$ uname -r
2.6.27.25-78.2.56.fc9.i686

Don't send private replies to my address, the mailbox is ignored. I
read messages from the public lists.

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
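
To see how big the gap between on-disk and in-memory size can get, here's
a quick sketch in Python using zlib (PDF's FlateDecode filter uses the
same DEFLATE compression). The repeated drawing commands are a made-up
stand-in for real page content, which tends to be similarly repetitive:

```python
import zlib

# Hypothetical stand-in for a PDF content stream: path/drawing operators
# repeat heavily across pages, so they compress extremely well.
expanded = b"0 0 m 100 100 l S\n" * 50_000   # ~900 KB once decoded

# What actually sits in the file on disk.
compressed = zlib.compress(expanded)

print(f"on-disk (compressed) size: {len(compressed):>9,} bytes")
print(f"in-memory (expanded) size: {len(expanded):>9,} bytes")
print(f"expansion factor:          {len(expanded) / len(compressed):.0f}x")
```

Real documents won't compress that dramatically, but the point stands:
the viewer has to hold (some of) the expanded form in memory, and the
file size on disk tells you little about how much that is.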