Allocating large blocks of memory on 64-bit Linux

I apologise for this slightly off-topic message, but I believe it can
best be answered here, and hope the question may be interesting.

Many libraries have some kind of dynamically sized container (for
example, C++'s std::vector). When the container is full, a new block of
memory, typically double the original size, is allocated and the old
data is copied across.
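
For concreteness, a minimal sketch in C of that doubling strategy might
look like the following (the struct and function names are my own, not
from any particular library):

#include <stdlib.h>
#include <string.h>

/* Hypothetical growable byte buffer, just to illustrate the usual scheme. */
struct grow_buf {
	char   *data;
	size_t  len;	/* bytes in use */
	size_t  cap;	/* bytes allocated */
};

/* Append n bytes, doubling the allocation whenever it runs out of room.
 * realloc() copies the old contents into the new, larger block. */
static int grow_buf_append(struct grow_buf *b, const void *src, size_t n)
{
	if (b->len + n > b->cap) {
		size_t new_cap = b->cap ? b->cap * 2 : 4096;
		char *p;

		while (new_cap < b->len + n)
			new_cap *= 2;
		p = realloc(b->data, new_cap);
		if (!p)
			return -1;
		b->data = p;
		b->cap  = new_cap;
	}
	memcpy(b->data + b->len, src, n);
	b->len += n;
	return 0;
}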

On a 64-bit architecture, where the address space is massive, it seems
at first glance that a sensible thing to do might be to start with a
buffer of 4 KB and then, when this fills up, jump straight to something
huge, like 1 MB or even 1 GB, since the address space is effectively
infinite compared to the physical memory. Obviously most of this buffer
may never be written to, as the object may never grow large enough to
fill it.
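
Concretely, I am imagining something like the sketch below, which relies
on Linux only backing anonymous pages with physical memory when they are
first touched (the 1 GiB figure and the use of MAP_NORESERVE are just my
own choices for illustration):

#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

#define RESERVE_SIZE (1UL << 30)	/* 1 GiB of virtual address space */

int main(void)
{
	char *buf;

	/* Reserve a huge range up front; no physical pages are used yet. */
	buf = mmap(NULL, RESERVE_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Only the pages actually written to get backed by physical memory. */
	memset(buf, 0xab, 64 * 1024);	/* touches 16 pages, not 1 GiB */

	munmap(buf, RESERVE_SIZE);
	return 0;
}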

What is the overhead of allocating memory which is never used? Is this
a sensible course of action on 64-bit architectures?
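
For what it's worth, one rough way to observe this (my own ad-hoc test,
not a proper benchmark) would be to compare the virtual size against the
resident set size from /proc/self/statm before and after touching part
of a large mapping:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void print_statm(const char *label)
{
	unsigned long vsz = 0, rss = 0;		/* both in pages */
	FILE *f = fopen("/proc/self/statm", "r");

	if (f) {
		if (fscanf(f, "%lu %lu", &vsz, &rss) != 2)
			vsz = rss = 0;
		fclose(f);
	}
	printf("%-15s vsz=%lu pages, rss=%lu pages\n", label, vsz, rss);
}

int main(void)
{
	char *buf;

	print_statm("start");

	buf = mmap(NULL, 1UL << 30, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	print_statm("after mmap");	/* vsz jumps by ~256k pages, rss barely moves */

	memset(buf, 1, 1UL << 20);	/* write 1 MiB out of the 1 GiB */
	print_statm("after touching");	/* rss grows by only ~256 pages */

	return 0;
}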

Thank you
