On Thu, 19 Aug 2010, JD wrote:

> On 08/19/2010 02:15 PM, Michael Hennebry wrote:
>> On Thu, 19 Aug 2010, JD wrote:
>>
>>> The problem comes, as Michael explains, when a process needs a large
>>> "physically contiguous" chunk of memory: it might not be available.
>>> That said, requests for physically contiguous memory are usually only
>>> needed when mapping a very large number of DMA pages for
>>> direct physical I/O.
>>> Otherwise, a process itself does not need physically contiguous
>>> pages. Only the virtual space allocated to that malloc or large buffer
>>> declaration in a program is contiguous.
>>
>> Why would malloc or a large buffer declaration
>> require physically contiguous memory?
>
> It is done in a driver on the process's behalf when doing direct physical
> I/O. Typically, such blocks of physically contiguous memory are set
> aside during boot.
> I have also seen special embedded Linux drivers that provide an ioctl
> to let a process obtain a set of physically contiguous pages and map
> that space into user virtual space. This is done for performance
> reasons, to reduce copying between user space and kernel space when
> large amounts of data need to be moved.
> This is not a new idea; it has been around for many years. I first
> saw it in Linux back in 1998/1999.

Perhaps I misunderstood. Do both of the following necessarily require
physically contiguous memory?

char fred[69000];
char *greg=malloc(96000);

Would they sometimes require physically contiguous memory?

-- 
Michael hennebry@xxxxxxxxxxxxxxxxxxxxx

"Pessimist: The glass is half empty.
Optimist: The glass is half full.
Engineer: The glass is twice as big as it needs to be."

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines