Re: adding swap workarounds oom - was: Re: Out of Memory: Killed process 16498 (java).

On Thu, Mar 02 2006, Andy Chittenden wrote:
> 
> > On Wed, Mar 01 2006, Andy Chittenden wrote:
> > > > On Wed, Mar 01 2006, Andi Kleen wrote:
> > > > > On Wednesday 01 March 2006 15:34, Jens Axboe wrote:
> > > > > 
> > > > > 
> > > > > > > It shouldn't end up with more, only with less.
> > > > > > 
> > > > > > Sure yes, but if that 'less' is still more than what the
> > > > > > driver can handle, then there's a problem.
> > > > > 
> > > > > The driver needs to handle the full list it passed in. It's quite
> > > > > possible that the iommu layer is unable to merge anything.
> > > > > 
> > > > > 
> > > > > This isn't the block layer based merging where we guarantee
> > > > > to be able to merge in advance - just lazy after the fact merging.
> > > > 
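(Aside, for readers following along: the contract Andi describes is the
standard pci_map_sg() pattern, where the driver hands in nents entries and
must be able to program anything from one up to nents entries coming back.
A minimal sketch of such a consumer, using a hypothetical
program_hw_segment() helper; this is illustrative only, not code from
drivers/ide:)

#include <linux/pci.h>
#include <linux/scatterlist.h>

/* hypothetical helper that programs one hardware segment */
extern int program_hw_segment(dma_addr_t addr, unsigned int len);

int example_map_and_program(struct pci_dev *pdev,
                            struct scatterlist *sg, int nents)
{
        int i, mapped;

        mapped = pci_map_sg(pdev, sg, nents, PCI_DMA_FROMDEVICE);
        if (!mapped)
                return -EIO;

        /*
         * 'mapped' may be anywhere from 1 to nents: merging, if it
         * happens at all, is done lazily by the iommu layer with no
         * guarantee of success.
         */
        for (i = 0; i < mapped; i++) {
                if (program_hw_segment(sg_dma_address(&sg[i]),
                                       sg_dma_len(&sg[i]))) {
                        pci_unmap_sg(pdev, sg, nents, PCI_DMA_FROMDEVICE);
                        return -EIO;
                }
        }
        return 0;
}
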
> > > > Yes, I realize that. I wonder if the bounce patch screwed something up
> > > > that destroys the block layer merging/accounting. We'll know when Andy
> > > > posts results that dump the request as well.
> > > 
> > > And here's the dmesg o/p (I had to gather it from /var/log/kern.log as
> > > there was so much output):
> > 
> > Thanks!
> > 
> > > hda: DMA table too small
> > > ide dma table, 256 entries, bounce pfn 1310720
> > > sg0: dma=830e800, len=4096/0, pfn=1202633
> > > sg1: dma=830f800, len=4096/0, pfn=1202590
> > > sg2: dma=8310800, len=4096/0, pfn=1202548
> > > sg3: dma=8311800, len=4096/0, pfn=1202506
> > 
> > Alright, Andi, take a look at this then. We have the same thing again,
> > a mid-page start of the sg entries. The block layer has done no merging;
> > it's 256 separate segments. The pci_map_sg() output is the same, except
> > that the IDE driver now needs to split the entries. The corresponding rq
> > entries for the first four above are:
> > 
> > > request: phys seg 256, hw seg 256, nr_sectors 2048
> > >   bio0: bytes=4096, phys seg 1, hw seg 1
> > >     bvec0: addr=ffff8101259c9000, size=4096, off=0
> > >   bio1: bytes=4096, phys seg 1, hw seg 1
> > >     bvec0: addr=ffff81012599e000, size=4096, off=0
> > >   bio2: bytes=4096, phys seg 1, hw seg 1
> > >     bvec0: addr=ffff810125974000, size=4096, off=0
> > >   bio3: bytes=4096, phys seg 1, hw seg 1
> > >     bvec0: addr=ffff81012594a000, size=4096, off=0
> > 
> > these here. Totally plain 4KB bios strung to the request, no funky
> > offsets or anything. 256 hardware and physical segments, for a total of
> > a 1MB request.
> > 
> > So what is going wrong? Why does the pci mapping output look so
> > "strange"?
> > 
> > -- 
> > Jens Axboe
> > 
> 
> So, what's the story? Apart from the performance implications, one thing
> that does concern me is whether this could cause corruption on disk, as
> it appears a cross-page write is being attempted.

I'm waiting for Andi to render an opinion on the problem. It should have
no corruption implications; the PIO path will handle arbitrarily large
requests. I'm assuming the mapped sg table is correct, just odd-looking
for some reason.
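
To put rough numbers on what looks odd in the dump above: the bvec
addresses are page aligned (off=0), yet the mapped bus addresses all sit
0x800 into a page, and each entry ends exactly where the next one starts.
A back-of-the-envelope sketch, assuming the usual rule that a single
bus-master IDE PRD entry may not cross a 64KB boundary (illustrative
user-space arithmetic, not code from ide-dma.c):

#include <stdio.h>

int main(void)
{
        /* first four mapped entries from the dmesg output above */
        unsigned long long dma[] = { 0x830e800ULL, 0x830f800ULL,
                                     0x8310800ULL, 0x8311800ULL };
        unsigned int len = 4096, i, prds = 0;

        for (i = 0; i < 4; i++) {
                unsigned long long start = dma[i], end = start + len;

                /* page-aligned buffers end up at a mid-page bus address */
                printf("sg%u: offset into page 0x%03llx, ends at 0x%llx\n",
                       i, start & 0xfff, end);

                /* an entry straddling a 64KB boundary needs two PRDs */
                prds += ((start >> 16) == ((end - 1) >> 16)) ? 1 : 2;
        }
        printf("PRD entries needed for these four sg entries: %u\n", prds);

        /*
         * Each entry ends exactly where the next begins, i.e. the mapping
         * is contiguous but unmerged.  If (hypothetically) all 256 entries
         * continue that pattern, one in every sixteen crosses a 64KB
         * boundary, so ~272 PRDs are needed - more than the 256-entry
         * table, which would explain "hda: DMA table too small".
         */
        return 0;
}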

-- 
Jens Axboe

