MZ> o. :P 50% is a considerable amount. anyway, good start. ;)

Indeed, it is a considerable performance hit, but I haven't yet done a
serious performance analysis.

MZ> pure read or read and write mixed?

Actually, IIRC that was write performance only (I used bonnie++ to get
the numbers). I believe read performance is generally good for large
blocks, and if a block is already mapped for write, you get the reads
for free. I really should resurrect my older tests and see if I can
produce something more current :)

My previous numbers were gathered with an additional step of
periodically rewriting the device-mapper table, using dm-linear to
statically map the blocks that had been mapped for writing. I think
that with a better data structure in dm-userspace (i.e. better than a
linked list), performance will improve without the need to constantly
suspend and resume the device to change tables.

MZ> if this is the scenario, then may be more aggressive mapping can
MZ> be used here.

Right, so the userspace side may be able to improve performance by
mapping blocks in advance. If it expects the next several blocks to
be written sequentially, the userspace app can push mappings for them
in the same message as the response for the initial block, which
would eliminate several additional round trips (a sketch of what such
a batched reply might look like is below).

Perhaps something could be done with certain CoW formats that would
let the userspace app push a bunch of mappings it believes might be
needed, and then have the kernel report back later which were
actually used. In that case, you could reclaim the space in the CoW
device that you incorrectly predicted would be needed.

MZ> u might have interest on this. some developers are working on a
MZ> general scsi target layer that pass scsi cdb to user space for
MZ> processing while keep data transfer in kernel space. so both of u
MZ> will meet same overhead here. so 2 projects might learn from each
MZ> other on this.

Great!

MZ> ps, trivial thing, the userspace_request is frequently used and
MZ> can use a slab cache.

Ah, OK, good point... thanks ;) (A minimal sketch of that is below,
too.)

-- 
Dan Smith
IBM Linux Technology Center
Open Hypervisor Team
email: [email protected]
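To make the batched-mapping idea concrete, here is a hypothetical
message layout (not the actual dm-userspace ABI; the struct names,
MAX_PUSHED_MAPS, and alloc_cow_block() are all illustrative) showing
how a userspace CoW tool could answer one map request and
speculatively push mappings for the following blocks in the same
reply:

/*
 * Hypothetical sketch only, not the real dm-userspace message
 * format: the first entry answers the block the kernel asked about,
 * and the remaining entries are speculative mappings for the next
 * few blocks, saving one kernel/userspace round trip per block.
 */
#include <stdint.h>

#define MAX_PUSHED_MAPS 8		/* illustrative limit */

extern uint64_t alloc_cow_block(void);	/* hypothetical CoW allocator */

struct map_reply {
	uint64_t org_block;		/* original device block */
	uint64_t new_block;		/* remapped (CoW) block */
};

struct map_reply_batch {
	uint32_t count;			/* entry 0 answers the request,
					 * entries 1..count-1 are
					 * speculative */
	struct map_reply maps[MAX_PUSHED_MAPS];
};

/* Answer 'block' and speculatively map the next n-1 blocks. */
static void build_reply(struct map_reply_batch *batch,
			uint64_t block, uint32_t n)
{
	uint32_t i;

	if (n > MAX_PUSHED_MAPS)
		n = MAX_PUSHED_MAPS;
	batch->count = n;
	for (i = 0; i < n; i++) {
		batch->maps[i].org_block = block + i;
		batch->maps[i].new_block = alloc_cow_block();
	}
}

If the kernel later reports which speculative entries were never hit,
the tool can return those alloc_cow_block() allocations to the free
pool, which is the reclaim idea mentioned above.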
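And a minimal sketch of the slab-cache suggestion, using the current
kmem_cache API (older 2.6 kernels passed ctor/dtor arguments to
kmem_cache_create()); the fields shown for struct userspace_request
are illustrative, not the real ones:

/*
 * Back frequently-allocated request objects with a dedicated slab
 * cache instead of kmalloc()/kfree(), so the allocator can keep a
 * pool of warm, correctly-sized objects.
 */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

struct userspace_request {
	u64 block;			/* block being queried */
	struct list_head list;		/* outstanding-request list */
};

static struct kmem_cache *request_cache;

static int __init dmu_request_cache_init(void)
{
	request_cache = kmem_cache_create("dmu_requests",
					  sizeof(struct userspace_request),
					  0, 0, NULL);
	return request_cache ? 0 : -ENOMEM;
}

static struct userspace_request *alloc_request(gfp_t gfp)
{
	return kmem_cache_alloc(request_cache, gfp);
}

static void free_request(struct userspace_request *req)
{
	kmem_cache_free(request_cache, req);
}

The hot paths then call alloc_request()/free_request() wherever they
currently kmalloc() and kfree() a userspace_request.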