Hi all,

I might be misunderstanding things, but... First of all, machines with long pipelines suffer badly from cache misses (the P4 in this case). Depending on the size copied (I don't know how large the copies are), can't one run out of cache lines and/or evict more useful cache data? That is, if the destination is cached from beginning to end, we generally only need 'some of' the beginning, and the CPU's prefetch should manage the rest.

As I said, I may not know all the details here, and I'm also running a fever, but I still find Hiro's data interesting. Isn't there some way to run the same test over the same time and measure the differences in overall performance, to see whether we really are punished that badly when accessing the data after the copy? (Could it be size dependent?) See the rough sketch below for the kind of test I mean.

-- 
Ian Kumlien <pomac () vapor ! com> -- http://pomac.netswarm.net
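
P.S. Something like this is what I'm getting at: a userspace-only sketch, not the actual kernel path. The buffer size, the SSE2 _mm_stream_si128() stores, and the timing method are just assumptions to illustrate the comparison, so take it with a grain of salt. It copies the same buffer once with a normal cached memcpy() and once with non-temporal stores, then times a read pass over the destination to see how much the post-copy access is punished.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <time.h>
#include <emmintrin.h>	/* SSE2: _mm_stream_si128(), _mm_sfence() */

#define BUF_SIZE (4 * 1024 * 1024)	/* assumed copy size, just a guess */

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* copy with non-temporal stores so the destination does not land in cache */
static void copy_nt(void *dst, const void *src, size_t len)
{
	__m128i *d = dst;
	const __m128i *s = src;
	size_t i;

	for (i = 0; i < len / sizeof(*d); i++)
		_mm_stream_si128(&d[i], _mm_loadu_si128(&s[i]));
	_mm_sfence();	/* flush the write-combining buffers */
}

/* time one pass over the destination: the "access the data post copy" cost */
static double time_read(const void *buf, size_t len)
{
	const volatile uint64_t *p = buf;
	uint64_t sum = 0;
	double t0 = now_sec();
	size_t i;

	for (i = 0; i < len / sizeof(*p); i++)
		sum += p[i];
	if (sum == 42)		/* keep the compiler from dropping the loop */
		putchar('.');
	return now_sec() - t0;
}

int main(void)
{
	void *src = aligned_alloc(16, BUF_SIZE);
	void *dst = aligned_alloc(16, BUF_SIZE);

	if (!src || !dst)
		return 1;
	memset(src, 0x5a, BUF_SIZE);

	memcpy(dst, src, BUF_SIZE);	/* ordinary cached copy */
	printf("read after memcpy:  %.3f ms\n", time_read(dst, BUF_SIZE) * 1e3);

	copy_nt(dst, src, BUF_SIZE);	/* non-temporal copy */
	printf("read after NT copy: %.3f ms\n", time_read(dst, BUF_SIZE) * 1e3);

	free(src);
	free(dst);
	return 0;
}

The interesting part would be running it for a few different sizes, since the answer could well be size dependent.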