On Thursday 15 February 2007 21:21, Carl Love wrote:
> I have done some quick measurements. The above method limits the loop
> to at most 2^16 iterations. Based on running the algorithm in user
> space, it takes about 3ms of computation time to do the loop 2^16 times.
>
> At the very least, we need to put in a resched, say, every 10,000
> iterations, which would be about every 0.5ms. Should we do a resched
> more often?
Yes, just to be on the safe side, I'd suggest to do it every 1000
iterations.
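As a rough illustration (not the actual patch; the table, the function names
and the feedback polynomial below are all placeholders), the loop could look
something like this, assuming 256 evenly spaced precomputed entries so that
at most 2^16 steps remain from the nearest entry, with a cond_resched()
every 1000 iterations:

#include <linux/kernel.h>
#include <linux/sched.h>

#define LFSR_TABLE_SIZE	256		/* evenly spaced precomputed entries */
#define LFSR_INTERVAL	(1 << 16)	/* spacing between the entries */

/* precomputed_lfsr[i] holds the LFSR state after i * LFSR_INTERVAL steps
 * (hypothetical table, filled once at init time). */
static u32 precomputed_lfsr[LFSR_TABLE_SIZE];

/* One step of the 24-bit LFSR; the taps used here are placeholders,
 * the real hardware polynomial has to go in instead. */
static u32 lfsr_step(u32 lfsr)
{
	u32 bit = ((lfsr >> 23) ^ (lfsr >> 4)) & 1;

	return ((lfsr << 1) | bit) & 0xffffff;
}

/* LFSR state after n steps: start from the nearest precomputed entry
 * below n and iterate the rest, rescheduling every 1000 iterations. */
static u32 lfsr_after(u32 n)
{
	u32 idx = min_t(u32, n / LFSR_INTERVAL, LFSR_TABLE_SIZE - 1);
	u32 remaining = n - idx * LFSR_INTERVAL;
	u32 lfsr = precomputed_lfsr[idx];
	u32 i;

	for (i = 0; i < remaining; i++) {
		lfsr = lfsr_step(lfsr);
		if ((i % 1000) == 999)
			cond_resched();
	}
	return lfsr;
}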
> Additionally, we could up the size of the table to 512, which would reduce
> the maximum time to about 1.5ms. What do people think about increasing
> the table size?
No, that won't help too much. I'd say 256 or 128 entries is the most
we should have.
> As for using a logarithmic spacing of the precomputed values, this
> approach means that the space between the precomputed values at the high
> end would be much larger than 2^14, assuming 256 precomputed values.
> That means it could take much longer than 3ms to get the needed LFSR
> value for a large N. By evenly spacing the precomputed values, we can
> ensure that for all N it will take less than 3ms to get the value.
> Personally, I am more comfortable with a hard limit on the compute time
> than with a variable time that could get much bigger than the 1ms threshold
> that Arnd wants for resched. Any thoughts?
When using precomputed values on a logarithmic scale, I'd recommend
just rounding to the closest value and accepting the relative inaccuracy,
instead of using the precomputed value as the base and then calculating
from there.
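Something like the following sketch (again with hypothetical names, assuming
one precomputed entry per power of two of the step count): instead of
iterating onward from a table entry, just return the entry whose step count
is closest to n:

#include <linux/kernel.h>
#include <linux/bitops.h>	/* fls() */

/* log_lfsr[i] would hold the LFSR state after 2^i steps
 * (hypothetical table, one entry per bit of a 24-bit counter). */
static u32 log_lfsr[25];

/* Return the precomputed value whose step count is the power of two
 * closest to n, accepting the relative error instead of iterating. */
static u32 lfsr_approx(u32 n)
{
	unsigned int b;
	u32 lower, upper;

	if (n == 0)
		return log_lfsr[0];	/* treat 0 as 1 step for simplicity */

	b = fls(n);			/* 2^(b-1) <= n < 2^b */
	lower = 1u << (b - 1);
	upper = 1u << b;

	/* pick whichever power of two is nearer to n */
	return (n - lower < upper - n) ? log_lfsr[b - 1] : log_lfsr[b];
}

With one entry per power of two, the relative error in the step count is at
worst roughly a third, at the midpoint between two entries; a denser table
(several entries per octave) would shrink that if needed.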
Arnd <><