Re: [patch] sys_epoll_wait() timeout saga ...

On Sat, 24 Sep 2005, Willy Tarreau wrote:

Yes it can, and that's why I said that gcc should emit a warning when
comparing an int with something too large for an int. But I should have
forced the constant to be evaluated as long long. At the moment the
constant cannot overflow, but it can be so large that timeout/1000 will
never reach it, so the check never triggers. Example:
 MAX_SCHEDULE_TIMEOUT = LONG_MAX
 HZ = 250
 timeout = LONG_MAX - 1
 => timeout/1000 < MAX_SCHEDULE_TIMEOUT/HZ holds,
 but (timeout * HZ + 999) / 1000 will still overflow!
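
For illustration, here is a small standalone rendering of that case in
plain userspace C (same numbers as the example above; the multiplication
is done in unsigned math only so the wraparound stays well defined, since
in signed long it would be undefined behaviour):

/* Not kernel code: with HZ = 250 and timeout = LONG_MAX - 1, the check
 * timeout/1000 < MAX_SCHEDULE_TIMEOUT/HZ still passes, yet timeout * HZ
 * wraps, so the converted value collapses to 0 jiffies instead of an
 * almost-infinite timeout. */
#include <stdio.h>
#include <limits.h>

#define HZ                   250
#define MAX_SCHEDULE_TIMEOUT LONG_MAX

int main(void)
{
        long timeout = LONG_MAX - 1;
        long converted = (long)(((unsigned long)timeout * HZ + 999) / 1000);

        printf("check passes: %d\n", timeout / 1000 < MAX_SCHEDULE_TIMEOUT / HZ);
        printf("converted to: %ld jiffies\n", converted);
        return 0;
}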

So I finally think that the safest test would be to exclude the timeout
range which can overflow in the computation, using something like this
(which, however, will limit the timeout to about 49 days on HZ=1000
machines):

+       jtimeout = timeout < 0 ||
+                  timeout >= (1000ULL * MAX_SCHEDULE_TIMEOUT / HZ) ||
+                  timeout >= (LONG_MAX / HZ - 1000) ?
+                  MAX_SCHEDULE_TIMEOUT : (timeout * HZ + 999) / 1000;

As both bounds are compile-time constants, the comparisons can be
optimized. Otherwise, we can resort to a MAX() macro to reduce this to a
single test which catches all the corner cases.
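
For comparison, dropping the guarded form above into the same little
harness (HZ = 250, MAX_SCHEDULE_TIMEOUT = LONG_MAX as before; the helper
name is invented for the illustration, it is not from an actual patch):

/* Guarded conversion from the fragment above: LONG_MAX - 1 is now caught
 * by the extra bounds and clamped to MAX_SCHEDULE_TIMEOUT instead of
 * wrapping to 0 jiffies; in-range values such as 1000 ms still convert
 * normally (to 250 jiffies at HZ = 250). */
static long ep_ms_to_jiffies(long timeout)
{
        return (timeout < 0 ||
                timeout >= (1000ULL * MAX_SCHEDULE_TIMEOUT / HZ) ||
                timeout >= LONG_MAX / HZ - 1000) ?
                MAX_SCHEDULE_TIMEOUT : (timeout * HZ + 999) / 1000;
}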

Using the MIN() macro would be better so we have a single check, and the
compiler optimizes that automatically. Or we can force 'timeout * HZ' to
use ULL math. I don't think it makes a lot of difference for something
that is in a likely sleep path ;)
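
A rough sketch of that alternative, assuming epoll's timeout stays the
int number of milliseconds the syscall takes today (the helper name is
again invented; an in-kernel version would more likely use do_div() and
min_t() for the 64-bit divide and the clamp):

/* "Force ULL math" variant: for a 32-bit millisecond argument the 64-bit
 * product cannot wrap, so a single clamp against MAX_SCHEDULE_TIMEOUT
 * replaces the two range checks.  Plain C so the sketch stands alone. */
static long ep_timeout_to_jiffies(int timeout)
{
        unsigned long long t;

        if (timeout < 0)
                return MAX_SCHEDULE_TIMEOUT;    /* negative means wait forever */

        t = ((unsigned long long)timeout * HZ + 999) / 1000;
        return t < MAX_SCHEDULE_TIMEOUT ? (long)t : MAX_SCHEDULE_TIMEOUT;
}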


- Davide


