Shailabh Nagar wrote:
>Jay Lan wrote:
>
>
>>>My results confirm the high overhead at these exit rates. In fact,
>>>on the system I used, I see 649% overhead for the 2200 exits/second case
>>>(even higher than yours), but the point is whether that exit rate
>>>is a valid design criterion.
>>>
>>>
>>Agreed. That is indeed the deciding factor. The exit rate in the lab
>>does not help answer this question. I need input from the field.
>>
>>
>
>FWIW, I just spoke to some of the IBM folks working on Websphere (the
>J2EE platform) and they've said that the exit rate is quite low since a thread pool
>is used to reuse threads rather than having them exit. Also, I'm waiting to
>hear from our db2 folks, though I suspect it's the same story there.
>
Pardon my lack of knowledge about the 'thread pool' tool.
Is that a GPL tool? It sounds like a great tool.
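
(For reference, a "thread pool" is a programming pattern rather than a
standalone tool: a fixed set of worker threads is created once and reused
for many jobs, so the threads themselves rarely exit. Below is a minimal,
hypothetical pthread sketch of the idea -- none of the names in it come
from Websphere or db2, and a shared counter stands in for a real work
queue.)

#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define NJOBS    32

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_job;		/* index of the next job to hand out */

static void handle_job(int job)
{
	/* placeholder for the real per-request work */
	printf("worker %lu handled job %d\n",
	       (unsigned long) pthread_self(), job);
}

static void *worker(void *arg)
{
	for (;;) {
		int job;

		pthread_mutex_lock(&lock);
		job = (next_job < NJOBS) ? next_job++ : -1;
		pthread_mutex_unlock(&lock);

		if (job < 0)
			break;	/* no work left: only now does the thread exit */
		handle_job(job);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NWORKERS];
	int i;

	for (i = 0; i < NWORKERS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NWORKERS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}
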
>
>>>>And, the per-thread group processing also increases the rate of ENOBUFS
>>>>at the receiver.
>>>>
>>>>
>>>Could you quantify that, please? Also, please list the exit rate at which
>>>this happens.
>>>
>>>
>>I have not posted it nor quantified it because I must bring down the error
>>count, or we (CSA) have to explore a different way. So any comparison
>>of these numbers at this point does not really help. Again, if the exit rate
>>is unrealistic, then I need to run a different set of tests.
>>
>
>
>
>>What
>>sleep_factor did you use?
>>
>
>Each thread executed the following code:
>
>void *slow_exit(void *arg)
>{
>	long i = (long) arg;	/* thread index passed in via the arg pointer */
>
>	usleep((n - i) * 200);
>	return NULL;
>}
>
>and I varied the number between
>700 (resulting in an exit rate of 694 in my data)
>and 100 (resulting in the 2283 exit rate).
>
>
>
>>Are those printf() calls in your new test program
>>essential?
>>
>
>No. I dropped them.
>The test program used is appended below. There were no
>printfs on the non-failure paths.
>
>
>
>>If this type of exit rate can happen even once a day, the surge may cause
>>loss of accounting data for other processes. Again, I do not have data
>>to say either way yet. But I would rather spend time working on
>>the ENOBUFS error than running all these different tests to argue over the
>>per-TG switch.
>>
>>
>
>I suppose the ENOBUFS case has to be handled in userspace anyway,
>since it can potentially happen at high thread exit rates even if
>only pid data is sent.
>
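For what it's worth, here is a rough sketch of how I would expect a
userspace listener to cope with ENOBUFS -- this is only an assumption on
my part, not the actual CSA daemon, and the buffer size and helper names
are made up: note the overrun, optionally enlarge the socket receive
buffer, and keep reading rather than aborting.

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/netlink.h>

static unsigned long overruns;	/* times the kernel reported dropped data */

/* try to grow the socket receive buffer; the size used here is arbitrary */
static void bump_rcvbuf(int sock, int bytes)
{
	setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}

/* read loop: returns only on a hard error */
static void recv_loop(int sock)
{
	char buf[8192];

	for (;;) {
		ssize_t len = recv(sock, buf, sizeof(buf), 0);

		if (len < 0) {
			if (errno == EINTR)
				continue;
			if (errno == ENOBUFS) {
				/* kernel dropped messages: note the overrun and keep going */
				overruns++;
				fprintf(stderr, "overrun #%lu, some records lost\n", overruns);
				bump_rcvbuf(sock, 1 << 20);
				continue;
			}
			perror("recv");
			return;
		}
		/* parse the netlink messages in buf[0..len) here */
	}
}

int main(void)
{
	int sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);

	if (sock < 0) {
		perror("socket");
		return 1;
	}
	/* binding and subscribing to the accounting family is omitted here */
	recv_loop(sock);
	return 0;
}
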
I will rerun my tests, trying to bring the exit rate down to around 800,
and see what happens.
Thanks!
- jay
>
>>Regards,
>> - jay
>>
>>
>>
>>
>
>
>#include <stdio.h>
>#include <stdlib.h>
>#include <sys/types.h>
>#include <unistd.h>
>#include <pthread.h>
>
>int n;			/* number of threads per iteration */
>
>/*
> * Each thread sleeps for a period that depends on its index, so the
> * threads exit spread out over time rather than all at once.
> */
>void *slow_exit(void *arg)
>{
>	long i = (long) arg;
>
>	usleep((n - i) * 600);
>	return NULL;
>}
>
>int main(int argc, char *argv[])
>{
>	long i;
>	int rc, rep;
>	pthread_t *ppthread;
>
>	n = 5;
>	if (argc > 1)
>		n = atoi(argv[1]);
>
>	rep = 10;
>	if (argc > 2)
>		rep = atoi(argv[2]);
>
>	ppthread = malloc(n * sizeof(pthread_t));
>	if (ppthread == NULL) {
>		printf("Memory allocation failure\n");
>		exit(-1);
>	}
>
>	/* repeatedly create n threads and wait for all of them to exit */
>	while (rep) {
>		for (i = 0; i < n; i++) {
>			rc = pthread_create(&ppthread[i], NULL,
>					slow_exit, (void *) i);
>			if (rc) {
>				printf("Error creating thread %ld %d\n", i, rc);
>				exit(-1);
>			}
>		}
>		for (i = 0; i < n; i++) {
>			rc = pthread_join(ppthread[i], NULL);
>			if (rc) {
>				printf("Error joining thread %ld\n", i);
>				exit(-1);
>			}
>		}
>		rep--;
>	}
>	free(ppthread);
>	return 0;
>}
>
>