Re: F12/13: Ndiswrapper, topdog Wifi, Thunderbird & network degradation...


 



On 08/29/2010 10:55 AM, Daniel B. Thurman wrote:
> I have a Gateway M-6750 laptop which sports a
> Marvell TopDog wifi chip.  I obtained the
> latest drivers for this chip and used ndiswrapper
> to load the XP/Vista TopDog driver. At first it
> appears to be working... although crash messages
> show up in the log files, other than that it
> keeps on working.
>
> I can use Firefox, and Pidgin and the network
> appears to work fine.
>
> But the minute I start Thunderbird, wifi/network
> performance is severely degraded as TB tries to
> sync the IMAP server data with its local storage.
> It knocked out Pidgin, gkrellm showed Xorg hitting
> the CPU hard, Nautilus froze for a time, and gnome
> terminal blocked text entry; basically, everything
> appeared erratic.  But after several minutes, once
> TB settled down and finished its tasks, the system
> seemed to return to some sense of normalcy - if
> only slightly better.
>
> While this was going on, I pinged a local server
> to get some sense of what was happening on the
> network, since I could not think of a better
> diagnostic test.  The summary is shown below, but
> watching the output line by line, there were many
> complete stalls of several seconds before the
> next line appeared.
>
> 42 packets transmitted, 41 received, 2% packet loss, \
>     time 41873ms
> rtt min/avg/max/mdev = 0.754/359.603/2005.013/581.445 \
>     ms, pipe 3
>
> Looking at the line-by-line data, the highest
> delay was 4000ms during that short test.
>
> Is there anything I can do to see why the network
> is degraded, and whether or not it is related to
> ndiswrapper?  Over wired LAN there is no degradation.
>   

Here is some additional data:

Wifi pings:
64 bytes from 10.1.0.100: icmp_seq=806 ttl=128 time=0.824 ms
64 bytes from 10.1.0.100: icmp_seq=807 ttl=128 time=2127 ms
64 bytes from 10.1.0.100: icmp_seq=808 ttl=128 time=1127 ms
64 bytes from 10.1.0.100: icmp_seq=809 ttl=128 time=127 ms
64 bytes from 10.1.0.100: icmp_seq=810 ttl=128 time=11999 ms
64 bytes from 10.1.0.100: icmp_seq=811 ttl=128 time=11000 ms
64 bytes from 10.1.0.100: icmp_seq=812 ttl=128 time=10000 ms
64 bytes from 10.1.0.100: icmp_seq=813 ttl=128 time=9000 ms
64 bytes from 10.1.0.100: icmp_seq=814 ttl=128 time=8000 ms
64 bytes from 10.1.0.100: icmp_seq=815 ttl=128 time=7000 ms
64 bytes from 10.1.0.100: icmp_seq=816 ttl=128 time=6000 ms
64 bytes from 10.1.0.100: icmp_seq=817 ttl=128 time=5000 ms
64 bytes from 10.1.0.100: icmp_seq=818 ttl=128 time=4000 ms
64 bytes from 10.1.0.100: icmp_seq=819 ttl=128 time=3000 ms
64 bytes from 10.1.0.100: icmp_seq=820 ttl=128 time=2000 ms
64 bytes from 10.1.0.100: icmp_seq=822 ttl=128 time=2.56 ms
64 bytes from 10.1.0.100: icmp_seq=823 ttl=128 time=2002 ms
64 bytes from 10.1.0.100: icmp_seq=824 ttl=128 time=1002 ms
64 bytes from 10.1.0.100: icmp_seq=825 ttl=128 time=1.46 ms


Wired pings:
64 bytes from 10.1.0.100: icmp_seq=1025 ttl=128 time=0.239 ms
64 bytes from 10.1.0.100: icmp_seq=1026 ttl=128 time=0.233 ms
64 bytes from 10.1.0.100: icmp_seq=1027 ttl=128 time=0.252 ms
64 bytes from 10.1.0.100: icmp_seq=1028 ttl=128 time=0.214 ms
64 bytes from 10.1.0.100: icmp_seq=1029 ttl=128 time=0.233 ms
64 bytes from 10.1.0.100: icmp_seq=1030 ttl=128 time=0.248 ms
64 bytes from 10.1.0.100: icmp_seq=1031 ttl=128 time=0.231 ms
64 bytes from 10.1.0.100: icmp_seq=1032 ttl=128 time=0.200 ms
64 bytes from 10.1.0.100: icmp_seq=1033 ttl=128 time=0.224 ms
64 bytes from 10.1.0.100: icmp_seq=1034 ttl=128 time=0.240 ms
64 bytes from 10.1.0.100: icmp_seq=1035 ttl=128 time=0.202 ms
64 bytes from 10.1.0.100: icmp_seq=1036 ttl=128 time=0.241 ms
64 bytes from 10.1.0.100: icmp_seq=1037 ttl=128 time=0.222 ms
64 bytes from 10.1.0.100: icmp_seq=1038 ttl=128 time=0.235 ms
64 bytes from 10.1.0.100: icmp_seq=1039 ttl=128 time=0.249 ms
64 bytes from 10.1.0.100: icmp_seq=1040 ttl=128 time=0.230 ms
64 bytes from 10.1.0.100: icmp_seq=1041 ttl=128 time=0.239 ms
64 bytes from 10.1.0.100: icmp_seq=1042 ttl=128 time=0.246 ms
64 bytes from 10.1.0.100: icmp_seq=1043 ttl=128 time=0.251 ms

So it appears quite clear that under heavy load the
ndiswrapper/TopDog combination limps along badly,
while the wired connection has no problems at all...
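For anyone who wants to quantify those stalls instead of eyeballing the output, here is a minimal sketch (plain Python, nothing wifi-specific; the regex just matches standard Linux ping reply lines) that summarizes RTTs and flags dropped sequence numbers, like the missing icmp_seq=821 gap in the wifi trace above:

```python
import re

def summarize_pings(lines):
    """Parse ping reply lines, collect RTTs (ms), and spot icmp_seq gaps."""
    rtts = []
    seqs = []
    for line in lines:
        m = re.search(r"icmp_seq=(\d+) ttl=\d+ time=([\d.]+) ms", line)
        if m:
            seqs.append(int(m.group(1)))
            rtts.append(float(m.group(2)))
    if not rtts:
        return None
    # Sequence numbers missing between the first and last reply are lost packets.
    lost = sorted(set(range(seqs[0], seqs[-1] + 1)) - set(seqs))
    return {
        "received": len(rtts),
        "lost_seqs": lost,
        "min": min(rtts),
        "avg": round(sum(rtts) / len(rtts), 3),
        "max": max(rtts),
    }

sample = [
    "64 bytes from 10.1.0.100: icmp_seq=820 ttl=128 time=2000 ms",
    "64 bytes from 10.1.0.100: icmp_seq=822 ttl=128 time=2.56 ms",
    "64 bytes from 10.1.0.100: icmp_seq=823 ttl=128 time=2002 ms",
]
print(summarize_pings(sample))
```

Feed it the captured output (e.g. `ping 10.1.0.100 | tee ping.log`, then read `ping.log`) and it will report the same min/avg/max that ping prints in its summary, plus exactly which sequence numbers went missing.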

Well, if anyone has any idea how I can get the TopDog wifi
chip performing better, please let me know.

Thanks!

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines

