I am running one of our 3U appliances with dual 9500 Series 3Ware
controllers. The unit is an online demo system, publicly accessible from
the internet via SSH, used for Solera Networks Linux appliance demos
running the DSFS file system:
(ncurses)
demo.soleranetworks.com
Account:  demo
Password: demo

(text ncurses)
demo.soleranetworks.com
Account:  demo-text
Password: demo
I have allocated 393,216 bio buffers that I statically maintain in a
chain, and I am running the DSFS file system with 3 gigabit links fully
saturated. Meta-data increases the write rate to 720 MB/second on dual
9500 controllers with 8 drives each (16 7200 RPM drives in total). I am
seeing some congestion and bursting on the bio chains as they are
submitted. I am not aware of anyone pushing 2.6 to these limits at
present with this type of architecture. I have split the address space
3GB/1GB (3 GB kernel / 1 GB user) in order to create enough memory to
run this file system with 2 GB of cache.
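To make the bio chain arrangement a little more concrete, here is a rough
sketch of the sort of thing I mean. This is not the actual DSFS code; the
names (dsfs_bio_pool, DSFS_NR_BIOS, and so on) are made up for
illustration, and it assumes the 2.6-era block layer API with bios chained
through bi_next:

#include <linux/bio.h>
#include <linux/spinlock.h>

#define DSFS_NR_BIOS   393216          /* bios held in the static chain */

static struct bio *dsfs_bio_pool;      /* head of the free chain        */
static spinlock_t dsfs_bio_lock = SPIN_LOCK_UNLOCKED;

/* Build the chain once at mount time (cleanup on failure omitted). */
static int dsfs_bio_pool_init(void)
{
        int i;

        for (i = 0; i < DSFS_NR_BIOS; i++) {
                struct bio *bio = bio_alloc(GFP_KERNEL, 1);

                if (!bio)
                        return -ENOMEM;
                bio->bi_next = dsfs_bio_pool;   /* chain through bi_next */
                dsfs_bio_pool = bio;
        }
        return 0;
}

/* Pop a bio off the chain; the caller fills it in and submit_bio()s it. */
static struct bio *dsfs_bio_get(void)
{
        struct bio *bio;

        spin_lock(&dsfs_bio_lock);
        bio = dsfs_bio_pool;
        if (bio)
                dsfs_bio_pool = bio->bi_next;
        spin_unlock(&dsfs_bio_lock);
        return bio;
}

/* Push a bio back on the chain once its I/O has completed. */
static void dsfs_bio_put(struct bio *bio)
{
        spin_lock(&dsfs_bio_lock);
        bio->bi_next = dsfs_bio_pool;
        dsfs_bio_pool = bio;
        spin_unlock(&dsfs_bio_lock);
}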
DSFS dynamically generates HTML status files from within the file
system. When the system gets somewhat behind, I am seeing bursts > 1
GB/second, which exceeds the theoretical limit of the bus. I have a
timer function that runs every second and profiles the I/O throughput
created by DSFS from bio submissions and captured packets, and I am
asking whether there is clock skew at these data rates with the use of
the timer functions.

The system appears to be sustaining 1 GB/second throughput on dual
controllers. I have verified through data rates that the system is
sustaining 800 MB/second with these 1 GB/s bursts. I am curious whether
there is potentially timer skew at these higher rates, since I am having
a hard time accepting that I can push 1 GB/s through a bus rated at only
850 MB/s for DMA-based transfers.

The unit is accessible by the general public; since it is a demo unit,
we are unconcerned about folks getting on the system. Folks are welcome
to look, and if anyone has any thoughts on this, please let me know. I
am concerned that the timer intervals are not always ending exactly on
second boundaries, which would explain the higher reported numbers (see
the sketch below). Windows 2003 does not approach these performance
numbers, BTW, so Linux appears to win on raw performance for vertical
file system apps.
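Coming back to the timer question, the arithmetic that worries me is
this: if a tick actually lands 1.25 seconds after the previous one, then
1,000 MB written at a true 800 MB/second gets reported as 1 GB/second
when the code divides by an assumed one second. Below is a rough sketch
of how the sampler could normalize by the interval that actually elapsed
(in jiffies) instead. Again, this is not the real DSFS code;
dsfs_bytes_written, dsfs_stats_tick and friends are illustrative names,
and it assumes the 2.6-era timer API:

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <asm/atomic.h>

/* Bumped from the bio completion path as writes finish (assumed). */
extern atomic_t dsfs_bytes_written;

static struct timer_list dsfs_stats_timer;
static unsigned long dsfs_last_jiffies;

static void dsfs_stats_tick(unsigned long data)
{
        unsigned long now     = jiffies;
        unsigned long elapsed = now - dsfs_last_jiffies;  /* actual interval */
        unsigned long bytes   = atomic_read(&dsfs_bytes_written);
        unsigned long rate;

        atomic_set(&dsfs_bytes_written, 0);
        dsfs_last_jiffies = now;

        if (elapsed) {
                /*
                 * Divide by the jiffies that really passed, then scale to
                 * one second, so a late or early tick does not inflate
                 * the reported bytes/second figure.
                 */
                rate = (bytes / elapsed) * HZ;
                printk(KERN_INFO "dsfs: %lu bytes/sec\n", rate);
        }

        /* Re-arm for roughly one second from now. */
        mod_timer(&dsfs_stats_timer, now + HZ);
}

static void dsfs_stats_start(void)
{
        init_timer(&dsfs_stats_timer);
        dsfs_stats_timer.function = dsfs_stats_tick;
        dsfs_stats_timer.data     = 0;
        dsfs_last_jiffies         = jiffies;
        mod_timer(&dsfs_stats_timer, jiffies + HZ);
}

If the 1 GB/second bursts survive that kind of normalization, then the
numbers are real and the skew theory is out.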
The DSFS file system, mounted at /var/ftp, can be viewed at:
ftp://demo.soleranetworks.com/
Stats pages generated by DSFS:
capture stats:
ftp://demo.soleranetworks.com/stats/capture.html
storage stats:
ftp://demo.soleranetworks.com/stats/storage.html
dsfs cache stats:
ftp://demo.soleranetworks.com/stats/cache.html
network interface stats:
ftp://demo.soleranetworks.com/stats/network.html
virtual network interface maps:
ftp://demo.soleranetworks.com/stats/virtual.html
Jeff