Re: Fedora Core 2 Update: kernel-2.6.6-1.435

On Fri, 18 Jun 2004, Rodolfo J. Paiz wrote:
At 07:32 6/18/2004, Ed K. wrote:
when transferring many small files, http with persistent connections will
blow away an ftp mirror, any day any night.

That is why FTP's days are numbered. For small files, use HTTP; for large
files (ISOs), use BitTorrent. As a bonus, HTTP headers are very important
for caching hierarchies.


{corrections from anyone......?}

Just the comment that you seem to be focusing strongly on the typical use case of a single user downloading something large and in high demand, like an ISO. As an example, I have a small company which receives a single upload from each of 40 customers every month; they are sending us graphics to be published in a magazine. The uploads are usually 3 to 10 files, each ranging from 20 to 100MB, so on average we get 600MB from each customer.


Today we use FTP, based on my perception that FTP is significantly better at transferring large files and maximizing bandwidth. Using maximum bandwidth is also important, since most of the customers (by virtue of Murphy's law) wait until the very last minute before sending. :-) So HTTP is less efficient, you say, and BitTorrent is not applicable.

I'm left again with FTP, aren't I? Or scp, but honestly I don't even know which mechanism scp uses.


scp uses TCP, which gives it the same bandwidth arbitration that http
uses. If you experiment, I would expect the difference in average throughput between FTP/SCP/HTTP to be in the noise.
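
If you want to check that yourself, the crude way is simply to time the same
upload over each protocol. A minimal sketch, assuming a reachable test host,
an FTP incoming directory, and an HTTP server that accepts PUT (the hostnames
and paths here are hypothetical):

    # time the same large file over the three contenders
    time scp bigfile.bin user@host:/tmp/bigfile.bin
    time curl -T bigfile.bin ftp://user:pass@host/incoming/
    time curl -T bigfile.bin http://host/uploads/bigfile.bin

Run each a few times and average; on a LAN the per-protocol overhead mostly
vanishes behind TCP itself.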


You have just pointed out that generalities _always_ fail for some case.
HTTP is impractical for uploading files larger than 4MB: it requires additional
configuration of the servers, and of any caches that happen to be in the way.

As you know, ftp is insecure, but you may have secured the files some other way.
I would suggest using pure-ftpd; it has great anonymous-upload features. In fact, I use ftp upload too, but only because the client requires it.


So, to summarize: FTP is not significantly better at transferring large files
and maximizing bandwidth. It is just easy to use and well established. I don't
know whether FTP supports resuming a disconnected upload attempt (in other
words, I haven't seen it in all my reading). There might be a simple custom
solution that would help in the event of a disconnection and prove more
beneficial than using FTP.
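
One hedged sketch of such a custom solution: rsync over ssh keeps a partially
transferred file and continues where it stopped, so a dropped connection costs
only the unsent tail. The host and path below are made up:

    # resume-friendly upload: --partial keeps the half-sent file
    # so rerunning the same command continues instead of restarting
    rsync --partial --progress graphics-march.tar user@host:/incoming/

After a disconnect, the customer just reruns the same command.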


Just to give you some other methods: on my current project, I have to transfer
about 6GB of files, each one no larger than 100K, from one computer to the next.
FTP/SCP/HTTP are all slow for this. Using tar and mbuffer, I can transfer the
files at a sustained 11MB/s, which is the maximum rate the 100Mbit switch will
move the packets: 9 minutes 18 seconds for the whole set.
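
For anyone curious, here is a minimal sketch of that tar-over-mbuffer pipe. The
hostname, port, and paths are invented, and the network options (-I to listen,
-O to send) are the ones from mbuffer's man page:

    # on the receiving machine: listen on TCP port 8000 and unpack
    mbuffer -I 8000 | tar -xf - -C /dest

    # on the sending machine: stream the whole tree into the buffer
    tar -cf - -C /src . | mbuffer -O receiver:8000

mbuffer's job is to keep the pipe full while tar stalls on thousands of small
files, which is why this beats the per-file protocols here.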

Different problems, different solutions.

ed

Security on the internet is impossible without strong, open, and unhindered encryption.


