Accelerating File Transfer: Quicklinks
Overview | Part 1: Compression | Part 2: Minimizing Data Sets | Part 3: Multiple Streams | Part 4: Alternatives (like FileCatalyst!) to FTP/TCP
As discussed on Feb 23, one of the most common questions people ask is, "How do you accelerate file transfer? Are the files compressed or something like that?" The answer is that compression is only part of our offering, and an optional part at that. But that doesn't mean it isn't important. You just have to know exactly what compression can offer in terms of benefits, as well as be aware of what it cannot do.
In terms of acceleration, the benefit is quite simple: if there is less data to send, the transfer is "virtually" faster. A 10MB compressed file takes roughly half as long to send as the original 20MB file, even though the actual line speed is the same. This "virtual" speed gain depends on the compression ratio, i.e., how compressible the files are. Text-based files such as word processor documents tend to be more compressible than binary files such as executables; already-compressed filetypes such as JPG and MP3 cannot be compressed further at all.
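To see why file type matters so much, here is a minimal sketch using Python's standard zlib module (not FileCatalyst's own codec): repetitive text shrinks dramatically, while random bytes, which stand in for already-compressed formats like JPG or MP3, barely shrink at all.

```python
import os
import zlib

# Highly repetitive "text-like" data compresses very well...
text_like = b"the quick brown fox jumps over the lazy dog\n" * 1000

# ...while random bytes (a stand-in for JPG/MP3 content, which is
# already compressed) do not shrink -- zlib may even add slight overhead.
already_compressed = os.urandom(44 * 1000)

text_ratio = len(zlib.compress(text_like)) / len(text_like)
binary_ratio = len(zlib.compress(already_compressed)) / len(already_compressed)

print(f"text-like data:     {text_ratio:.1%} of original size")
print(f"random/binary data: {binary_ratio:.1%} of original size")
```

The first ratio lands in the low single digits of a percent; the second hovers at or slightly above 100%, which is exactly why compressing a JPG buys you nothing.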
A common way to use compression in file transfer is to compress (zip, rar, stuffit, etc.) each file before it is sent, and then decompress the file at the destination. This is not a bad method, and it can yield real results. But frankly, if this is all an acceleration solution does with compression, you are almost as well off using a script built in-house. Instead, an acceleration solution should build upon the concept of compression and add value to it. Here are a few of the things FileCatalyst does in terms of compression:
- on-the-fly compression: Rather than compressing each complete file separately, data is compressed at the block level. A block is compressed, sent, uncompressed, and appended to the file being built at the destination. By compressing on-the-fly there is no wait for the transfer to start, and when the last byte arrives the transfer is also finished—no waiting for a final decompression operation. The only caveat to this method is that it uses more CPU than scenarios not involving compression.
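The block-level idea above can be sketched with Python's streaming zlib objects. This is an illustration of the general technique, not FileCatalyst's implementation; the block size and helper names are assumptions.

```python
import zlib

def stream_blocks(data: bytes, block_size: int = 64 * 1024):
    """Compress fixed-size blocks as they are read, yielding each
    compressed chunk immediately -- no up-front whole-file archive."""
    comp = zlib.compressobj()
    for i in range(0, len(data), block_size):
        yield comp.compress(data[i:i + block_size])
    yield comp.flush()  # emit any buffered tail

def receive_blocks(chunks):
    """Decompress each chunk as it arrives and append it to the output,
    so the file is complete the moment the last byte lands."""
    decomp = zlib.decompressobj()
    out = bytearray()
    for chunk in chunks:
        out.extend(decomp.decompress(chunk))
    out.extend(decomp.flush())
    return bytes(out)

payload = b"example payload " * 10_000
assert receive_blocks(stream_blocks(payload)) == payload
```

Because each compressed chunk can be handed to the network as soon as it is produced, the sender never stalls on a big pre-compression step, at the cost of some extra CPU on both ends.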
- compress as single archive: This option becomes useful if you are transferring many small files. By creating just one archive for hundreds of files, you save a huge amount of set-up and tear-down. Saving these operations saves some virtual speed. But then there's raw speed: the nature of small files means that they often arrive at the destination before line speed is reached; by sending a single archive (ie. one larger file!), the transfer can ramp up to line speed. To maximize efficiency, FileCatalyst's progressive transfers also allow the file to start transferring even before the compression is finished.
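A minimal sketch of the single-archive approach, using Python's tarfile with a hypothetical set of 300 small files: the sender pays one set-up/tear-down for the whole batch instead of one per file.

```python
import io
import tarfile

# Hypothetical batch of many small files (name -> contents).
small_files = {f"reports/file_{i:03}.txt": b"line of data\n" * 5
               for i in range(300)}

# Pack everything into one compressed archive: a single transfer
# that can ramp up to line speed, instead of 300 tiny ones.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, data in small_files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
archive = buf.getvalue()

# At the destination, one unpack restores every file.
with tarfile.open(fileobj=io.BytesIO(archive), mode="r:gz") as tar:
    restored = {m.name: tar.extractfile(m).read() for m in tar.getmembers()}
assert restored == small_files
```

Note that this sketch builds the whole archive before sending; FileCatalyst's progressive transfers go further by starting the transfer before the archiving is finished.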
- algorithm options: although the default compression method will work in any circumstance, you may choose from a few different algorithm options that may work just a bit better with particular file types.
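To illustrate why algorithm choice matters (using Python's standard codecs as stand-ins, not FileCatalyst's actual options), the same input can land at noticeably different sizes under different algorithms, each with its own speed/ratio trade-off:

```python
import bz2
import lzma
import zlib

# Sample structured text; ratios will differ for other file types.
sample = b"<record id='1'><value>42</value></record>\n" * 2000

# Each algorithm trades CPU time for compression ratio differently,
# so the best pick can depend on the kind of file being sent.
sizes = {
    "zlib (deflate)": len(zlib.compress(sample)),
    "bz2":            len(bz2.compress(sample)),
    "lzma/xz":        len(lzma.compress(sample)),
}
for name, size in sizes.items():
    print(f"{name:15} {size:6} bytes ({size / len(sample):.2%})")
```

For highly repetitive data like this, all three shrink the input enormously; on mixed or binary data the ranking, and whether compression is worth the CPU at all, can change.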
There are some creative ways to use compression, with different FileCatalyst client applications/applets offering a number of ways to capitalize on the benefits of compression. These various options mean that you do not simply select "Use THIS kind of compression" globally throughout your application, but are free to pick and choose at a granular level, depending on the needs of any particular task. These options will be explored in later articles.
"Is your file accleration some sort of compression?" No, but compression sure can help!