Re: creating live virtual files by concatenation

On 2/25/06, Maciej Soltysiak <[email protected]> wrote:
> Hello!
>
> I have this idea about creating sort of a virtual file.
>
> Let us say I have three text files that contain javascript code:
> tooltip.js
> banner.js
> foo.js
>
> Now let us say I am creating sort of a virtual text file (code.js)
> that is a live-concatenation of these files:
> # concatenate tooltip.js banner.js foo.js code.js
>
> Note I am not talking about the cat(1) utility. I am thinking of
> code.js be always a live concatenated version of these three, so when
> I modify one file, the live-version is also modified.
>
> What purpose might I have? Network-related. Say, I have an HTML file
> that includes these three files in its code.
>
> When a browser downloads the HTML file it will then create three threads
> to download each of those javascript files.
>
> If I had a live-concatenated file, I could reference it in the HTML file
> so that the browser does not have to download three files but just one.
>

If that's what you want to accomplish, you can achieve it in several
ways without any filesystem support.

1. Simply run  $ cat tooltip.js banner.js foo.js > code.js  then include
code.js in your HTML document and remember to regenerate it whenever you
change one of the 3 files (or create a script that does it).
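The rebuild step from option 1 is easy to automate. Here is a minimal
sketch; the three demo files are created inline just so it runs
standalone (in practice they would already exist), and the file names
are simply the ones from the example:

```shell
#!/bin/sh
# Rebuild code.js from its parts, but only when one of them is newer.
cd "$(mktemp -d)"
printf '/* tooltip */\n' > tooltip.js
printf '/* banner */\n'  > banner.js
printf '/* foo */\n'     > foo.js

out=code.js
srcs="tooltip.js banner.js foo.js"

rebuild=0
if [ ! -e "$out" ]; then rebuild=1; fi
for f in $srcs; do
    # -nt: true if $f has a newer modification time than $out
    if [ "$f" -nt "$out" ]; then rebuild=1; fi
done

if [ "$rebuild" -eq 1 ]; then
    cat $srcs > "$out"
    echo "regenerated $out"
fi
```

Hook something like this into a Makefile or a commit hook and the
concatenated file stays effectively "live" without kernel help.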

2. Use Apache's mod_include (server-side includes).
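For option 2, a minimal sketch, assuming mod_include is loaded and the
directory permits Includes (the .shtml name and the file list are just
this example's; the AddType line is only needed if your browser cares
about the MIME type):

```apache
# httpd.conf or .htaccess fragment:
Options +Includes
AddOutputFilter INCLUDES .shtml
AddType text/javascript .shtml

# code.shtml -- assembled into one response on every request:
<!--#include file="tooltip.js" -->
<!--#include file="banner.js" -->
<!--#include file="foo.js" -->
```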

3. Use PHP, Perl, Python or whatever your scripting language of choice
is - here's an example in PHP:

<?php
// Send a JavaScript MIME type, then stream each source file in order.
header('Content-type: text/javascript');
readfile('tooltip.js');
readfile('banner.js');
readfile('foo.js');
?>

Save that as javascripts.php, then put this in your HTML document:

<script src="javascripts.php" language="javascript"
type="text/javascript"></script>


And there are other ways ...


> This would surely reduce network overhead of downloading the same amount
> of data but within just one connection, reduce resource usage on the client
> and possibly (depending on implementation) reduce the cost of accessing
> three individual files on the server.
>

Negligible, I'd say.


> I am CC'ing reiserfs-list because Reiser4 would seem to be the most
> robust filesystem that could have it done.
>
> Any thoughts about the idea itself?

Might be a cute little hack, but I don't think it's a very useful
feature, really.


> Would be nice if this idea could inspire some talented hackers here and there.
>
> Best Regards,
> Maciej
>


--
Jesper Juhl <[email protected]>
Don't top-post  http://www.catb.org/~esr/jargon/html/T/top-post.html
Plain text mails only, please      http://www.expita.com/nomime.html
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
