[webkit-dev] Making browsers faster: Resource Packages
jamesr at google.com
Tue Nov 17 17:53:34 PST 2009
On Tue, Nov 17, 2009 at 5:36 PM, Alexander Limi <limi at mozilla.com> wrote:
> (Adding in some of the people involved with Resource Packages earlier to
> this thread, so they can help me out. I'm just a lowly UI designer, so some
> of these questions have to be answered by people that know how browsers
> work. I'm just the messenger. Hope you don't mind, guys, and remember that
> webkit-dev requires you to sign up before you can post.)
> On Tue, Nov 17, 2009 at 2:44 PM, Peter Kasting <pkasting at google.com> wrote:
>> I have read the whole document, but I read it quickly, so please do point
>> out places where I've overlooked an obvious response.
> This is what everyone does, so no worries, happy to clarify. 95% of the
> "this is why this won't work" statements are actually answered by the
> article in some way. But I guess I shouldn't be surprised. :)
>> Reduced parallelism is a big concern of mine. Lots of sites make heavy
>> use of resource sharding across many hostnames to take advantage of more
>> simultaneous connections, which this defeats.
> If you package up everything in a single zip file, yes. Realistically, if
> you have a lot of resources, you'd want to spread them out over several
> files to increase parallelism. Also, there are usually resources that are
> page-specific (e.g. belong to the article being rendered). As with
> everything, there are possibilities to use this the wrong way, and packing
> up everything in one zip file will definitely affect parallelism. Don't do it.
If the contents are spread across N zip files then the browser still has to
download (at least part of) N files in order to see all the manifests before
it can start fetching other resources. The page-specific resources end up
getting blocked behind all of the manifest downloads. If resource bundles
are allowed to include other resource bundles (and I see nothing in the spec
about this), then each of the N downloads would have to be made serially
since the browser would have to check the manifest of each bundle to see if
it includes any of the remaining ones.
I think this line of concern would be lessened if the author could declare
the contents of the manifest in the HTML itself to avoid an extra download,
or give some sort of explicit signal to the browser that a given resource
was not in any resource bundle. The downside of this is that it increases
the HTML's size even more, which is a big loss on browsers that do not
support this proposal.
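To make the blocking decision concrete, here is a minimal Python sketch of how a browser could decide whether a given fetch must wait for a package. The manifest format assumed here (one relative path per line) and the function names are illustrative, not taken from the spec:

```python
# Sketch: given a manifest declared up front, decide whether a fetch can
# proceed immediately or must wait for the resource package to arrive.
# The plain-text "one path per line" manifest format is an assumption.

def parse_manifest(text: str) -> set[str]:
    """Parse a plain-text manifest into a set of packaged paths."""
    return {line.strip() for line in text.splitlines() if line.strip()}

def should_wait_for_package(url_path: str, manifest: set[str]) -> bool:
    """True if this fetch should block until the package is downloaded;
    anything not listed can be fetched in parallel as usual."""
    return url_path in manifest

manifest = parse_manifest("css/site.css\njs/app.js\nimg/logo.png\n")
print(should_wait_for_package("js/app.js", manifest))      # True
print(should_wait_for_package("img/photo1.jpg", manifest))  # False
```

The point of the explicit signal is the False case: the browser can start those requests without waiting for any package bytes.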
>> I am concerned about the instruction to prefer the packaged resources to
>> any separate resources. This seems to increase the maintenance burden, since
>> you can never incrementally override the contents of a package, but instead
>> have to repackage.
> This is something we could look at, of course. There are easy ways to
> invalidate the zip using ETags etc.
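For what it's worth, the ETag-based invalidation mentioned above is ordinary HTTP revalidation. A server-side sketch (hypothetical helper names; hashing the package bytes for the ETag is just one possible validator):

```python
# Sketch of ETag-based revalidation for a resource package: the browser
# sends If-None-Match with its cached ETag; the server answers 304 if the
# package is unchanged, so the zip is only re-downloaded after an update.
import hashlib

def etag_for(package_bytes: bytes) -> str:
    # A content hash works as a strong validator for a static package.
    return '"%s"' % hashlib.sha256(package_bytes).hexdigest()[:16]

def respond(package_bytes: bytes, if_none_match):
    """Return (status, body) for a conditional GET of the package."""
    current = etag_for(package_bytes)
    if if_none_match == current:
        return 304, b""           # cached zip is still valid
    return 200, package_bytes     # send the (possibly repackaged) zip

pkg = b"PK...zip bytes..."
status, _ = respond(pkg, etag_for(pkg))
print(status)  # 304
```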
>> If an author has resources only used on some pages, then he can either
>> make multiple packages (more maintenance burden and exacerbates the problem
>> above), or include everything in one package (may result in downloading
>> excessive resources for pages where clients don't need them).
> I don't think it's unreasonable to expect most big sites to have a
> core of resources they use everywhere. It's important not to try to put
> *everything* in resource packages, just the stuff that should be present
> everywhere (and the specialized thumbnail search result case I mentioned).
>> You note that SPDY has to be implemented by both UAs and web servers, but
>> conversely this proposal needs to be implemented by UAs and _authors_. I
>> would rather burden the guys writing Apache than the guys making webpages,
>> and I think if a technique is extremely useful, it's easier to get it
>> into Apache than into, say, 50% of the webpages out there.
> There's no damage if you don't do this as a web author. If you care enough
> to do CSS spriting and CSS/JS combining, this gives you an easier, faster
> solution.
> On Tue, Nov 17, 2009 at 3:00 PM, James Robinson <jamesr at google.com> wrote:
>> It seems like a browser will have to essentially stop rendering until
>> it has finished downloading the entire .zip and examined it.
> No. That's why the manifest is there, since it can be read early on, so the
> browser doesn't have to block.
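That only works if the manifest is physically the first entry in the zip. Assuming it is stored uncompressed up front, the first local file header can be parsed from the initial bytes of the stream, before the rest of the archive (or its central directory, which sits at the end) has arrived. A Python sketch, with made-up file names:

```python
import io, struct, zipfile

# Build a package in memory with the manifest stored first (uncompressed),
# so it is readable from the leading bytes alone.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("manifest.txt", "css/site.css\njs/app.js\n",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("css/site.css", "body { margin: 0 }")
data = buf.getvalue()

# Parse only the first local file header (30 bytes + name + extra), the
# way a browser could while the rest of the zip is still in flight.
sig, _, _, method, _, _, _, csize, _, nlen, elen = struct.unpack(
    "<4sHHHHHIIIHH", data[:30])
assert sig == b"PK\x03\x04" and method == zipfile.ZIP_STORED
name = data[30:30 + nlen].decode()
start = 30 + nlen + elen
manifest = data[start:start + csize].decode()
print(name)                   # manifest.txt
print(manifest.splitlines())  # ['css/site.css', 'js/app.js']
```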
> I see a lot of "I don't think this will work" or "I don't think this will
> be any faster" here. I guess I should get someone to help me create some
> reasonable benchmarks and show what the difference would be. Maybe Steve
> Souders or someone else that is better at this stuff than me can help out
> with some data.
Yes, actual numbers would be nice to have.
> On Tue, Nov 17, 2009 at 3:02 PM, Peter Kasting <pkasting at google.com> wrote:
>> I think mitigating this is why there are optional manifests. I agree that
>> if there's no manifest, this is really, really painful. I think manifests
>> should be made mandatory.
> The manifests *are* mandatory. Without a manifest, it won't do anything (the
> browser will just proceed to load the resources as usual), since anything
> else would block page rendering, which is not an option.
> On Tue, Nov 17, 2009 at 3:12 PM, Simon Fraser <simon.fraser at apple.com> wrote:
>> If you require a manifest, why not pick an archive format where there's a
>> TOC which is guaranteed to be at the head of the file, which the browser can
>> parse without having to wait for the entire file to download?
> If there are other formats that can a) be streamed and unpacked in a partial
> state, and b) are common enough that people will actually be able to use
> them, let me know.
> The tar format is sequential, and (I think) has the header first, but it
> doesn't do compression. If you add gzip to that, you can't partially unpack
> it, which will block page downloads. You could of course argue that using
> only tar (without gzip) could work, and I think we're open to supporting
> that, if those assumptions are correct; I haven't looked at the details of
> that format yet.
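For reference, Python's tarfile module can already walk a gzipped tar strictly sequentially: each 512-byte member header precedes that member's data, and the "r|gz" stream mode forbids seeking while decompressing the gzip layer incrementally. This is a quick way to check the streaming assumption (the file names here are made up):

```python
import io, tarfile

# Build a small .tar.gz in memory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as t:
    for name, body in [("manifest.txt", b"css/site.css\n"),
                       ("css/site.css", b"body { margin: 0 }")]:
        info = tarfile.TarInfo(name)
        info.size = len(body)
        t.addfile(info, io.BytesIO(body))

# Re-read it in streaming mode: "|" means no seeking, so members are
# handed out front to back, exactly as the bytes arrive off the wire.
names = []
with tarfile.open(fileobj=io.BytesIO(buf.getvalue()), mode="r|gz") as t:
    for member in t:          # members arrive in archive order
        names.append(member.name)
print(names)  # ['manifest.txt', 'css/site.css']
```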
> Alexander Limi · Firefox User Experience · http://limi.net