[webkit-dev] size_t vs unsigned in WTF::Vector API ?

Alexey Proskuryakov ap at webkit.org
Sun Nov 30 23:30:12 PST 2014


> On Nov 24, 2014, at 1:28, Antti Koivisto <koivisto at iki.fi> wrote:
> 
> I don't think this is really a 32-bit vs. 64-bit platform issue. The vast majority of 64-bit systems our users have (that is, iOS devices) can't use memory buffers sized anywhere near the 32-bit limit even in theory. Also, when using Vector's auto-grow capability (which is really the point of using a vector instead of just allocating a buffer), you need far more memory than the actual data size. Growing a Vector<uint8_t> beyond 4GB has a peak allocation of 9GB.
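
(For reference, if Vector expands capacity by roughly 25% on each reallocation, that 9GB figure follows directly: crossing the 4GB mark means copying a ~4GB buffer into a new ~5GB one, and both buffers are live during the copy, so the peak is roughly 4GB + 5GB ≈ 9GB.)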

The argument that we should support the same functionality across all devices is a pretty strong one. However, it's not as simple as it may sound.

1. The user impact is different. I haven't seen any reports of people trying this on iOS devices, so fixing the problem there is less important than fixing it on OS X, where people actually do encounter it in practice.

2. The relative cost of supporting large files on memory-constrained devices may be different (e.g. using off_t, as Maciej proposed, might be too big a performance hit). Once we know that cost, we can decide whether the ability to upload large files is worth it.

3. The scope of the work required is likely different too. I suspect that uploading such huge files from iOS Safari is currently essentially impossible for other reasons (we can discuss this in more detail offline).

Also, the argument against huge Vectors is a straw man. What I'm saying is that we already have a lot of code that needs to deal with large sizes, and this code is everywhere. It's not like we have a large-data world and a small-data world inside WebKit and can perform magnitude checks at the boundary. It's quite the opposite: everything works together, and the practical solution is to keep the checks isolated inside Vector.
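
To make "checks isolated inside Vector" concrete, here is a minimal sketch of the idea (hypothetical illustration only, not the actual WTF::Vector; the real growth policy, allocator and CRASH() machinery differ): the size type stays 32-bit, and the only code that has to reason about overflow is the growth path.

    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>
    #include <limits>

    // Sketch of a vector with a 32-bit size type. Callers just append();
    // all the overflow reasoning lives in grow(). realloc() is only valid
    // here because the sketch assumes a trivially copyable T.
    template<typename T>
    class SmallSizeVector {
    public:
        void append(const T& value)
        {
            if (m_size == m_capacity)
                grow(static_cast<uint64_t>(m_size) + 1);
            m_data[m_size++] = value;
        }

        uint32_t size() const { return m_size; }

    private:
        void grow(uint64_t minCapacity)
        {
            // The one centralized check: compute the new capacity in 64 bits,
            // then verify it fits both the 32-bit size type and a size_t byte
            // count. A real implementation would CRASH() rather than abort().
            uint64_t newCapacity = std::max<uint64_t>(minCapacity,
                static_cast<uint64_t>(m_capacity) + m_capacity / 4 + 1);
            if (newCapacity > std::numeric_limits<uint32_t>::max()
                || newCapacity > std::numeric_limits<size_t>::max() / sizeof(T))
                std::abort();
            T* newData = static_cast<T*>(std::realloc(m_data, static_cast<size_t>(newCapacity) * sizeof(T)));
            if (!newData)
                std::abort();
            m_data = newData;
            m_capacity = static_cast<uint32_t>(newCapacity);
        }

        T* m_data { nullptr };
        uint32_t m_size { 0 };
        uint32_t m_capacity { 0 };
    };

Client code never sees the boundary condition: it either gets a valid buffer or a deterministic failure, which is exactly the property that is hard to guarantee when every caller does its own checking.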

> Are there any examples of Vectors in the current code base where switching to 64-bit storage sizes would usefully fix an actual problem, even in the medium term? I don't think they exist. Cases like file uploads should stream the data or use some mapped-file-backed buffer type that is not a Vector.

With modern APIs that are now exposed to JS code, file uploads are no longer an isolated code path. The Blob is sliced, partially read into memory, and processed. It is almost certain that YouTube and Google Drive don't attempt to read huge slices, so 32-bit lengths in Vectors and even ArrayBuffers will probably work in practice. But it's not appropriate to expect every user of Vector to perform these checks, because failing to do them right will have serious consequences.
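
To illustrate the kind of mistake I mean when callers are expected to do the range checking themselves (purely hypothetical code, not something from the tree):

    #include <cstdint>
    #include <vector>

    // File and Blob lengths are naturally 64-bit. The moment one caller
    // forgets a range check, the length is silently truncated and the code
    // appears to work until someone feeds it a slice larger than 4GB.
    struct BlobSlice {
        uint64_t offset;
        uint64_t length;
    };

    void readSliceIntoBuffer(const BlobSlice& slice, std::vector<uint8_t>& buffer)
    {
        // Deliberate bug: implicit narrowing drops the high bits of
        // slice.length, so a 5GB slice becomes a 1GB buffer with no error
        // reported anywhere.
        unsigned length = slice.length;
        buffer.resize(length);
        // ... fill buffer from the underlying file ...
    }

A check inside the buffer type turns this from a silent truncation into an immediate, debuggable failure.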

- Alexey


