[Webkit-unassigned] [Bug 234920] ImageBitmap has poor performance on iOS

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Mon Jan 10 12:26:44 PST 2022


https://bugs.webkit.org/show_bug.cgi?id=234920

--- Comment #7 from Kimmo Kinnunen <kkinnunen at apple.com> ---
(In reply to Simon Taylor from comment #6)
> - Our code uses the WebGL API directly, but our users want to make use of
> WebGL engines (Three, Babylon, PlayCanvas etc) for their rendering. Engines
> often maintain a cache of underlying WebGL state to avoid unnecessarily
> resetting bits that are unchanged.

But this doesn't explain why you want to snapshot the video to an ImageBitmap and then use that ImageBitmap in two different contexts.

Currently you can snapshot the video into your processing context with just a texImage2D call.
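
For illustration, a minimal sketch of that direct path (the element and context names here are made up for the example, not taken from the bug):

const video = document.querySelector('video') as HTMLVideoElement;
const processingGl = document.createElement('canvas')
    .getContext('webgl') as WebGLRenderingContext;

const tex = processingGl.createTexture();
processingGl.bindTexture(processingGl.TEXTURE_2D, tex);
// Sample the current video frame straight into the texture;
// no ImageBitmap intermediate is involved.
processingGl.texImage2D(processingGl.TEXTURE_2D, 0, processingGl.RGBA,
                        processingGl.RGBA, processingGl.UNSIGNED_BYTE, video);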


> - On browsers that support OffscreenCanvas, our processing context can run
> on a worker. The frame is only needed on one context at a time, so can be
> transferred to the worker and back again for the renderer. Converting video
> -> ImageBitmap can only happen on the main thread, hence this seems the
> correct "intermediate" representation.

IIRC WebKit does not currently support OffscreenCanvas, so strictly speaking "converting the video to an ImageBitmap because you want to send it to an OffscreenCanvas" is not a valid reason yet?
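
(For browsers that do support OffscreenCanvas, the pattern being described is presumably something like the sketch below; the worker script name is made up.)

const worker = new Worker('processing-worker.js');

async function sendFrame(video: HTMLVideoElement) {
  // createImageBitmap for a video source runs on the main thread...
  const bitmap = await createImageBitmap(video);
  // ...but the result can be handed to the worker copy-free by
  // listing it as a transferable.
  worker.postMessage({ frame: bitmap }, [bitmap]);
}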

> > Currently in WebKit ImageBitmap is not implemented to be an optimisation
> > across multiple Context2D and WebGL elements.
> 
> Without wishing to sound rude - what's the intention of the current
> implementation then?

I think the main use-case is converting a blob to an image to be drawn in WebGL or Context2D?
As in, it's not feasible to convert a blob to an image any other way.
By contrast, it is already possible to convert a video element to a texture directly via texImage2D.
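
A minimal sketch of that blob use-case (the URL and context name are illustrative):

const gl = document.createElement('canvas')
    .getContext('webgl') as WebGLRenderingContext;

async function loadTexture(url: string): Promise<WebGLTexture> {
  const blob = await (await fetch(url)).blob();
  // The async decode is the step that has no direct alternative path.
  const bitmap = await createImageBitmap(blob);
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
  return tex;
}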

There's of course a notion that a general concept like ImageBitmap should work consistently with the different objects that serve similar purposes. However, as explained, the implementations are not perfect until they're made perfect, and making one implementation perfect most likely means that some other implementation elsewhere remains imperfect.

> I guess it's natural to see APIs how you want them to be, but for me it
> feels the intention of ImageBitmap is to keep hold of potentially-large,
> uncompressed images so they can be easily consumed in various places.

Sure, in the abstract it can be that. Currently WebKit is not there, though.

And since we are not there, I'm trying to understand the use-case, e.g. whether getting there is the only way to solve it.

> For me createImageBitmap means "please do any prep work / decoding / etc to
> get this source ready for efficient use in other web APIs - and off the main
> thread please, just let me know when it's ready".

Right. But from the WebGL perspective that's what the video element already is: for the simple case the prep work is already done, and it's efficient to use as-is.

From the WebGL perspective you can upload the same video element 1, 2, or 77 times into different contexts and textures, and it's going to be observably as fast as it is ever going to be.
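
In code, that claim looks something like this (the two contexts and the video element are assumed to exist already, e.g. as set up in the earlier sketch):

// Each call samples the current frame directly; repeating it per
// context/texture is as fast as the upload is ever going to be.
function uploadFrame(gl: WebGLRenderingContext,
                     video: HTMLVideoElement): WebGLTexture {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return tex;
}

const processingTex = uploadFrame(processingGl, video); // context 1
const renderTex = uploadFrame(renderGl, video);         // context 2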

> Right now it seems consuming
> an ImageData is sufficiently more costly than a JS-side ArrayBuffer of
> pixels, which felt pretty unexpected to me.

Yes. For various reasons, mostly that not all components are in the GPU process yet, an ImageBitmap is not equivalent to a buffer that could be mapped into the various GPU-based implementations (Context2D or WebGL). We're working on this part.

However, to prioritise the work, it would still be useful to understand whether a zero-overhead ImageBitmap is a must-have for implementing your feature, or just a nice-to-have.

I still do not understand this:
1) Uploading a video to a WebGL texture is fairly fast. Can it be used for your case or not?
2) In which concrete webby use-cases is it useful to have a handle to an ImageBitmap and use that handle twice?
3) In which concrete webby use-cases is it useful to have a handle to an ImageBitmap and use it in two different WebGL contexts?
4) In which concrete webby use-cases is it useful to have a handle to an ImageBitmap and use it in both a Context2D and a WebGL context?

I'm not listing these expecting you to answer them all; they are just the questions I use to understand the prioritisation across all the things that need fixing.
