[Webkit-unassigned] [Bug 136631] Buffer images on web process side
bugzilla-daemon at webkit.org
Mon Sep 8 09:21:44 PDT 2014
Darin Adler <darin at apple.com> changed:
What                           |Removed |Added
Attachment #237789 is obsolete |1       |0
--- Comment #3 from Darin Adler <darin at apple.com> 2014-09-08 09:21:46 PST ---
(From update of attachment 237789)
View in context: https://bugs.webkit.org/attachment.cgi?id=237789&action=review
> + , m_bufferingTimer(this, &NetworkResourceLoader::bufferingTimerFired)
I’m surprised we still only have the static member function version of this, not the std::function version, but I guess that’s where we still are!
> + // FIXME: In reality we always get -1.
Would be nice to have a slightly less mysterious version of this comment. What’s the right way to fix this, for example? Stop pretending we have the encoded data length?
Also it’s kind of a lame design to have an optional number where “I don’t know” is represented by -1. Seems a little old fashioned to me. Anyway, you’re not adding this, just commenting on it.
If this entire thing doesn’t work anyway, then why add m_bufferDataEncodedDataLength at all? All the code that manipulates it seems confusing.
> + if (encodedDataLength > 0)
> + m_bufferDataEncodedDataLength += encodedDataLength;
I don’t understand why it’s OK for m_bufferDataEncodedDataLength to just be wrong when encodedDataLength is -1. There doesn’t seem to be a flag here saying “m_bufferDataEncodedDataLength is meaningless”; instead we are using 0 as a magic number to mean that. It seems better to have an “unknown” value and set m_bufferDataEncodedDataLength to that if we ever get a buffer with a -1.
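To illustrate, here is a minimal sketch of that suggestion using std::optional, where nullopt is the explicit “unknown” state. The class and member names are invented for the example, not the names in the patch:

```cpp
#include <cstdint>
#include <optional>

// Hypothetical sketch: accumulate the encoded data length with std::optional
// so "unknown" is an explicit state rather than 0 or -1 acting as a magic number.
class EncodedLengthAccumulator {
public:
    void didReceiveData(int64_t encodedDataLength)
    {
        if (encodedDataLength < 0)
            m_total = std::nullopt; // -1 means unknown; the total is meaningless from now on
        else if (m_total)
            *m_total += encodedDataLength;
    }

    // nullopt means "we don't know"; callers then fall back explicitly.
    std::optional<int64_t> total() const { return m_total; }

private:
    std::optional<int64_t> m_total { 0 };
};
```

Once any chunk reports -1, the accumulator stays unknown, so no caller can accidentally treat a partial sum as the real length.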
> - sendBuffer(m_bufferedData.get(), m_bufferedData->size());
> + sendBuffer(m_bufferedData.get(), m_bufferDataEncodedDataLength ? m_bufferDataEncodedDataLength : m_bufferedData->size());
I find this confusing. If we are sending a buffer and we know its size, why would we ever use m_bufferDataEncodedDataLength instead? Would it even be correct to send that instead of the actual buffer size? Would it be helpful in some way? Change log does not shed any light on this.
> + if (m_maximumBufferingTime == 0_ms || m_maximumBufferingTime == std::chrono::milliseconds::max())
> + return;
I am not sure that either of these checks is needed.
I understand that 0ms is a special magic value to say that we should not buffer at all. But it seems that in that case we would not create m_bufferedData and we would not get here. So I’d think we could assert that instead of doing an early return. If we did want an early return, I would suggest checking m_bufferedData rather than checking m_maximumBufferingTime directly.
Do we need the max optimization? It seems sort of elegant to never call start on the timer, but is it important to optimize? Maybe to keep from putting unnecessary timers in the timer heap?
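As a sketch of what I mean, the scheduling decision could be a predicate where 0_ms is an asserted invariant rather than an early return, and only max() skips the timer. The function name is hypothetical:

```cpp
#include <cassert>
#include <chrono>

// Hypothetical predicate: should the buffering timer be scheduled?
// 0_ms should never reach this code (the buffer would not exist), so assert it;
// max() means "buffer until the load completes", so no timer is needed and we
// keep an unnecessary entry out of the timer heap.
static bool shouldScheduleBufferingTimer(std::chrono::milliseconds maximumBufferingTime)
{
    assert(maximumBufferingTime > std::chrono::milliseconds::zero());
    return maximumBufferingTime != std::chrono::milliseconds::max();
}
```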
> + size_t encodedDataLength = m_bufferDataEncodedDataLength ? m_bufferDataEncodedDataLength : m_bufferedData->size();
I don’t understand this rule here either. Same questions as above in the sendBuffer function.
I also wish we could share more code between this and NetworkResourceLoader::sendBuffer and its caller. Both are sending the same message with the same rule about encoded data length.
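One hypothetical shape for that sharing: put the fallback rule in a single helper that both send sites call, so the “use the accumulated encoded length if we have one, otherwise the buffer size” logic lives in one place. The helper name is invented:

```cpp
#include <cstddef>

// Hypothetical helper centralizing the fallback rule duplicated at both
// send sites: prefer the accumulated encoded data length when it is known
// (nonzero, per the patch's convention), otherwise use the buffer size.
static size_t effectiveEncodedDataLength(size_t bufferDataEncodedDataLength, size_t bufferSize)
{
    return bufferDataEncodedDataLength ? bufferDataEncodedDataLength : bufferSize;
}
```

Both callers would then send effectiveEncodedDataLength(m_bufferDataEncodedDataLength, m_bufferedData->size()), and any future change to the rule happens once.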
> + , maximumBufferingTime(false)
I think we want 0_ms here, since maximumBufferingTime is a time, not a boolean.