[Webkit-unassigned] [Bug 69834] [GStreamer] WebAudio AudioFileReader implementation

bugzilla-daemon at webkit.org
Thu Oct 13 01:03:15 PDT 2011


https://bugs.webkit.org/show_bug.cgi?id=69834

--- Comment #3 from Philippe Normand <pnormand at igalia.com>  2011-10-13 01:03:16 PST ---
(In reply to comment #2)
> (From update of attachment 110510 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=110510&action=review
> 
> Nice. Below are some initial comments. I have not yet been able to look at the way you are setting up your pipeline in detail, but perhaps sometime later you can walk me through it. I have tried not to Martinize too intensely!
> 
> > Source/WebCore/platform/audio/gstreamer/AudioDestinationGStreamer.cpp:27
> > +#include "AudioDestinationGStreamer.h"
> > +
> > +#include "AudioChannel.h"
> > +#include "AudioSourceProvider.h"
> > +#include "GOwnPtr.h"
> 
> This should be one block of headers.
> 

Hmm, nope! If I do that, the style-bot complains:
Source/WebCore/platform/audio/gstreamer/AudioDestinationGStreamer.cpp:23:  You should add a blank line after implementation file's own header.  [build/include_order] [4]
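
For reference, the layout the style checker wants is roughly this (just a sketch of the include-order rule, not the actual patch):

    // config.h first, then the file's own header, a blank line,
    // then every remaining header in one alphabetized block.
    #include "config.h"
    #include "AudioDestinationGStreamer.h"

    #include "AudioChannel.h"
    #include "AudioSourceProvider.h"
    #include "GOwnPtr.h"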

> > Source/WebCore/platform/audio/gstreamer/AudioDestinationGStreamer.cpp:38
> > +    return 44100.0;
> 
> Omit the final .0 here.
> 

Ok

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:35
> > +#include "AudioBus.h"
> > +#include "AudioFileReader.h"
> > +#include "GOwnPtr.h"
> > +#include "GRefPtr.h"
> > +
> > +#include <gio/gio.h>
> > +#include <gst/app/gstappsink.h>
> > +#include <gst/audio/multichannel.h>
> > +#include <gst/base/gstbytereader.h>
> > +#include <gst/pbutils/pbutils.h>
> 
> Clump these together.
> 

Ok

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:37
> > +#define WEBAUDIO_GST_CAPS_TEMPLATE "audio/x-raw-float, rate=(int)%d, channels=(int)%d, endianness=(int)%d, width=(int)32"
> 
> String constants like this are typically defined as static const char*.  Since it's only used in getGStreamerAudioCaps it would be better to have it in the body of that method too.
> 

Ok!

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:43
> > +#if G_BYTE_ORDER == G_LITTLE_ENDIAN
> > +#define _webaudio_gst_byte_reader_get_float gst_byte_reader_get_float32_le
> > +#else
> > +#define _webaudio_gst_byte_reader_get_float gst_byte_reader_get_float32_be
> > +#endif
> 
> Instead of doing it this way, I prefer something like:
> 
> static inline bool getFloatFromByteReader(GstByteReader *reader, float *val)
> {
> #if G_BYTE_ORDER == G_LITTLE_ENDIAN
> 
> #else
> 
> #endif
> }
> 

Looks cleaner indeed :)
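
Something like this, then (just a sketch filling in your skeleton; gst_byte_reader_get_float32_le/be both take a float* and return a gboolean):

    static inline bool getFloatFromByteReader(GstByteReader* reader, float* value)
    {
    #if G_BYTE_ORDER == G_LITTLE_ENDIAN
        return gst_byte_reader_get_float32_le(reader, value);
    #else
        return gst_byte_reader_get_float32_be(reader, value);
    #endif
    }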

> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:56
> > +    gpointer buf = g_malloc0(size);
> > +    float* current = (float*) buf;
> 
> I think you can avoid this allocation (see below). If not, please use the WTF allocator here instead of the GLib one and a C++ cast.
> 

Oh yes! It seems the allocation can indeed be avoided.

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:62
> > +    while (_webaudio_gst_byte_reader_get_float(byteReader, &value))
> > +        *current++ = (float) value;
> > +    gst_byte_reader_free(byteReader);
> 
> You shouldn't need to cast between float and gfloat. In fact, it might be better to use float here to begin with.
> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:69
> > +    AudioFileReader* reader = reinterpret_cast<AudioFileReader*>(userData);
> > +    reader->handleEndOfStream();
> 
> Might as well make this just one line. Alternatively, just accept an AudioFileReader as an argument. I'm almost certain you only need a static_cast here.
> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:75
> > +    AudioFileReader* reader = reinterpret_cast<AudioFileReader*>(userData);
> > +    return reader->handleBuffer(sink);
> 
> Ditto.
> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:81
> > +    AudioFileReader* reader = reinterpret_cast<AudioFileReader*>(userData);
> > +    return reader->handleMessage(message);
> 
> Ditto.
> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:88
> > +    AudioFileReader* reader = reinterpret_cast<AudioFileReader*>(userData);
> > +    reader->handleNewDeinterleavePad(pad);
> > +}
> 
> Ditto.
> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:94
> > +    AudioFileReader* reader = reinterpret_cast<AudioFileReader*>(userData);
> > +    reader->deinterleavePadsConfigured();
> > +}
> 
> Ditto.
> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:99
> > +    AudioFileReader* reader = reinterpret_cast<AudioFileReader*>(userData);
> > +    reader->plugDeinterleave(pad);
> 
> Ditto.
> 
> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:105
> > +    AudioFileReader* reader = reinterpret_cast<AudioFileReader*>(userData);
> > +    reader->start();
> 
> Ditto.
> 

Ok
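
For the record, each of these would collapse to something like the following (the callback name and signature here are only illustrative):

    static void onEndOfStreamCallback(GstAppSink*, gpointer userData)
    {
        static_cast<AudioFileReader*>(userData)->handleEndOfStream();
    }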

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:169
> > +    const GValue* value = gst_structure_get_value(structure, "channel-positions");
> 
> Might want to give this a more descriptive name like channelPositionsValue. I had to look back to figure out what it was.
> 

Yep :)

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:173
> > +    if (channels && value && gst_value_is_fixed(value)) {
> 
> Is this an error situation? You don't return GST_FLOW_ERROR in the else case.
> 

Hmm, you're right indeed; the else case should be there.
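
Something along these lines, I suppose (a sketch only, rewritten as an early return):

    if (!channels || !channelPositionsValue || !gst_value_is_fixed(channelPositionsValue))
        return GST_FLOW_ERROR;
    // ... deinterleave/append the buffer as before ...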

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:180
> > +            mergedBuffer = gst_buffer_join(m_bufferFrontLeft, buffer);
> > +            m_bufferFrontLeft = gst_buffer_ref(mergedBuffer);
> 
> Why not just do m_bufferFrontLeft = gst_buffer_ref(gst_buffer_join(m_bufferFrontLeft, buffer)); and dispense with mergedBuffer altogether?
> 

Yeah, why not ;)

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:215
> > +    gchar* padName = gst_pad_get_name(pad);
> 
> I recommend a GOwnPtr here.
> 

Ok
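
Right, i.e.:

    GOwnPtr<gchar> padName(gst_pad_get_name(pad));
    // Use padName.get() below; the string is g_free()'d automatically when it goes out of scope.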

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:233
> > +    gst_object_unref(GST_OBJECT(sinkPad));
> 
> Please use GRefPtr in these cases.
> 

Will do. I actually have a patch somewhere for GRefPtrGStreamer about this; I'll send a separate patch to unify the GstPad usage in the graphics/gstreamer code.
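
With that in place, the unref above would go away; something like this (a sketch, assuming the GRefPtr<GstPad>/adoptGRef() support from that other patch; "audioConvert" is just an illustrative element variable):

    GRefPtr<GstPad> sinkPad = adoptGRef(gst_element_get_static_pad(audioConvert, "sink"));
    gst_pad_link(pad, sinkPad.get());
    // No explicit gst_object_unref(): GRefPtr drops the reference when sinkPad goes out of scope.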

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:252
> > +    GstElement* audioconvert  = gst_element_factory_make("audioconvert", 0);
> > +    GstElement* audioresample = gst_element_factory_make("audioresample", 0);
> > +    GstElement* capsfilter = gst_element_factory_make("capsfilter", 0);
> > +    GstElement* deinterleave = gst_element_factory_make("deinterleave", "deinterleave");
> 
> Please use camelCase for these names.
> 

Hmm, OK; this is not what I usually do in other GStreamer code, though. Usually the convention is that the variable name matches the element name when possible.

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:258
> > +    g_object_set(capsfilter, "caps", getGStreamerAudioCaps(2, m_sampleRate), NULL);
> 
> Does gst_caps_from_string return a newly-allocated GstCaps? If so, isn't this a memory leak?
> 

Heh. Right, the caps needs to be unref'd. Same in handleNewDeinterleavePad().
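
Concretely:

    GstCaps* caps = getGStreamerAudioCaps(2, m_sampleRate);
    g_object_set(capsfilter, "caps", caps, NULL);
    gst_caps_unref(caps); // The capsfilter takes its own reference; release ours.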

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:313
> > +    GOwnPtr<GError> error;
> > +    gstInitialized = gst_init_check(0, 0, &error.outPtr());
> > +    if (!gstInitialized)
> > +        return nullptr;
> 
> Will this interact nicely with the version of this in MediaPlayerPrivateGStreamer.cpp? Nice use of nullptr!
> 

I believe so! I'll run some tests on a page containing both media elements and WebAudio code.

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:316
> > +    // properly dispatch its messages to our end.
> 
> Nit: dispatches
> 

Ok

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:319
> > +    m_context = g_main_context_new();
> > +    g_main_context_push_thread_default(m_context);
> > +    m_loop = g_main_loop_new(m_context, FALSE);
> 
> I guess this blocks the main loop. :(
> 

Well, yes indeed: createBus needs to block, and there's not much I can do about this, I'm afraid.
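
The blocking part boils down to this (a sketch; I'm assuming handleEndOfStream() and the error path are what end up calling g_main_loop_quit()):

    g_main_loop_run(m_loop); // Returns only once something calls g_main_loop_quit(m_loop).
    g_main_context_pop_thread_default(m_context);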

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:327
> > +    g_source_set_callback(timeoutSource, (GSourceFunc) enteredMainLoopCallback, this, 0);
> 
> Please use reinterpret_cast instead of a C style cast.
> 

Oh, right.
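
I.e.:

    g_source_set_callback(timeoutSource, reinterpret_cast<GSourceFunc>(enteredMainLoopCallback), this, 0);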

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:333
> > +    guint size = GST_BUFFER_SIZE(m_bufferFrontLeft);
> 
> We try not to use the simple GLib wrappers for C types that are local. Thus this can be unsigned.
> 

Ok.

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:344
> > +    float* bufferLeft = getGStreamerAudioData(m_bufferFrontLeft, size);
> > +    float* bufferRight = getGStreamerAudioData(m_bufferFrontRight, size);
> > +
> > +    memcpy(audioBus->channel(0)->data(), bufferLeft, size);
> > +    if (!mixToMono)
> > +        memcpy(audioBus->channel(1)->data(), bufferRight, size);
> > +
> 
> I think here you should pass audioBus->channel(0)->data() directly to getGStreamerAudioData and avoid a memory copy and duplicate allocation entirely.
> 

Very good point indeed! This should boost the performance a bit :)
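
Something like this, then (a sketch; copyGStreamerAudioData is a hypothetical replacement for getGStreamerAudioData that fills a caller-provided buffer, reusing the byte-reader helper from above):

    static void copyGStreamerAudioData(GstBuffer* buffer, float* destination, unsigned size)
    {
        GstByteReader* byteReader = gst_byte_reader_new(GST_BUFFER_DATA(buffer), size);
        float sample;
        while (getFloatFromByteReader(byteReader, &sample))
            *destination++ = sample;
        gst_byte_reader_free(byteReader);
    }

    // ... and then in createBus():
    copyGStreamerAudioData(m_bufferFrontLeft, audioBus->channel(0)->data(), size);
    if (!mixToMono)
        copyGStreamerAudioData(m_bufferFrontRight, audioBus->channel(1)->data(), size);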

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:360
> > +    AudioFileReader* reader = new AudioFileReader(filePath);
> > +    return reader->createBus(sampleRate, mixToMono);
> 
> Aren't you leaking the reader here? Why not just stack allocate it?
> 

Yeah, good point.
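
I.e. simply:

    AudioFileReader reader(filePath);
    return reader.createBus(sampleRate, mixToMono);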

> > Source/WebCore/platform/audio/gstreamer/AudioFileReaderGStreamer.cpp:366
> > +    AudioFileReader* reader = new AudioFileReader(data, dataSize);
> > +    return reader->createBus(sampleRate, mixToMono);
> 
> Ditto. This factory seems to be unused currently...
> 

Well, createBusFromAudioFile is used in HRTFElevation at least, and createBusFromInMemoryAudioFile is called in the AudioBuffer code.

> > Source/WebCore/platform/audio/gtk/AudioBusGtk.cpp:37
> > +    return createBusFromAudioFile(absoluteFilename.get(), false, sampleRate);
> 
> You only use the createBusFromAudioFile in one place and it's just two lines. I think it's clearer to just put the code here, unless you plan to use it a lot in followup patches.

I have a follow-up patch about this, yes.
Anyway, createBusFromAudioFile is declared in AudioFileReader.h, so we need to implement it.

Moreover, I wanted to split the "Gtk" part from the GStreamer code. Later on, other ports using this WebAudio implementation will most likely only need a port-specific AudioBus implementation, apart from the WebSettings and DRT stuff.

-- 
Configure bugmail: https://bugs.webkit.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.


