[Webkit-unassigned] [Bug 184419] Write a script that detects chart changes by using v3 API.

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Sun Apr 29 17:23:51 PDT 2018


https://bugs.webkit.org/show_bug.cgi?id=184419

Ryosuke Niwa <rniwa at webkit.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
 Attachment #338979|review?                     |review-
              Flags|                            |

--- Comment #5 from Ryosuke Niwa <rniwa at webkit.org> ---
Comment on attachment 338979
  --> https://bugs.webkit.org/attachment.cgi?id=338979
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=338979&action=review

r- because there is a significant amount of restructuring of the code we need to do here.
Also r- because we definitely need tests for this new script.

> Websites/perf.webkit.org/public/v3/async-task.js:34
> +        return typeof module == 'undefined' && typeof window == 'undefined';

This doesn't make sense. "window" is always defined inside a browser (outside a WebWorker).
r- because of this bug. Also, we clearly need a browser test to make sure this function returns true inside a browser.
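Whatever the intended distinction is (Node vs. browser page vs. worker), checking for each environment explicitly is less fragile than relying on what happens to be undefined. A rough sketch only (names are made up):

// Sketch only: explicit environment checks instead of relying on module/window being undefined.
const isNode = typeof process !== 'undefined' && !!(process.versions && process.versions.node);
const isBrowserPage = typeof window !== 'undefined' && typeof document !== 'undefined';
const isWorker = typeof importScripts === 'function' && typeof window === 'undefined';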

> Websites/perf.webkit.org/tools/run-analysis.js:24
> +async function analysisLoop(options)

This function needs a try-catch so that the script doesn't stop running after encountering the first exception.
But we shouldn't need try-catch anywhere else in the codebase.
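i.e. the loop body should look roughly like this (sketch only; the sleep interval is a placeholder for whatever the script already uses):

async function analysisLoop(options)
{
    try {
        // ... fetch the manifest and run one round of analysis ...
    } catch (error) {
        console.error('Failed to complete the analysis:', error);
    }
    const sleepIntervalInMS = 60 * 1000; // Placeholder; use the interval the script already takes.
    setTimeout(() => analysisLoop(options), sleepIntervalInMS);
}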

> Websites/perf.webkit.org/tools/run-analysis.js:41
> +class Analyzer {

This is an extremely generic name... It doesn't even tell us what it analyzes.
How about MeasurementSetAnalyzer?

> Websites/perf.webkit.org/tools/run-analysis.js:45
> +        this._testingConfigurations = Analyzer.configurationsForTesting(manifest);

If we're exposing Analyzer.configurationsForTesting as a static method,
it would be cleaner for the caller (analysisLoop) to call this method,
and for Analyzer to simply take the result.
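i.e. roughly (sketch; uses the MeasurementSetAnalyzer rename suggested above, and the constructor shape is illustrative):

const configurations = MeasurementSetAnalyzer.configurationsForTesting(manifest);
const analyzer = new MeasurementSetAnalyzer(configurations, logger);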

> Websites/perf.webkit.org/tools/run-analysis.js:52
> +        this._logger.info("Start detecting.");

"Start detecting" is probably not the most descriptive log.
It's probably better to say "starting analysis" since we're gonna add more analysis features to this script.

> Websites/perf.webkit.org/tools/run-analysis.js:57
> +    static configurationsForTesting(manifest)

Please add a FIXME here to share this code with DashboardPage.

> Websites/perf.webkit.org/tools/run-analysis.js:82
> +    async analyzeTestConfiguration(platform, metric)

This function should probably take a measurement set instead and be renamed to analyzeMeasurementSet.

> Websites/perf.webkit.org/tools/run-analysis.js:84
> +        const measurementSet = MeasurementSet.findSet(platform.id(), metric.id(), platform.lastModified(metric));

Why isn't this done during configurationsForTesting?
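i.e. something like this (sketch; reuses the rename suggested above, and the body just hints at the traversal the patch already does):

static configurationsForTesting(manifest)
{
    const measurementSets = [];
    // ... walk the platforms & metrics as this patch already does, and for each pair:
    //     measurementSets.push(MeasurementSet.findSet(platform.id(), metric.id(), platform.lastModified(metric)));
    return measurementSets;
}

Then analyzeMeasurementSet() simply takes one of these sets.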

> Websites/perf.webkit.org/tools/run-analysis.js:85
> +        this._logger.info(`\n==== Analyzing the last ${this._changeDetectionConfigs.maxDays} days: "${metric.fullName()}" on "${platform.name()}" ====\n`);

It's kind of silly to log the number of days we're analyzing since that's identical for all platforms & metrics.
Why don't we spit this out when we log "start detecting"?

> Websites/perf.webkit.org/tools/run-analysis.js:86
> +        const endTime = Date.now();

This end time would be different for each platform/metric we fetch.
I think we should compute this once when we start the analysis so that they're consistent across platforms & metrics.
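e.g. (sketch; analyzeOnce and measurementSets are just illustrative names):

async analyzeOnce(measurementSets)
{
    const endTime = Date.now(); // Computed once so every platform & metric uses the same window.
    const startTime = endTime - this._changeDetectionConfigs.maxDays * 24 * 3600 * 1000;
    for (const measurementSet of measurementSets)
        await this.analyzeMeasurementSet(measurementSet, startTime, endTime);
}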

> Websites/perf.webkit.org/tools/run-analysis.js:93
> +        let segmentedValues = await measurementSet.fetchSegmentation('segmentTimeSeriesByMaximizingSchwarzCriterion', [], 'current', false);

Use const.

> Websites/perf.webkit.org/tools/run-analysis.js:100
> +        const ranges = Statistics.findRangesForChangeDetectionsWithWelchsTTest(rawValues, segmentedValues, 0.99).map((range) =>({

Either omit 0.99 or make it configurable.
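If we make it configurable, something like (sketch; the config key is made up):

const significance = this._changeDetectionConfigs.significance || 0.99; // Config key is illustrative.
const ranges = Statistics.findRangesForChangeDetectionsWithWelchsTTest(rawValues, segmentedValues, significance);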

> Websites/perf.webkit.org/tools/run-analysis.js:118
> +                const disjoint = range.endPoint.seriesIndex < taskStartPoint.seriesIndex

Nit: isDisjoint, but disjoint with respect to what?
The name should say what the range is disjoint from (e.g. the analysis task).

> Websites/perf.webkit.org/tools/run-analysis.js:122
> +                if (!disjoint)
> +                    range.overlappingAnalysisTasks.push(task);

This is silly. We're collecting the list of overlapping analysis tasks just to filter ranges in the next statement.
We should simply invert the nesting of these two loops and filter out overlapping ranges right away.
Also, we don't need to find starting & ending points to check whether they overlap;
simply call startTime() / endTime() on the task itself. i.e.
ranges.filter((range) => {
    let hasOverlappingTask = false;
    for (const task of analysisTasks) {
        const taskEndsBeforeRangeStart = task.endTime() < range.startPoint.time;
        const taskStartsAfterRangeEnd = range.endPoint.time < task.startTime();
        if (!(taskEndsBeforeRangeStart || taskStartsAfterRangeEnd))
            hasOverlappingTask = true;
    }
    return !hasOverlappingTask;
})

> Websites/perf.webkit.org/tools/run-analysis.js:128
> +        const filteredRanges = ranges.filter((range) => range.endPoint.time >= startTime && !range.overlappingAnalysisTasks.length)
> +            .sort((a, b) => a.endPoint.time - b.endPoint.time);

Sounds like we should do this filtering before filtering ranges based on overlapping tasks since the latter computation is a lot more expensive.
You can merge that into the filter code I wrote above; see the sketch below.
Also, there is no point in sorting them based on start/end time since we're picking the one with the largest change in the next loop.
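i.e. merged together, roughly (sketch):

const filteredRanges = ranges.filter((range) => {
    if (range.endPoint.time < startTime)
        return false; // Cheap time check first.
    return !analysisTasks.some((task) => {
        const taskEndsBeforeRangeStart = task.endTime() < range.startPoint.time;
        const taskStartsAfterRangeEnd = range.endPoint.time < task.startTime();
        return !(taskEndsBeforeRangeStart || taskStartsAfterRangeEnd);
    });
});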

> Websites/perf.webkit.org/tools/run-analysis.js:130
> +        let summary, rangeWithMostSignificantChange = null, valueChangeSummaryForRangeWithMostSignificantChange = null;

Nit: Don't declare multiple variables in a single statement like this.

> Websites/perf.webkit.org/tools/run-analysis.js:143
> +            const currentChangeIsMoreSignificant = !rangeWithMostSignificantChange
> +                || (valueChangeSummaryForRangeWithMostSignificantChange.changeType === valueChangeSummary.changeType
> +                    && Math.abs(valueChangeSummaryForRangeWithMostSignificantChange.relativeChange) < Math.abs(valueChangeSummary.relativeChange))
> +                || (valueChangeSummaryForRangeWithMostSignificantChange.changeType === progressionString
> +                    && valueChangeSummary.changeType === regressionString);

This is very complicated logic. Instead, we can:
1. remember whether there was any regression in the previous filtering step,
2. ignore all progressions if there were regressions, and
3. sort them by diff.
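i.e. roughly (sketch; rangesWithSummaries stands for whatever list the previous filtering step produces):

// Sketch: keep only regressions when any exist, then pick the largest relative change.
const hasRegression = rangesWithSummaries.some((entry) => entry.valueChangeSummary.changeType === regressionString);
const candidates = hasRegression
    ? rangesWithSummaries.filter((entry) => entry.valueChangeSummary.changeType === regressionString)
    : rangesWithSummaries;
candidates.sort((a, b) => Math.abs(b.valueChangeSummary.relativeChange) - Math.abs(a.valueChangeSummary.relativeChange));
const mostSignificantChange = candidates[0];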

But really, I'm not certain if we want to always prioritize regressions over progressions.
e.g. if we had 50% progression and 0.5% regression, we probably want to figure out where the progression occurred
since that probably means the test is broken somehow.

> Websites/perf.webkit.org/unit-tests/measurement-set-tests.js:44
> -            assert.equal(requests[0].url, '../data/measurement-set-1-1.json');
> +            assert.equal(requests[0].url, '/data/measurement-set-1-1.json');

Can we make this refactoring in a separate patch?
