[Webkit-unassigned] [Bug 184419] Write a script that detects chart changes by using v3 API.

bugzilla-daemon at webkit.org
Tue May 1 20:00:56 PDT 2018


https://bugs.webkit.org/show_bug.cgi?id=184419

Ryosuke Niwa <rniwa at webkit.org> changed:

           What          |Removed                     |Added
----------------------------------------------------------------------------
 Attachment #339255 Flags|review?                     |review+

--- Comment #10 from Ryosuke Niwa <rniwa at webkit.org> ---
Comment on attachment 339255
  --> https://bugs.webkit.org/attachment.cgi?id=339255
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=339255&action=review

> Websites/perf.webkit.org/browser-tests/time-series-chart-tests.js:6
> +        expect(context.symbols.AsyncTask.isAvailable()).to.be(true);

This has nothing to do with time-series-chart. Please create a new test file instead.
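
A sketch of the split-out file; the file name and the context setup here are
assumptions on my part, so reuse whatever the existing browser tests already do:

    // browser-tests/async-task-tests.js (hypothetical name)
    describe('AsyncTask', () => {
        it('isAvailable should return true', () => {
            // `context` would be set up the same way the other browser tests do it.
            expect(context.symbols.AsyncTask.isAvailable()).to.be(true);
        });
    });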

> Websites/perf.webkit.org/tools/run-analysis.js:49
> +class MeasurementSetAnalyzer {

This class should be moved to its own file inside tools/js/
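
i.e. something like this, assuming the usual pattern of exporting via
module.exports at the bottom of the file (the file name is a suggestion):

    // tools/js/measurement-set-analyzer.js
    class MeasurementSetAnalyzer {
        // ...body moved verbatim from run-analysis.js...
    }

    if (typeof module !== 'undefined')
        module.exports.MeasurementSetAnalyzer = MeasurementSetAnalyzer;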

> Websites/perf.webkit.org/tools/run-analysis.js:52
> +        this._changeDetectionConfigs = changeDetectionConfigs;

We shouldn't be storing the JSON we read from a file like this;
it makes it a lot harder to figure out what you can and can't specify in the configuration.
We should instead make MeasurementSetAnalyzer take each configuration value
as an argument, and store each one separately.
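
A hedged sketch of the shape this could take (parameter names and order are
my suggestions, not the final API):

    class MeasurementSetAnalyzer {
        constructor(measurementSets, startTime, endTime, logger)
        {
            this._measurementSets = measurementSets;
            this._startTime = startTime;
            this._endTime = endTime;
            this._logger = logger;
        }
    }

Anything that genuinely needs to stay tunable would become another explicit
parameter here instead of a field on the parsed JSON.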

> Websites/perf.webkit.org/tools/run-analysis.js:103
> +        const rawValues = currentTimeSeries.values();

We should exit early without any error or warning when rawValues is empty.
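
i.e.:

    const rawValues = currentTimeSeries.values();
    if (!rawValues.length)
        return; // An empty series just means there's nothing to analyze yet.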

> Websites/perf.webkit.org/tools/run-analysis.js:106
> +            this._logger.warn(`Failed fetching segmentations for ${metric.fullName()}" on "${platform.name()}"`);

Missing " after "for" and before the full metric name.

> Websites/perf.webkit.org/tools/run-analysis.js:113
> +            this._changeDetectionConfigs.tTestSignificance).map((range) =>({

I really don't think it makes sense to make this configurable.
We've almost never used any value other than 0.99.
We can make it configurable in the future when we need to.
Just omit the argument for now.
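
For illustration, a default parameter keeps 0.99 out of the configuration
entirely; the helper below is a hypothetical stand-in, not the dashboard's
actual API:

    function significantRanges(ranges, significance = 0.99) {
        return ranges.filter((range) => range.probability >= significance);
    }

    significantRanges([{probability: 0.995}, {probability: 0.9}]); // => [{probability: 0.995}]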

> Websites/perf.webkit.org/tools/run-analysis.js:136
> +        filteredRanges.forEach((range) => {

I think for (const range of filteredRanges) would probably read better
since the block here doesn't involve a return, etc...

> Websites/perf.webkit.org/tools/run-analysis.js:140
> +            // Take a square root on progression relative value, so that we would not favor regression so
> +            // much that we miss some huge progressions.

Instead of this comment, just give it a more descriptive name.

> Websites/perf.webkit.org/tools/run-analysis.js:141
> +            const weightedSignificance = range.valueChangeSummary.changeType === regressionString ?

e.g. weightFavoringRegression.
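
Something along these lines; relativeChange is a hypothetical stand-in for
whatever value the patch takes the square root of:

    const weightFavoringRegression = range.valueChangeSummary.changeType === regressionString
        ? relativeChange : Math.sqrt(relativeChange);

The name then carries the intent the comment used to explain.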

> Websites/perf.webkit.org/tools/run-analysis.js:159
> +        this._logger.info(`Creating analysis task and confirming: "${summary}".`);

We should probably log this after we've created the analysis task, along with its ID.
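
e.g., assuming AnalysisTask.create resolves with something exposing the new
task's ID (I haven't verified its return value against the patch):

    const task = await AnalysisTask.create(summary, rangeWithMostSignificantChange.startPoint,
        rangeWithMostSignificantChange.endPoint, 'Confirm', repeatCount);
    this._logger.info(`Created analysis task ${task.id()}: "${summary}".`);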

> Websites/perf.webkit.org/tools/run-analysis.js:161
> +        await AnalysisTask.create(summary, rangeWithMostSignificantChange.startPoint, rangeWithMostSignificantChange.endPoint,
> +            'Confirm', this._changeDetectionConfigs.confirmTaskRepeatCount);

I don't think it makes sense to have a single, shared iteration count.
If this is a temporary measure, it's better to hard-code the value here with a FIXME instead.
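
i.e. something like this; the count of 4 below is a placeholder, not a
recommendation:

    // FIXME: Make the number of confirming iterations configurable if we ever need other values.
    const confirmTaskRepeatCount = 4;
    await AnalysisTask.create(summary, rangeWithMostSignificantChange.startPoint,
        rangeWithMostSignificantChange.endPoint, 'Confirm', confirmTaskRepeatCount);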

> Websites/perf.webkit.org/unit-tests/measurement-set-analyzer-tests.js:15
> +    beforeEach(() => {
> +        PrivilegedAPI.configure('test', 'password');
> +    });

Why do we need to configure this before each test case?

> Websites/perf.webkit.org/unit-tests/measurement-set-analyzer-tests.js:20
> +            assert.equal(configurations.length, 0);

Use deepEqual with [] so that when this test fails, we'd see the contents.
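
That is:

    assert.deepEqual(configurations, []);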

> Websites/perf.webkit.org/unit-tests/measurement-set-analyzer-tests.js:74
> +        it('should not analyze and show a warning message if failed to fetch segnmentation', async () => {

Nit: segnmentation -> segmentation.
I don't think this test case is valid, however. When there are no values in the time series, we shouldn't emit any error or warning.
Consequently, we need another test case that explicitly makes the segmentation fetch fail.
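
A sketch of the missing case; the analyzer entry point and the mock setup are
assumptions on my part and should follow the other tests in this file:

    it('should log a warning and skip analysis when fetching segmentation fails', async () => {
        // Arrange a measurement set whose segmentation fetch rejects,
        // mirroring the mocking pattern used by the surrounding cases.
        await measurementSetAnalyzer.analyzeOnce(measurementSet);
        assert.deepEqual(logger.warn_logs,
            ['Failed fetching segmentations for "Some test : Some metric" on "Some platform"']);
    });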

> Websites/perf.webkit.org/unit-tests/measurement-set-analyzer-tests.js:95
> +            assert.deepEqual(logger.warn_logs, ['Failed fetching segmentations for Some test : Some metric" on "Some platform"']);

I don't think we should be emitting this error. When the graph is empty, it's expected that there won't be anything to investigate.
We should be successfully exiting without logging any errors or warnings.

> Websites/perf.webkit.org/unit-tests/measurement-set-analyzer-tests.js:104
> +            const measurementSetAnalyzer = new MeasurementSetAnalyzer(changeDetectionConfigs, [measurementSet], logger);
> +            measurementSetAnalyzer._startTime = 4000;
> +            measurementSetAnalyzer._endTime = 5000;

Modifying private instance variables like this isn't great.
It would mean that whenever we modify MeasurementSetAnalyzer, we have to update all these tests as well.
We should instead make startTime and endTime arguments to MeasurementSetAnalyzer's constructor.
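
With the constructor sketched earlier, the setup above would become:

    const measurementSetAnalyzer = new MeasurementSetAnalyzer([measurementSet], 4000, 5000, logger);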
